Binary files /tmp/tmp0y62i6qc/MMLPbmOUHR/ubuntu-advantage-tools-27.8~16.04.1/.assets/circle_of_friends.png and /tmp/tmp0y62i6qc/BX8h3lHibs/ubuntu-advantage-tools-27.9~16.04.1/.assets/circle_of_friends.png differ diff -Nru ubuntu-advantage-tools-27.8~16.04.1/contributing-docs/howtoguides/how_to_release_a_new_version_of_ua.md ubuntu-advantage-tools-27.9~16.04.1/contributing-docs/howtoguides/how_to_release_a_new_version_of_ua.md --- ubuntu-advantage-tools-27.8~16.04.1/contributing-docs/howtoguides/how_to_release_a_new_version_of_ua.md 1970-01-01 00:00:00.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/contributing-docs/howtoguides/how_to_release_a_new_version_of_ua.md 2022-05-18 19:44:15.000000000 +0000 @@ -0,0 +1,262 @@ +# Ubuntu Advantage Client Releases + +## Supported Ubuntu Releases + +See the table under "Support Matrix for the client" in the [readme](./README.md). + +## Release versioning schemes + +Below are the versioning schemes used for publishing debs: + +| Build target | Version Format | +| --------------------------------------------------------------------------------- | ------------------------------------------ | +| [Daily PPA](https://code.launchpad.net/~canonical-server/+recipe/ua-client-daily) | `XX.YY-~g~ubuntu22.04.1` | +| Staging PPA | `XX.YY~22.04.1~rc1` | +| Stable PPA | `XX.YY~22.04.1~stableppa1` | +| Archive release | `XX.YY~22.04.1` | +| Archive bugfix release | `XX.YY.Z~22.04.1` | + +## Supported upgrade paths on same upstream version + +Regardless of source, the latest available "upstream version" (e.g. 27.4) will always be installed, because the upstream version comes first followed by a tilde in all version formats. + +This table demonstrates upgrade paths between sources for one particular upstream version. + +| Upgrade path | Version diff example | +| ------------------------------- | ----------------------------------------------------------------------- | +| Staging to Next Staging rev | `31.4~22.04.1~rc1` ➜ `31.4~22.04.1~rc2` | +| Staging to Stable | `31.4~22.04.1~rc2` ➜ `31.4~22.04.1~stableppa1` | +| Stable to Next Stable rev | `31.4~22.04.1~stableppa1` ➜ `31.4~22.04.1~stableppa2` | +| Stable to Archive | `31.4~22.04.1~stableppa2` ➜ `31.4~22.04.1` | +| LTS Archive to Next LTS Archive | `31.4~22.04.1` ➜ `31.4~24.04.1` | +| Archive to Daily | `31.4~24.04.1` ➜ `31.4-1500~g75fa134~ubuntu24.04.1` | +| Daily to Next Daily | `31.4-1500~g75fa134~ubuntu24.04.1` ➜ `31.4-1501~g3836375~ubuntu24.04.1` | + +## Process + + +### Background + +The release process for ubuntu-advantage-tools has three overarching steps/goals. + +1. Release to our team infrastructure. This includes Github and the `ua-client` PPAs. +2. Release to the latest ubuntu devel release. +3. Release to the supported ubuntu past releases via [SRU](https://wiki.ubuntu.com/StableReleaseUpdates) using the [ubuntu-advantage-tools specific SRU process](https://wiki.ubuntu.com/UbuntuAdvantageToolsUpdates). + +Generally speaking, these steps happen in order, but there is some overlap. Also we may backtrack if issues are found part way through the process. + +An average release should take somewhere between 10 and 14 calendar days if things go smoothly, starting at the decision to release and ending at the new version being available in all supported ubuntu releases. Note that it is not 2 weeks of full time work. Most of the time is spent waiting for review or sitting in proposed. 
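One practical note on the version formats listed earlier: every upgrade path in the table relies on dpkg's tilde ordering, so any pair of example versions from that table can be sanity-checked locally. A quick illustration (the version strings are the examples from the table; `dpkg --compare-versions` exits 0 when the stated ordering holds):

```sh
# Staging rc sorts below the stable PPA build of the same upstream version
dpkg --compare-versions '31.4~22.04.1~rc2' lt '31.4~22.04.1~stableppa1' && echo ok
# Stable PPA build sorts below the archive release
dpkg --compare-versions '31.4~22.04.1~stableppa2' lt '31.4~22.04.1' && echo ok
# Archive release sorts below the daily PPA build
dpkg --compare-versions '31.4~24.04.1' lt '31.4-1500~g75fa134~ubuntu24.04.1' && echo ok
```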
+ +### Prerequisites + +If this is your first time releasing ubuntu-advantage-tools, you'll need to do the following before getting started: + +* Add the team helper scripts to your PATH: [uss-tableflip](https://github.com/canonical/uss-tableflip). +* If you don't yet have a gpg key set up, follow the instructions + [here](https://help.launchpad.net/YourAccount/ImportingYourPGPKey) to create a key, + publish it to `hkp://keyserver.ubuntu.com`, and import it into Launchpad. +* Before you run `sbuild-it` for the first time, you'll need to set up a chroot for each Ubuntu release. + Run the following to set up chroots with dependencies pre-installed for each release: + ```bash + apt-get install sbuild-launchpad-chroot + bash ./tools/setup_sbuild.sh # This will give you usage information on how to call it with the correct parameters + ``` +* You must have launchpad already properly configured in your system in order to upload packages to the PPAs. Follow [this guide](https://help.launchpad.net/Packaging/PPA/Uploading) to get set up. + +### I. Preliminary/staging release to team infrastructure +1. Create a release PR + + a. Move the desired commits from our `main` branch onto the desired release branch + + * This step is currently not well defined. We currently are using `release-27` for all `27.X` releases and have been cherry-picking/rebasing all commits from `main` into this branch for a release. + + b Create a new entry in the `debian/changelog` file: + + * You can do that by running ` dch --newversion ` + * Remember to update the release from `UNRELEASED` to the ubuntu/devel release. Edit the version to look like: `27.2~21.10.1`, with the appropriate ua and ubuntu/devel version numbers. + * Populate `debian/changelog` with the commits you have cherry-picked + * You can do that by running `git log .. | log2dch` + * This will generate a list of commits that could be included in the changelog. + * You don't need to include all of the commits generated. Remember that the changelog should + be read by the user to understand the new features/modifications in the package. If you + think a commit will not add that much to the user experience, you can drop it from the + changelog + * To structure the changelog you can use the other entries as example. But we basically try to + keep this order: debian changes, new features/modifications, testing. Within each section, bullet points should be alphabetized. + + c. Create a PR on github into the release branch. Ask in the UA channel on mattermost for review. + + d. When reviewing the release PR, please use the following guidelines when reviewing the new changelog entry: + + * Is the version correctly updated ? We must ensure that the new version on the changelog is + correct and it also targets the latest Ubuntu release at the moment. + * Is the entry useful for the user ? The changelog entries should be user focused, meaning + that we should only add entries that we think users will care about (i.e. we don't need + entries when fixing a test, as this doesn't provide meaningful information to the user) + * Is this entry redundant ? Sometimes we may have changes that affect separate modules of the + code. We should have an entry only for the module that was most affected by it + * Is the changelog entry unique ? We need to verify that the changelog entry is not already + reflected in an earlier version of the changelog. 
If it is, we need not only to remove but double + check the process we are using to cherry-pick the commits + * Is this entry actually reflected on the code ? Sometimes, we can have changelog entries + that are not reflected in the code anymore. This can happen during development when we are + still unsure about the behavior of a feature or when we fix a bug that removes the code + that was added. We must verify each changelog entry that is added to be sure of their + presence in the product. + +2. After the release PR is merged, tag the head of the release branch with the version number, e.g. `27.1`. Push this tag to Github. + +3. Build the package for all Ubuntu releases and upload to `ppa:ua-client/staging` + + a. Clone the repository in a clean directory and switch to the release branch + * *WARNING* Build the package in a clean environment. The reason for that is because the package + will contain everything that it is present in the folder. If you are storing credentials or + other sensible development information in your folder, they will be uploaded too when we send + the package to the ppa. A clean environment is the safest way to perform this. + + b. Edit the changelog: + * List yourself as the author of this release. + * Edit the version number to look like: `27.2~20.04.1~rc1` (`~.~rc`) + * Edit the ubuntu release name. Start with the ubuntu/devel release (e.g. `impish`). + * `git commit -m "throwaway"` Do **not** push this commit! + + c. `build-package` + * This script will generate all the package artifacts in the parent directory as `../out`. + + d. `sbuild-it ../out/.dsc` + * If this succeeds move on. If this fails, debug and fix before continuing. + + e. Repeat 3.b through 3.d for all supported Ubuntu Releases + * PS: remember to also change the version number on the changelog. For example, suppose + the new version is `1.1~20.04.1~rc1`. If you want to test Bionic now, change it to + `1.1~18.04.1~rc1`. + + f. For each release, dput to the staging PPA: + * `dput ppa:ua-client/staging ../out/_source.changes` + * After each `dput` wait for the "Accepted" email from Launchpad before moving on. + +### II. Release to Ubuntu (devel and SRU) + +> Note: `impish` is used throughout as a reference to the current devel release. This will change. + +1. Prepare SRU Launchpad bugs. + + a. We do this even before a succesful merge into ubuntu/devel because the context added to these bugs is useful for the Server Team reviewer. + + b. Create a new bug on Launchpad for ubuntu-advantage-tools and use the format defined [here](https://wiki.ubuntu.com/UbuntuAdvantageToolsUpdates#SRU_Template) for the description. + * The title should be in the format `[SRU] ubuntu-advantage-tools (27.1 -> 27.2) Xenial, Bionic, Focal, Hirsute`, substituting version numbers and release names as necessary. + + c. For each Launchpad bug fixed by this release (which should all be referenced in our changelog), add the SRU template to the description and fill out each section. + * Leave the original description in the bug at the bottom under the header `[Original Description]`. + * For the testing steps, include steps to reproduce the bug. Then include instructions for adding `ppa:ua-client/staging`, and steps to verify the bug is no longer present. + +2. Set up the Merge Proposal (MP) for ubuntu/devel + + a. `git-ubuntu clone ubuntu-advantage-tools; cd ubuntu-advantage-tools` + + b. `git remote add upstream git@github.com:canonical/ubuntu-advantage-client.git` + + c. `git fetch upstream` + + d. 
`git rebase --onto pkg/ubuntu/devel ` + * e.g. `git rebase --onto pkg/ubuntu/devel 27.0.2 27.1` + * You may need to resolve conflicts, but hopefully these will be minimal. + * You'll end up in a detached state + + e. `git checkout -B upload--impish` + * This creates a new local branch name based on your detached branch + + f. Make sure the changelog version contains the release version in the name (For example, `27.1~21.10.1`) + + g. `git push upload--impish` + + h. On Launchpad, create a merge proposal for this version which targets `ubuntu/devel` + * For an example, see the [27.0.2 merge proposal](https://code.launchpad.net/~chad.smith/ubuntu/+source/ubuntu-advantage-tools/+git/ubuntu-advantage-tools/+merge/402459) + * Add 2 review slots for `canonical-server` and `canonical-server-core-reviewers` +3. Set up the MP for past Ubuntu releases based on the ubuntu/devel PR + + a. Create a PR for each target series based off your local `release-${UA_VERSION}-impish` branch: + * If you've followed the instructions precisely so far, you can just run `bash tools/create-lp-release-branches.sh`. + + b. Create merge proposals for each SRU target release @ `https://code.launchpad.net/~/ubuntu/+source/ubuntu-advantage-tools/+git/ubuntu-advantage-tools/`. Make sure each MP targets your `upload-${UA_VERSION}-impish` branch (the branch you are MP-ing into ubuntu/devel). + + c. Add both `canonical-server` and `canonical-server-core-reviewers` as review slots on each MP. + +4. Server Team Review + + a. Ask in ~Server for a review of your MPs. Include a link to the primary MP into ubuntu/devel and mention the other MPs are only changelog MPs for the SRUs into past releases. + + b. If they request changes, create a PR into the release branch on github and ask UAClient team for review. After that is merged, cherry-pick the commit into your `upload--` branch and push to launchpad. You'll also need to rebase the other `upload--` branches and force push them to launchpad. Then notify the Server Team member that you have addressed their requests. + * Some issues may just be filed for addressing in the future if they are not urgent or pertinent to this release. + * Unless the changes are very minor, or only testing related, you should upload a new release candidate version to `ppa:ua-client/staging` as descibed in I.3. + * After the release is finished, any commits that were merged directly into the release branch in this way should be brought back into `main` via a single PR. + + c. Once review is complete and approved, confirm that Ubuntu Server approver will be tagging the PR with the appropriate `upload/` tag so git-ubuntu will import rich commit history. + + d. At this point the Server Team member should **not** upload the version to the devel release. + * If they do, then any changes to the code after this point will require a bump in the patch version of the release. + + e. Ask Ubuntu Server approver if they also have upload rights to the proposed queue. If they do, request that they upload ubuntu-advantage-tools for all releases. If they do not, ask in ~Server channel for a Ubuntu Server team member with upload rights for an upload review of the MP for the proposed queue. + + f. Once upload review is complete and approved, confirm that Ubuntu Server approver will upload ua-tools via dput to the `-proposed` queue. + + g. 
Check the [-proposed release queue](https://launchpad.net/ubuntu/xenial/+queue?queue_state=1&queue_text=ubuntu-advantage-tools) for presence of ua-tools in unapproved state for each supported release. Note: libera chat #ubuntu-release IRC channel has a bot that reports queued uploads of any package in a message like "Unapproved: ubuntu-advantage-tools .. version". + +5. SRU Review + + a. Once unapproved ua-tools package is listed in the pending queue for each target release, [ping appropriate daily SRU vanguard for review of ua-tools into -proposed](https://wiki.ubuntu.com/StableReleaseUpdates#Publishing)via the libera.chat #ubuntu-release channel + + b. As soon as the SRU vanguard approves the packages, a bot in #ubuntu-release will announce that ubuntu-advantage-tools is accepted into the applicable -proposed pockets, or the [Xenial -proposed release rejection queue](https://launchpad.net/ubuntu/xenial/+queue?queue_state=4&queue_text=ubuntu-advantage-tools) will contain a reason for rejections. Double check the SRU process bug for any actionable review feedback. + + c. Once accepted into `-proposed` by an SRU vanguard [ubuntu-advantage-tools shows up in the pending_sru page](https://people.canonical.com/~ubuntu-archive/pending-sru.html), check `rmadison ubuntu-advantage-tools | grep -proposed` to see if the upload exists in -proposed yet. + + d. Confirm availability in -proposed pocket via + ```bash + cat > setup_proposed.sh <` and saving the output. + + g. After all tests have passed, tarball all of the output files and upload them to the SRU bug with a message that looks like this: + ``` + We have run the full ubuntu-advantage-tools integration test suite against the version in -proposed. The results are attached. All tests passed (or call out specific explained failures). + + You can verify the correct version was used by checking the output of the first test in each file, which prints the version number. + + I am marking the verification done for this SRU. + ``` + Change the tags on the bug from `verification-needed` to `verification-done` (including the verification tags for each release). + + h. For any other related Launchpad bugs that are fixed in this release. Perform the verification steps necessary for those bugs and mark them `verification-done` as needed. This will likely involve following the test steps, but instead of adding the staging PPA, enabling -proposed. + + i. Once all SRU bugs are tagged as `verification*-done`, all SRU-bugs should be listed as green in [the pending_sru page](https://people.canonical.com/~ubuntu-archive/pending-sru.html). + + j. After the pending sru page says that ubuntu-advantage-tools has been in proposed for 7 days, it is now time to ping the [current SRU vanguard](https://wiki.ubuntu.com/StableReleaseUpdates#Publishing) for acceptance of ubuntu-advantage-tools into -updates. + + k. Ping the Ubuntu Server team member who approved the version in step `II.4` to now upload to the devel release. + + l. Check `rmadison ubuntu-advantage-tools` for updated version in devel release + + m. Confirm availability in -updates pocket via `lxc launch ubuntu-daily: dev-i; lxc exec dev-i -- apt update; lxc exec dev-i -- apt-cache policy ubuntu-advantage-tools` + +### III. Github Repository Post-release Update + +1. Ensure the version tag is correct on github. The `version` git tag should point to the commit that was released as that version to ubuntu -updates. 
If changes were made in response to feedback during the release process, the tag may have to be moved. +2. Bring in any changes that were made to the release branch into `main` via PR (e.g. Changelog edits). + +## Cloud Images Update + +After the release process is finished, CPC must be informed. They will be responsible to update the cloud images using the package from the pockets it was released to (whether it is the `stable` PPA or the`-updates` pocket). diff -Nru ubuntu-advantage-tools-27.8~16.04.1/contributing-docs/references/terminology.md ubuntu-advantage-tools-27.9~16.04.1/contributing-docs/references/terminology.md --- ubuntu-advantage-tools-27.8~16.04.1/contributing-docs/references/terminology.md 1970-01-01 00:00:00.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/contributing-docs/references/terminology.md 2022-05-18 19:44:15.000000000 +0000 @@ -0,0 +1,13 @@ +# Terminology + +The following vocabulary is used to describe different aspects of the work +Ubuntu Advantage Client performs: + +| Term | Meaning | +| -------- | -------- | +| UA Client | The python command line client represented in this ubuntu-advantage-client repository. It is installed on each Ubuntu machine and is the entry-point to enable any Ubuntu Advantage commercial service on an Ubuntu machine. | +| Contract Server | The backend service exposing a REST API to which UA Client authenticates in order to obtain contract and commercial service information and manage which support services are active on a machine.| +| Entitlement/Service | An Ubuntu Advantage commercial support service such as FIPS, ESM, Livepatch, CIS-Audit to which a contract may be entitled | +| Affordance | Service-specific list of applicable architectures and Ubuntu series on which a service can run | +| Directives | Service-specific configuration values which are applied to a service when enabling that service | +| Obligations | Service-specific policies that must be instrumented for support of a service. Example: `enableByDefault: true` means that any attached machine **MUST** enable a service on attach | diff -Nru ubuntu-advantage-tools-27.8~16.04.1/CONTRIBUTING.md ubuntu-advantage-tools-27.9~16.04.1/CONTRIBUTING.md --- ubuntu-advantage-tools-27.8~16.04.1/CONTRIBUTING.md 1970-01-01 00:00:00.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/CONTRIBUTING.md 2022-05-18 19:44:15.000000000 +0000 @@ -0,0 +1,484 @@ +# Contributing to Ubuntu Advantage Client + +## Developer Documentation + +### Reference + +* [Terminology](./contributing-docs/references/terminology.md) + +## Architecture +Ubuntu Advantage client, hereafter "UA client", is a python3-based command line +utility. It provides a CLI to attach, detach, enable, +disable and check status of support related services. + +The package `ubuntu-advantage-tools` also provides a C++ APT hook which helps +advertise ESM service and available packages in MOTD and during various apt +commands. + +The `ubuntu-advantage-pro` package delivers auto-attach functionality via init +scripts and systemd services for various cloud platforms. + +By default, Ubuntu machines are deployed in an unattached state. A machine can +get manually or automatically attached to a specific contract by interacting +with the Contract Server REST API. Any change in state of services or machine +attach results in additional interactions with the Contract Server API to +validate such operations. + +### Attaching a machine +Each Ubuntu SSO account holder has access to one or more contracts. 
To attach a machine to an Ubuntu Advantage contract:

* Obtain a contract token from your Ubuntu SSO account at https://ubuntu.com/advantage.
* Run `sudo ua attach <contract_token>` on the machine
  - Ubuntu Pro images for AWS, Azure and GCP perform an auto-attach without tokens
* UA Client reads config from /etc/ubuntu-advantage/uaclient.conf to obtain
  the contract_url (default: https://contracts.canonical.com)
* UA Client POSTs to the Contract Server API @
  /api/v1/context/machines/token providing the contract token
* The Contract Server responds with a JSON blob containing a unique machine
  token, service credentials, affordances, directives and obligations to allow
  enabling and disabling Ubuntu Advantage services
* UA client writes the machine token API response to the root-readonly
  /var/lib/ubuntu-advantage/private/machine-token.json
* UA client auto-enables any services defined with
  `obligations:{enableByDefault: true}`

#### Attaching with --attach-config
Running `ua attach` with the `--attach-config` option may be better suited to certain scenarios.

When using `--attach-config` the token must be passed in the file rather than on the command line. This is useful in situations where it is preferred to keep the secret token in a file.

Optionally, the attach config file can be used to override the services that are automatically enabled as part of the attach process.

An attach config file looks like this:
```yaml
token: YOUR_TOKEN_HERE # required
enable_services: # optional list of service names to auto-enable
  - esm-infra
  - esm-apps
  - cis
```

And can be passed on the CLI like this:
```shell
sudo ua attach --attach-config /path/to/file.yaml
```

### Enabling a service
Each service controlled by UA client has a python module in
uaclient/entitlements/\*.py which handles setup and teardown of that service when
it is enabled or disabled.

If a contract entitles a machine to a service, the `root` user can enable the
service with `ua enable <service>`. If a service can be disabled,
`ua disable <service>` will be permitted.

The goal of the UA client is to remain simple and flexible and let the
contracts backend drive dynamic changes in contract offerings and constraints.
In pursuit of that goal, the UA client obtains most of its service constraints
from a machine token that it obtains from the Contract Server API.

The UA Client is simple in that it relies on the machine token on the attached
machine to describe whether a service is applicable for an environment and what
configuration is required to properly enable that service.

Any interactions with the Contract Server API are defined as UAContractClient
class methods in [uaclient/contract.py](uaclient/contract.py).

### Timer jobs
UA client sets up a systemd timer to run jobs that need to be executed recurrently.
The timer itself ticks every 6 hours on average, and decides which jobs need
to be executed based on their _intervals_.

Jobs are executed by the timer script if:
- The script has not yet run successfully, or
- Their interval since the last successful run has already been exceeded.

There is a random delay applied to the timer to desynchronize job execution time
on machines spun up at the same time, avoiding multiple synchronized calls to the
same service.
+ +Current jobs being checked and executed are: + +| Job | Description | Interval | +| --- | ----------- | -------- | +| update_messaging | Update MOTD and APT messages | 6 hours | +| update_status | Update UA status | 12 hours | +| metering | (Only when attached to UA services) Pings Canonical servers for contract metering | 4 hours | + +- The `update_messaging` job makes sure that the MOTD and APT messages match the +available/enabled services on the system, showing information about available +packages or security updates. See [MOTD messages](./docs/howtoguides/update_motd_messages.md). +- The `update_status` job makes sure the `ua status` command will have the latest +information even when executed by a non-root user, updating the +`/var/lib/ubuntu-advantage/status.json` file. + +The timer intervals can be changed using the `ua config set` command. +```bash +# Make the update_status job run hourly +$ sudo ua config set update_status_timer=3600 +``` +Setting an interval to zero disables the job. +```bash +# Disable the update_status job +$ sudo ua config set update_status_timer=0 +``` + +## Directory layout +The following describes the intent of UA client related directories: + + +| File/Directory | Intent | +| -------- | -------- | +| ./tools | Helpful scripts used to publish, release or test various aspects of UA client | +| ./features/ | Behave BDD integration tests for UA Client +| ./uaclient/ | collection of python modules which will be packaged into ubuntu-advantage-tools package to deliver the UA Client CLI | +| uaclient.entitlements | Service-specific \*Entitlement class definitions which perform enable, disable, status, and entitlement operations etc. All classes derive from base.py:UAEntitlement and many derive from repo.py:RepoEntitlement | +| ./uaclient/cli.py | The entry-point for the command-line client +| ./uaclient/clouds/ | Cloud-platform detection logic used in Ubuntu Pro to determine if a given should be auto-attached to a contract | +| uaclient.contract | Module for interacting with the Contract Server API | +| ./demo | Various stale developer scripts for setting up one-off demo environments. (Not needed often) +| ./apt-hook/ | the C++ apt-hook delivering MOTD and apt command notifications about UA support services | +| ./apt-conf.d/ | apt config files delivered to /etc/apt/apt-conf.d to automatically allow unattended upgrades of ESM security-related components. If apt proxy settings are configured, an additional apt config file will be placed here to configure the apt proxy. | +| /etc/ubuntu-advantage/uaclient.conf | Configuration file for the UA client.| +| /var/lib/ubuntu-advantage/private | `root` read-only directory containing Contract API responses, machine-tokens and service credentials | +| /var/log/ubuntu-advantage.log | `root` read-only log of ubuntu-advantage operations | + + +## Collecting logs +The `ua collect-logs` command creates a tarball with all relevant data for debugging possible problems with UA. It puts together: +- The UA Client configuration file (the default is `/etc/ubuntu-advantage/uaclient.conf`) +- The UA Client log files (the default is `/var/log/ubuntu-advantage*`) +- The files in `/etc/apt/sources.list.d/*` related to UA +- Output of `systemctl status` for the UA Client related services +- Status of the timer jobs, `canonical-livepatch`, and the systemd timers +- Output of `cloud-id`, `dmesg` and `journalctl` + +Sensitive data is redacted from all files included in the tarball. As of now, the command must be run as root. 
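For example, when gathering data for a bug report, a typical invocation might look like this (the `-o` option mentioned just below selects the output path):

```sh
sudo ua collect-logs                          # writes ua_logs.tar.gz to the current directory
sudo ua collect-logs -o /tmp/ua-debug.tar.gz  # or choose the destination explicitly
```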
+ +Running the command creates a `ua_logs.tar.gz` file in the current directory. +The output file path/name can be changed using the `-o` option. + +## Testing + +All unit and lint tests are run using `tox`. We also use `tox-pip-version` to specify an older pip version as a workaround: we have some required dependencies that can't meet the strict compatibility checks of current pip versions. + +First, install `tox` and `tox-pip-version` - you'll only have to do this once. + +```shell +make testdeps +``` + +Then you can run the unit and lint tests: + +```shell +tox +``` + +The client also includes built-in dep8 tests. These are run as follows: + +```shell +autopkgtest -U --shell-fail . -- lxd ubuntu:xenial +``` + +### Integration Tests + +ubuntu-advantage-client uses [behave](https://behave.readthedocs.io) +for its integration testing. + +The integration test definitions are stored in the `features/` +directory and consist of two parts: `.feature` files that define the +tests we want to run, and `.py` files which implement the underlying +logic for those tests. + +By default, integration tests will do the folowing on a given cloud platform: + * Launch an instance running latest daily image of the target Ubuntu release + * Add the Ubuntu advantage client daily build PPA: [ppa:ua-client/daily](https://code.launchpad.net/~ua-client/+archive/ubuntu/daily) + * Install the appropriate ubuntu-advantage-tools and ubuntu-advantage-pro deb + * Run the integration tests on that instance. + +The testing can be overridden to run using a local copy of the ubuntu-advantage-client source code instead of the daily PPA by providing the following environment variable to the behave test runner: +```UACLIENT_BEHAVE_BUILD_PR=1``` + +> Note that, by default, we cache the source even when `UACLIENT_BEHAVE_BUILD_PR=1`. This means that if you change the python code locally and want to run the behave tests against your new version, you need to either delete the cache (`rm /tmp/pr_source.tar.gz`) or also set `UACLIENT_BEHAVE_CACHE_SOURCE=0`. + +To run the tests, you can use `tox`: + +```shell +tox -e behave-lxd-20.04 +``` + +or, if you just want to run a specific file, or a test within a file: + +```shell +tox -e behave-lxd-20.04 features/unattached_commands.feature +tox -e behave-lxd-20.04 features/unattached_commands.feature:55 +``` + +As can be seen, this will run behave tests only for release 20.04 (Focal Fossa). We are currently +supporting 5 distinct releases: + +* 22.04 (Jammy Jellyfish) +* 21.10 (Impish Indri) +* 20.04 (Focal Fossa) +* 18.04 (Bionic Beaver) +* 16.04 (Xenial Xerus) + +Therefore, to change which release to run the behave tests against, just change the release version +on the behave command. + +Furthermore, when developing/debugging a new scenario: + + 1. Add a `@wip` tag decorator on the scenario + 2. To only run @wip scenarios run: `tox -e behave-lxd-20.04 -- -w` + 3. If you want to use a debugger: + 1. Add ipdb to integration-requirements.txt + 2. Add ipdb.set_trace() in the code block you wish to debug + +(If you're getting started with behave, we recommend at least reading +through [the behave +tutorial](https://behave.readthedocs.io/en/latest/tutorial.html) to get +an idea of how it works, and how tests are written.) 
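Putting the options above together, a typical local iteration that builds the deb from your working tree instead of pulling from the daily PPA, and skips the cached source tarball, might look like this:

```sh
# Build and install the package from the local source rather than ppa:ua-client/daily,
# ignoring any previously cached source tarball, and run a single scenario on focal.
UACLIENT_BEHAVE_BUILD_PR=1 UACLIENT_BEHAVE_CACHE_SOURCE=0 \
    tox -e behave-lxd-20.04 features/unattached_commands.feature:55
```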
+ +#### Iterating Locally + +To make running the tests repeatedly less time-intensive, our behave +testing setup has support for reusing images between runs via two +configuration options (provided in environment variables), +`UACLIENT_BEHAVE_IMAGE_CLEAN` and `UACLIENT_BEHAVE_REUSE_IMAGE`. + +To avoid the test framework cleaning up the image it creates, you can +run it like this: + +```sh +UACLIENT_BEHAVE_IMAGE_CLEAN=0 tox -e behave +``` + +which will emit a line like this above the test summary: + +``` +Image cleanup disabled, not deleting: behave-image-1572443113978755 +``` + +You can then reuse that image by plugging its name into your next test +run, like so: + +```sh +UACLIENT_BEHAVE_REUSE_IMAGE=behave-image-1572443113978755 tox -e behave +``` + +If you've done this correctly, you should see something like +`reuse_image = behave-image-1572443113978755` in the "Config options" +output, and test execution should start immediately (without the usual +image build step). + +(Note that this handling is specific to our behave tests as it's +performed in `features/environment.py`, so don't expect to find +documentation about it outside of this codebase.) + +For development purposes there is `reuse_container` option. +If you would like to run behave tests in an existing container +you need to add `-D reuse_container=container_name`: + +```sh +tox -e behave -D reuse_container=container_name +``` + +#### Optimizing total run time of integration tests with snapshots +When `UACLIENT_BEHAVE_SNAPSHOT_STRATEGY=1` we create a snapshot of an instance +with ubuntu-advantage-tools installed and restore from that snapshot for all tests. +This adds an upfront cost that is amortized across several test scenarios. + +Based on some rough testing in July 2021, these are the situations +when you should set UACLIENT_BEHAVE_SNAPSHOT_STRATEGY=1 + +> At time of writing, starting a lxd.vm instance from a local snapshot takes +> longer than starting a fresh lxd.vm instance and installing ua. + +| machine_type | condition | +| ------------- | ------------------ | +| lxd.container | num_scenarios > 7 | +| lxd.vm | never | +| gcp | num_scenarios > 5 | +| azure | num_scenarios > 14 | +| aws | num_scenarios > 11 | + +#### Integration testing on EC2 +The following tox environments allow for testing focal on EC2: + +``` + # To test ubuntu-pro-images + tox -e behave-awspro-20.04 + # To test Canonical cloud images (non-ubuntu-pro) + tox -e behave-awsgeneric-20.04 +``` + +To run the test for a different release, just update the release version string. For example, +to run AWS pro xenial tests, you can run: + +``` +tox -e behave-awspro-16.04 +``` + +In order to run EC2 tests the following environment variables are required: + - UACLIENT_BEHAVE_AWS_ACCESS_KEY_ID + - UACLIENT_BEHAVE_AWS_SECRET_ACCESS_KEY + + +To specifically run non-ubuntu pro tests using canonical cloud-images an +additional token obtained from https://ubuntu.com/advantage needs to be set: + - UACLIENT_BEHAVE_CONTRACT_TOKEN= + +By default, the public AMIs for Ubuntu Pro testing used for each Ubuntu +release are defined in features/aws-ids.yaml. These ami-ids are determined by +running `./tools/refresh-aws-pro-ids`. + +Integration tests will read features/aws-ids.yaml to determine which default +AMI id to use for each supported Ubuntu release. + +To update `features/aws-ids.yaml`, run `./tools/refresh-aws-pro-ids` and put up +a pull request against this repo to updated that content from the ua-contracts +marketplace definitions. 
* To manually run EC2 integration tests using packages from `ppa:ua-client/daily` provide the following environment vars:

```sh
UACLIENT_BEHAVE_AWS_ACCESS_KEY_ID=<aws_access_key_id> UACLIENT_BEHAVE_AWS_SECRET_ACCESS_KEY=<aws_secret_access_key> tox -e behave-awspro-20.04
```

* To manually run EC2 integration tests with a specific AMI Id, provide the
following environment variable to launch your specific AMI instead of building
a daily ubuntu-advantage-tools image.
```sh
UACLIENT_BEHAVE_REUSE_IMAGE=your-custom-ami tox -e behave-awspro-20.04
```

#### Integration testing on Azure
The following tox environments allow for testing focal on Azure:

```
 # To test ubuntu-pro-images
 tox -e behave-azurepro-20.04
 # To test Canonical cloud images (non-ubuntu-pro)
 tox -e behave-azuregeneric-20.04
```

To run the test for a different release, just update the release version string. For example,
to run Azure pro xenial tests, you can run:

```
tox -e behave-azurepro-16.04
```

In order to run Azure tests the following environment variables are required:
 - UACLIENT_BEHAVE_AZ_CLIENT_ID
 - UACLIENT_BEHAVE_AZ_CLIENT_SECRET
 - UACLIENT_BEHAVE_AZ_SUBSCRIPTION_ID
 - UACLIENT_BEHAVE_AZ_TENANT_ID

To specifically run non-ubuntu pro tests using canonical cloud-images an
additional token obtained from https://ubuntu.com/advantage needs to be set:
 - UACLIENT_BEHAVE_CONTRACT_TOKEN=<your_token>

* To manually run Azure integration tests using packages from `ppa:ua-client/daily` provide the following environment vars:

```sh
UACLIENT_BEHAVE_AZ_CLIENT_ID=<client_id> UACLIENT_BEHAVE_AZ_CLIENT_SECRET=<client_secret> UACLIENT_BEHAVE_AZ_SUBSCRIPTION_ID=<subscription_id> UACLIENT_BEHAVE_AZ_TENANT_ID=<tenant_id> tox -e behave-azurepro-20.04
```

* To manually run Azure integration tests with a specific Image Id, provide the
following environment variable to launch your specific Image Id instead of building
a daily ubuntu-advantage-tools image.
```sh
UACLIENT_BEHAVE_REUSE_IMAGE=your-custom-image-id tox -e behave-azurepro-20.04
```

## Building

Packages ubuntu-advantage-tools and ubuntu-advantage-pro are created from the
debian/control file in this repository. You can build the
packages the way you would normally build a Debian package:

```shell
dpkg-buildpackage -us -uc
```

**Note**: this will build the packages with dependencies for the Ubuntu release on
which you are building, so it's best to build in a container or kvm for the
release you are targeting.

OR, if you want to build for a target release other than the release
you're on:

### using sbuild
[configure sbuild](https://wiki.ubuntu.com/SimpleSbuild) and
use that for the build.

Set up some chroots for sbuild with this script:
```shell
bash ./tools/setup_sbuild.sh
```

```shell
debuild -S
sbuild --dist=<series> ../ubuntu-advantage-tools_*.dsc
# emulating different architectures in sbuild-launchpad-chroot
sbuild-launchpad-chroot create --architecture="riscv64" "--name=focal-riscv64" "--series=focal"
```

> Note: Every so often, it is recommended to update your chroots.
+> ```bash +> # to update a single chroot +> sudo sbuild-launchpad-chroot update -n ua-xenial-amd64 +> # this script can be used to update all chroots +> sudo PATTERN=\* sh /usr/share/doc/sbuild/examples/sbuild-debian-developer-setup-update-all +> ``` + +### Setting up an lxc development container +```shell +lxc launch ubuntu-daily:xenial dev-x -c user.user-data="$(cat tools/ua-dev-cloud-config.yaml)" +lxc exec dev-x bash +``` + +### Setting up a kvm development environment with multipass +**Note:** There is a sample procedure documented in tools/multipass.md as well. +```shell +multipass launch daily:focal -n dev-f --cloud-init tools/ua-dev-cloud-config.yaml +multipass connect dev-f +``` + +## Code Formatting + +The `ubuntu-advantage-client` code base is formatted using +[black](https://github.com/psf/black), and imports are sorted with +[isort](https://github.com/PyCQA/isort). When making changes, you +should ensure that your code is blackened and isorted, or it will +be rejected by CI. +Formatting the whole codebase is as simple as running: + +```shell +black uaclient/ +isort uaclient/ +``` + +To make it easier to avoid committing incorrectly formatted code, this +repo includes configuration for [pre-commit](https://pre-commit.com/) +which will stop you from committing any code that isn't blackened. To +install the project's pre-commit hook, install `pre-commit` and run: + +```shell +pre-commit install +``` + +(To install `black` and `pre-commit` at the appropriate versions for +the project, you should install them via `dev-requirements.txt`.) + +## Daily Builds + +On Launchpad, there is a [daily build recipe](https://code.launchpad.net/~canonical-server/+recipe/ua-client-daily), +which will build the client and place it in the [ua-client-daily PPA](https://code.launchpad.net/~ua-client/+archive/ubuntu/daily). 
+ +## Releasing ubuntu-advantage-tools +See [How to release a new version of UA](./contributing-docs/howtoguides/how_to_release_a_new_version_of_ua.md) diff -Nru ubuntu-advantage-tools-27.8~16.04.1/debian/changelog ubuntu-advantage-tools-27.9~16.04.1/debian/changelog --- ubuntu-advantage-tools-27.8~16.04.1/debian/changelog 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/debian/changelog 2022-05-18 19:44:15.000000000 +0000 @@ -1,8 +1,61 @@ -ubuntu-advantage-tools (27.8~16.04.1) xenial; urgency=medium +ubuntu-advantage-tools (27.9~16.04.1) xenial; urgency=medium - * Backport new upstream release: (LP: #1969125) to xenial + * Backport new upstream release: (LP: #1973099) to xenial - -- Lucas Moura Thu, 14 Apr 2022 15:32:30 -0300 + -- Grant Orndorff Wed, 18 May 2022 15:44:15 -0400 + +ubuntu-advantage-tools (27.9~22.10.1) kinetic; urgency=medium + + * d/rules + - remove trusty specific code + - remove ua-license-check.{timer,service,path} + - install ubuntu-advantage.service + - only on xenial: install ubuntu-advantage-cloud-id-shim.service + * d/tools.preinst: remove old config field to avoid warnings in logs + * d/tools.postinst + - remove trusty specific code + - print warnings if /etc/os-release doesn't have required fields + - hardcode service list instead of exec-ing python3 for old migration + - refactor python to avoid instantiating UAConfig extra times + - refactor python to always use messages module for strings + - rm the old marker file that triggered ua-license-check.path + - remove unnecessary deb-systemd-helper check in ua-messaging cleanup + - clean up old ua-license-check state + - run new cloud-id-shim script + * d/tools/postrm + - clean up ubuntu-advantage-daemon log files + * New upstream release 27.9 (LP: #1973099) + - cli: + + for json formatted output, include additional_info for some errors + + new subcommand `ua refresh messages` to update motd and apt messages + - daemon: + + replace ua-license-check timer with ubuntu-advantage.service daemon + + detects on-boot if pro license was added and runs auto-attach + + only runs on gcp and does not continuously long-poll by default for now + - enable: + + fix error message on wrong service name when unattached + - fips: + + allow enabling generic fips kernel on azure by default + + clean up fips reboot message (LP: #1972026) + - fix: + + handle errors during attach process + + fix bug where enable or detach during a fix failed (LP: #1969809) + + fix bug where attempting to fix some CVEs would never finish + - performance: + + remove unnecessary UAConfig object instantiation (also cleans up logs) + + cache "apt-cache policy" output to avoid unnecessary subp calls + - proxy: + + apt_http(s)_proxy renamed to global_apt_http(s)_proxy + + apt_http(s)_proxy config var names will still work + + new ua_apt_http(s)_proxy for only ua-related apt traffic (LP: #1956764) + + global_apt_http(s)_proxy and ua_apt_http(s)_proxy cannot be set at the + same time + - realtime: adjust warning to clarify that a manual revert is possible + - refresh: a normal `ua refresh` will also update motd and apt messages + - security-status: add counts of packages from each archive component + - status: check if contract has updated and notify user to run "ua refresh" + + -- Grant Orndorff Wed, 11 May 2022 13:04:46 -0400 ubuntu-advantage-tools (27.8~22.04.1) jammy; urgency=medium diff -Nru ubuntu-advantage-tools-27.8~16.04.1/debian/rules ubuntu-advantage-tools-27.9~16.04.1/debian/rules --- 
ubuntu-advantage-tools-27.8~16.04.1/debian/rules 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/debian/rules 2022-05-18 19:44:15.000000000 +0000 @@ -2,7 +2,6 @@ export DH_VERBOSE=1 include /usr/share/dpkg/pkg-info.mk -FLAKE8 := $(shell flake8 --version 2> /dev/null) include /etc/os-release # see https://bugs.launchpad.net/ubuntu/+source/ubuntu-advantage-tools/+bug/1840091/comments/3 @@ -13,10 +12,7 @@ # versus Xenial to make those contraints applicable on each series. DISTRO_INFO_DEPS="distro-info (>= 0.18ubuntu0.18.04.1)," -ifeq (${VERSION_ID},"14.04") -APT_PKG_DEPS="apt (>= 1.0.1ubuntu2.23), apt-transport-https (>= 1.0.1ubuntu2.23), apt-utils (>= 1.0.1ubuntu2.23), libapt-inst1.5 (>= 1.0.1ubuntu2.23), libapt-pkg4.12 (>= 1.0.1ubuntu2.23)," -DISTRO_INFO_DEPS="" # Don't set on trusty as we aren't releasing anymore -else ifeq (${VERSION_ID},"16.04") +ifeq (${VERSION_ID},"16.04") APT_PKG_DEPS="apt (>= 1.2.32), apt-transport-https (>= 1.2.32), apt-utils (>= 1.2.32), libapt-inst2.0 (>= 1.2.32), libapt-pkg5.0 (>= 1.2.32)," DISTRO_INFO_DEPS="distro-info (>= 0.14ubuntu0.2)," else ifeq (${VERSION_ID},"18.04") @@ -38,14 +34,8 @@ ifeq (,$(filter nocheck,$(DEB_BUILD_OPTIONS))) make -C apt-hook test python3 -m pytest -ifdef FLAKE8 - # required for Trusty: flake8 does not install a __main__ for -m - # invocation - python3 $(shell which flake8) uaclient -else python3 -m flake8 uaclient endif -endif override_dh_gencontrol: echo extra:Depends=$(APT_PKG_DEPS) $(DISTRO_INFO_DEPS) >> debian/ubuntu-advantage-tools.substvars @@ -56,13 +46,15 @@ dh_systemd_enable -pubuntu-advantage-tools ua-reboot-cmds.service dh_systemd_enable -pubuntu-advantage-tools ua-timer.timer dh_systemd_enable -pubuntu-advantage-tools ua-timer.service - dh_systemd_enable -pubuntu-advantage-tools ua-license-check.timer - dh_systemd_enable -pubuntu-advantage-tools ua-license-check.service - dh_systemd_enable -pubuntu-advantage-tools ua-license-check.path + dh_systemd_enable -pubuntu-advantage-tools ubuntu-advantage.service +ifeq (${VERSION_ID},"16.04") + # Only enable cloud-id-shim on Xenial + dh_systemd_enable -pubuntu-advantage-tools ubuntu-advantage-cloud-id-shim.service +endif override_dh_systemd_start: dh_systemd_start -pubuntu-advantage-tools ua-timer.timer - dh_systemd_start -pubuntu-advantage-tools ua-license-check.path + dh_systemd_start -pubuntu-advantage-tools ubuntu-advantage.service override_dh_auto_install: dh_auto_install --destdir=debian/ubuntu-advantage-tools @@ -71,17 +63,16 @@ # We want to guarantee that we are not shipping any conftest files find $(CURDIR)/debian/ubuntu-advantage-tools -type f -name conftest.py -delete -ifeq (${VERSION_ID},"14.04") - # Move ua-auto-attach.conf out to ubuntu-advantage-pro - mkdir -p debian/ubuntu-advantage-pro/etc/init - mv debian/ubuntu-advantage-tools/etc/init/ua-auto-attach.conf debian/ubuntu-advantage-pro/etc/init/ - rmdir debian/ubuntu-advantage-tools/etc/init -else +ifneq (${VERSION_ID},"16.04") + # Only install cloud-id-shim on Xenial + rm $(CURDIR)/debian/ubuntu-advantage-tools/lib/systemd/system/ubuntu-advantage-cloud-id-shim.service +endif + # Move ua-auto-attach.service out to ubuntu-advantage-pro mkdir -p debian/ubuntu-advantage-pro/lib/systemd/system mv debian/ubuntu-advantage-tools/lib/systemd/system/ua-auto-attach.* debian/ubuntu-advantage-pro/lib/systemd/system cd debian/ubuntu-advantage-tools -endif + override_dh_auto_clean: dh_auto_clean diff -Nru ubuntu-advantage-tools-27.8~16.04.1/debian/ubuntu-advantage-tools.postinst 
ubuntu-advantage-tools-27.9~16.04.1/debian/ubuntu-advantage-tools.postinst --- ubuntu-advantage-tools-27.8~16.04.1/debian/ubuntu-advantage-tools.postinst 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/debian/ubuntu-advantage-tools.postinst 2022-05-18 19:44:15.000000000 +0000 @@ -2,20 +2,22 @@ set -e -. /etc/os-release # For VERSION_ID - -# Since UBUNTU_CODENAME isn't on trusty set it set a default if unknown -if [ "" = "${UBUNTU_CODENAME}" ]; then - case "$VERSION_ID" in - 14.04) UBUNTU_CODENAME="trusty";; - *) UBUNTU_CODENAME="NO-UBUNTU_CODENAME-$VERSION_ID";; - esac -fi +. /etc/os-release # For VERSION_ID and UBUNTU_CODENAME # Needed even if this script doesn't call debconf, see: # https://lintian.debian.org/tags/postinst-does-not-load-confmodule.html +# Note: this may re-exec the postinst script. . /usr/share/debconf/confmodule +if [ -z "${VERSION_ID}" ]; then + echo "Warning: missing VERSION_ID in /etc/os-release" >&2 + VERSION_ID="NO-VERSION_ID" +fi +if [ -z "${UBUNTU_CODENAME}" ]; then + echo "Warning: missing UBUNTU_CODENAME in /etc/os-release" >&2 + UBUNTU_CODENAME="NO-UBUNTU_CODENAME-$VERSION_ID" +fi + APT_TRUSTED_KEY_DIR="/etc/apt/trusted.gpg.d" UA_KEYRING_DIR="/usr/share/keyrings/" @@ -24,8 +26,6 @@ APT_SRC_DIR="/etc/apt/sources.list.d" APT_PREFERENCES_DIR="/etc/apt/preferences.d" -ESM_APT_SOURCE_FILE_PRECISE="$APT_SRC_DIR/ubuntu-esm-precise.list" -ESM_APT_SOURCE_FILE_TRUSTY="$APT_SRC_DIR/ubuntu-esm-trusty.list" ESM_INFRA_OLD_APT_SOURCE_FILE_TRUSTY="$APT_SRC_DIR/ubuntu-esm-infra-trusty.list" ESM_INFRA_APT_SOURCE_FILE="$APT_SRC_DIR/ubuntu-esm-infra.list" ESM_APPS_APT_SOURCE_FILE="$APT_SRC_DIR/ubuntu-esm-apps.list" @@ -35,6 +35,9 @@ UA_TIMER_NAME="ua-timer.timer" OLD_MESSAGING_TIMER="ua-messaging.timer" OLD_MESSAGING_TIMER_MASKED_LOCATION="/etc/systemd/system/timers.target.wants/$OLD_MESSAGING_TIMER" +OLD_LICENSE_CHECK_PATH="ua-license-check.path" +OLD_LICENSE_CHECK_PATH_MASKED_LOCATION="/etc/systemd/system/multi-user.target.wants/$OLD_LICENSE_CHECK_PATH" +XENIAL_CLOUD_ID_SHIM_UNIT_LOCATION="/etc/systemd/system/multi-user.target.wants/ubuntu-advantage-cloud-id-shim.service" ESM_APT_PREF_FILE_TRUSTY="$APT_PREFERENCES_DIR/ubuntu-esm-trusty" ESM_INFRA_OLD_APT_PREF_FILE_TRUSTY="$APT_PREFERENCES_DIR/ubuntu-esm-infra-trusty" @@ -49,17 +52,16 @@ SYSTEMD_HELPER_ENABLED_WANTS_LINK="/var/lib/systemd/deb-systemd-helper-enabled/multi-user.target.wants/ua-auto-attach.service" REBOOT_CMD_MARKER_FILE="/var/lib/ubuntu-advantage/marker-reboot-cmds-required" -LICENSE_CHECK_MARKER_FILE="/var/lib/ubuntu-advantage/marker-license-check" +OLD_LICENSE_CHECK_MARKER_FILE="/var/lib/ubuntu-advantage/marker-license-check" MACHINE_TOKEN_FILE="/var/lib/ubuntu-advantage/private/machine-token.json" # Rename apt config files for ua services removing ubuntu release names redact_ubuntu_release_from_ua_apt_filenames() { DIR=$1 - UA_SERVICES=$(/usr/bin/python3 -c " -from uaclient.entitlements import valid_services -print(*valid_services(allow_beta=True, all_names=True), sep=' ') -") + # It is okay if this list is outdated, because this function is only used for an old migration. + # Any services that were introduced after this migration was added won't need to be migrated. 
+ UA_SERVICES="cc-eal cis esm-infra esm-apps fips fips-updates livepatch ros ros-updates" for file in "$DIR"/*; do release_name="" @@ -101,15 +103,12 @@ # Check whether this series is under active ESM check_is_active_esm() { release_name=$1 - # Trusty doesn't support --series param - if [ "${release_name}" = "trusty" ]; then + + _DAYS_UNTIL_ESM=$(ubuntu-distro-info --series "${release_name}" -yeol) + if [ "${_DAYS_UNTIL_ESM}" -lt "1" ]; then return 0 - else - _DAYS_UNTIL_ESM=$(ubuntu-distro-info --series "${release_name}" -yeol) - if [ "${_DAYS_UNTIL_ESM}" -lt "1" ]; then - return 0 - fi fi + return 1 } @@ -120,8 +119,8 @@ from uaclient.config import UAConfig from uaclient.entitlements import entitlement_factory try: - ent_cls = entitlement_factory('${service_name}') cfg = UAConfig() + ent_cls = entitlement_factory(cfg=cfg, name='${service_name}') allow_beta = cfg.features.get('allow_beta', False) print(all([ent_cls.is_beta, not allow_beta])) except Exception: @@ -199,15 +198,6 @@ cp $UA_KEYRING_DIR/$apt_key $APT_TRUSTED_KEY_DIR fi - # Migrate trusty legacy source list and preference file names - if [ "14.04" = "$VERSION_ID" ]; then - if [ -e "$ESM_APT_SOURCE_FILE_TRUSTY" ]; then - mv $ESM_APT_SOURCE_FILE_TRUSTY $ESM_INFRA_APT_SOURCE_FILE - fi - if [ -e "$ESM_APT_PREF_FILE_TRUSTY" ]; then - mv "$ESM_APT_PREF_FILE_TRUSTY" "$ESM_INFRA_APT_PREF_FILE" - fi - fi # If preference file doesn't already exist, we aren't attached. # Setup unauthenticated apt source list file and never-pin preference if [ ! -e "${apt_source_file}" ]; then @@ -237,9 +227,7 @@ install_esm_apt_key_and_source "infra" "$UBUNTU_CODENAME" fi if ! check_service_is_beta esm-apps; then - if [ "${UBUNTU_CODENAME}" != "trusty" ]; then - install_esm_apt_key_and_source "apps" "$UBUNTU_CODENAME" - fi + install_esm_apt_key_and_source "apps" "$UBUNTU_CODENAME" fi } @@ -254,11 +242,10 @@ add_notice() { - module=$1 - msg_name=$2 + msg_name=$1 /usr/bin/python3 -c " from uaclient.config import UAConfig -from uaclient.${module} import ${msg_name} +from uaclient.messages import ${msg_name} cfg = UAConfig() cfg.add_notice(label='', description=${msg_name}) " @@ -269,7 +256,7 @@ if [ ! -f "$REBOOT_CMD_MARKER_FILE" ]; then touch $REBOOT_CMD_MARKER_FILE fi - add_notice messages "$msg_name" + add_notice "$msg_name" } patch_status_json_0_1_for_non_root() { @@ -304,20 +291,13 @@ if echo "$cloud_id" | grep -E -q "^(azure|aws)"; then if echo "$fips_installed" | grep -E -q "installed"; then - add_notice status NOTICE_WRONG_FIPS_METAPACKAGE_ON_CLOUD + add_notice NOTICE_WRONG_FIPS_METAPACKAGE_ON_CLOUD fi fi } -enable_periodic_license_check() { - cloud_id=$(cloud-id 2>/dev/null) || cloud_id="" - if echo "$cloud_id" | grep -q "^gce"; then - if check_is_lts "${UBUNTU_CODENAME}"; then - if [ ! -f $MACHINE_TOKEN_FILE ]; then - touch $LICENSE_CHECK_MARKER_FILE - fi - fi - fi +rm_old_license_check_marker() { + rm -f $OLD_LICENSE_CHECK_MARKER_FILE } disable_new_timer_if_old_timer_already_disabled() { @@ -351,14 +331,40 @@ fi } -remove_old_systemd_timers() { +remove_old_systemd_units() { + PREVIOUS_PKG_VER=$1 # These are the commands that are run when the package is purged. 
# Since we actually want to remove this service from now on # we have replicated that behavior here if [ -L $OLD_MESSAGING_TIMER_MASKED_LOCATION ]; then - if [ -x "/usr/bin/deb-systemd-helper" ]; then - deb-systemd-helper purge ua-messaging.timer > /dev/null || true - deb-systemd-helper unmask ua-messaging.timer > /dev/null || true + deb-systemd-helper purge ua-messaging.timer > /dev/null || true + deb-systemd-helper unmask ua-messaging.timer > /dev/null || true + fi + if [ -L $OLD_LICENSE_CHECK_PATH_MASKED_LOCATION ]; then + if [ -d /run/systemd/system ]; then + # If the old ua-license-check.timer was running during upgrade + # then it will be in a failed state because the files were removed + # The failed state is ephemeral and only needs to be cleared if + # it is there so that the system doesn't say it is degraded. + # If the old timer was not running, then this is a noop. + systemctl --system daemon-reload > /dev/null || true + systemctl reset-failed ua-license-check.timer > /dev/null 2>&1 || true + # In rare race-condition scenarios, the service can also get into + # the same failed state. + systemctl reset-failed ua-license-check.service > /dev/null 2>&1 || true + fi + deb-systemd-helper purge ua-license-check.path > /dev/null || true + deb-systemd-helper unmask ua-license-check.path > /dev/null || true + fi + + # If we're do-release-upgrad-ing to bionic, then clean up the xenial-only + # cloud-id-shim unit + if [ "$VERSION_ID" = "18.04" ]; then + if echo "$PREVIOUS_PKG_VER" | grep -q "16.04"; then + if [ -L $XENIAL_CLOUD_ID_SHIM_UNIT_LOCATION ]; then + deb-systemd-helper purge ubuntu-advantage-cloud-id-shim.service > /dev/null || true + deb-systemd-helper unmask ubuntu-advantage-cloud-id-shim.service > /dev/null || true + fi fi fi } @@ -366,15 +372,6 @@ case "$1" in configure) PREVIOUS_PKG_VER=$2 - # Special case: legacy precise creds allowed for trusty esm - # do-release-upgrade substitutes s/precise/trusty/ in all apt sources. - # So all we need to do is rename the precise sources file to trusty. 
- # https://github.com/CanonicalLtd/ubuntu-advantage-client/issues/693 - if [ -e "$ESM_APT_SOURCE_FILE_PRECISE" ]; then - mv $ESM_APT_SOURCE_FILE_PRECISE \ - $ESM_INFRA_APT_SOURCE_FILE - fi - # We changed the way we store public files in 19.5 if dpkg --compare-versions "$PREVIOUS_PKG_VER" lt-nl "19.5~"; then # Remove all publicly-readable files @@ -400,7 +397,7 @@ # Repo for FIPS packages changed from old client if [ -f $FIPS_APT_SOURCE_FILE ]; then if grep -q $OLD_CLIENT_FIPS_PPA $FIPS_APT_SOURCE_FILE; then - add_notice messages FIPS_INSTALL_OUT_OF_DATE + add_notice FIPS_INSTALL_OUT_OF_DATE fi fi @@ -439,9 +436,10 @@ fi fi mark_reboot_for_fips_pro - enable_periodic_license_check + rm_old_license_check_marker disable_new_timer_if_old_timer_already_disabled $PREVIOUS_PKG_VER - remove_old_systemd_timers + remove_old_systemd_units $PREVIOUS_PKG_VER + /usr/lib/ubuntu-advantage/cloud-id-shim.sh || true ;; esac diff -Nru ubuntu-advantage-tools-27.8~16.04.1/debian/ubuntu-advantage-tools.postrm ubuntu-advantage-tools-27.9~16.04.1/debian/ubuntu-advantage-tools.postrm --- ubuntu-advantage-tools-27.8~16.04.1/debian/ubuntu-advantage-tools.postrm 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/debian/ubuntu-advantage-tools.postrm 2022-05-18 19:44:15.000000000 +0000 @@ -16,6 +16,7 @@ rm -f /var/log/ubuntu-advantage.log* rm -f /var/log/ubuntu-advantage-timer.log* rm -f /var/log/ubuntu-advantage-license-check.log* + rm -f /var/log/ubuntu-advantage-daemon.log* } remove_gpg_files(){ diff -Nru ubuntu-advantage-tools-27.8~16.04.1/debian/ubuntu-advantage-tools.preinst ubuntu-advantage-tools-27.9~16.04.1/debian/ubuntu-advantage-tools.preinst --- ubuntu-advantage-tools-27.8~16.04.1/debian/ubuntu-advantage-tools.preinst 1970-01-01 00:00:00.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/debian/ubuntu-advantage-tools.preinst 2022-05-18 19:44:15.000000000 +0000 @@ -0,0 +1,24 @@ +#!/bin/sh + +set -e + +remove_old_config_fields() { + PREVIOUS_PKG_VER="$1" + if dpkg --compare-versions "$PREVIOUS_PKG_VER" le "27.8"; then + if grep -q "^license_check_log_file:" /etc/ubuntu-advantage/uaclient.conf; then + sed -i '/^license_check_log_file:.*$/d' /etc/ubuntu-advantage/uaclient.conf || true + fi + fi +} + +case "$1" in + install|upgrade) + if [ -n "$2" ]; then + PREVIOUS_PKG_VER=$2 + remove_old_config_fields "$PREVIOUS_PKG_VER" + fi + ;; +esac + +#DEBHELPER# +exit 0 diff -Nru ubuntu-advantage-tools-27.8~16.04.1/demo/contracts-controller.patch ubuntu-advantage-tools-27.9~16.04.1/demo/contracts-controller.patch --- ubuntu-advantage-tools-27.8~16.04.1/demo/contracts-controller.patch 2020-10-15 14:52:16.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/demo/contracts-controller.patch 1970-01-01 00:00:00.000000000 +0000 @@ -1,149 +0,0 @@ ---- openapi_server/controllers/default_controller.py 2019-02-04 23:34:10.227503021 +0000 -+++ openapi_server/controllers/default_controller.py.new 2019-02-04 23:34:05.791511018 +0000 -@@ -24,6 +24,58 @@ - from openapi_server.models.user_contracts_response import UserContractsResponse # noqa: E501 - from openapi_server import util - -+import datetime -+import json -+ -+from openapi_server.models.entitlement_apt_repository import EntitlementAptRepository # noqa: E501 -+from openapi_server.models.entitlement_livepatch import EntitlementLivepatch # noqa: E501 -+ -+ -+ -+CREDS_FILE = '/root/entitlement-creds.json' -+with open(CREDS_FILE) as stream: -+ creds = json.loads(stream.read()) -+ -+now = datetime.datetime.utcnow() -+contract_expiry = now + 
datetime.timedelta(days=100) -+entitlement_expiry = now + datetime.timedelta(days=1) -+entitlement_expiry_str = entitlement_expiry.strftime('%Y-%m-%dT%H:%M:%S.%fZ') -+revoked_date = now + datetime.timedelta(hours=1) -+machine_token_expiry = now + datetime.timedelta(days=5) -+ -+ -+entitlementESM = EntitlementAptRepository( -+ entitled=True, type='esm', affordances=[{'series': ['trusty', 'xenial', 'bionic']}], -+ directives={'serviceURL': 'https://private-ppa.launchpad.net/canonical-server/uaclient-test', 'aptKey': '94E187AD53A59D1847E4880F8A295C4FB8B190B7'}) -+entitlementFIPS = EntitlementAptRepository(entitled=True, type='fips', affordances=[{'series': ['xenial']}], directives={'serviceURL': 'https://private-ppa.launchpad.net/ubuntu-advantage/fips', 'aptKey': 'A166877412DAC26E73CEBF3FF6C280178D13028C'}) -+entitlementFIPSUpdates = EntitlementAptRepository(entitled=True, type='fips-updates', affordances=[{'series': ['xenial']}], directives={'serviceURL': 'https://private-ppa.launchpad.net/ubuntu-advantage/fips-updates', 'aptKey': 'A166877412DAC26E73CEBF3FF6C280178D13028C'}) -+entitlementLivepatch = EntitlementLivepatch(entitled=True, type='livepatch', affordances=[ -+ {'kernelFlavors': ['generic', 'aws', 'gcp', 'azure', 'ibm'], -+ 'series': ['trusty', 'xenial', 'bionic', 'cosmic', 'disco']}]) -+contract1 = ContractInfo( -+ name='blackberry/desktop', -+ id='cid_1', -+ created_at=now, -+ effective_from=now, -+ effective_to=contract_expiry, -+ resource_entitlements={ -+ 'fips': entitlementFIPS, 'esm': entitlementESM, -+ 'fips-updates': entitlementFIPSUpdates, 'livepatch': entitlementLivepatch}) -+ -+ -+ -+machinetokeninfo1 = MachineTokenInfo( -+ created_at = now, -+ expires = machine_token_expiry, -+ machine_id='remote_machine_1', contract_info=contract1) -+machinetokeninfo2 = MachineTokenInfo( # disabled -+ revoked_at = revoked_date, -+ created_at = now, -+ expires = machine_token_expiry, -+ machine_id='remote_machine_1', contract_info=contract1) -+addContractMachineResponse = AddContractMachineResponse(machine_token='sekret1', machine_token_info=machinetokeninfo1) -+account1 = AccountInfo(id='aid_1', name='Blackberry Limited') -+ - - def add_account(new_account_params=None): # noqa: E501 - """add_account -@@ -86,7 +138,7 @@ - """ - if connexion.request.is_json: - add_contract_machine_body = AddContractMachineBody.from_dict(connexion.request.get_json()) # noqa: E501 -- return 'do some magic!' -+ return addContractMachineResponse - - - def add_contract_token(contract, body=None): # noqa: E501 -@@ -131,7 +183,7 @@ - - :rtype: None - """ -- return 'do some magic!' -+ return 'Do some magic!' - - - def find_account(name=None, id=None, admin_user=None, user=None): # noqa: E501 -@@ -150,7 +202,7 @@ - - :rtype: List[AccountInfo] - """ -- return 'do some magic!' -+ return [account1] - - - def find_account_contract(account): # noqa: E501 -@@ -163,7 +215,7 @@ - - :rtype: List[AccountContractInfo] - """ -- return 'do some magic!' -+ return [AccountContractInfo(account_info=account1, contract_info=contract1)] - - - def find_account_id(account): # noqa: E501 -@@ -189,7 +241,7 @@ - - :rtype: List[AccountUserAccess] - """ -- return 'do some magic!' -+ return [AccountUserAccess(user_id=42, user_access='delegated')] - - - def find_account_user_access_id(account, user): # noqa: E501 -@@ -230,7 +282,7 @@ - - :rtype: List[MachineTokenInfo] - """ -- return 'do some magic!' 
-+ return [machinetokeninfo1] - - - def find_contract_token(contract, user): # noqa: E501 -@@ -348,7 +400,23 @@ - - :rtype: GetResourceMachineAccessResponse - """ -- return 'do some magic!' -+ responses = { -+ 'fips': GetResourceMachineAccessResponse( -+ entitlement=entitlementFIPS, -+ resource_token=creds['fips']), -+ 'fips-updates': GetResourceMachineAccessResponse( -+ entitlement=entitlementFIPSUpdates, -+ resource_token=creds['fips-updates']), -+ 'esm': GetResourceMachineAccessResponse( -+ entitlement=entitlementESM, -+ resource_token=creds['esm']), -+ 'livepatch': GetResourceMachineAccessResponse( -+ entitlement=entitlementLivepatch, -+ resource_token=creds['livepatch'])} -+ -+ if resource in responses: -+ return responses[resource], 200, {'Expires': entitlement_expiry_str} -+ return 'invalid resource requested %s' % resource - - - def get_user_accounts(): # noqa: E501 -@@ -411,7 +479,7 @@ - - :rtype: MachineTokenInfo - """ -- return 'do some magic!' -+ return machinetokeninfo2 - - - def revoke_contract_token_id(contract, user, token): # noqa: E501 diff -Nru ubuntu-advantage-tools-27.8~16.04.1/demo/contracts-implemented-controller.patch ubuntu-advantage-tools-27.9~16.04.1/demo/contracts-implemented-controller.patch --- ubuntu-advantage-tools-27.8~16.04.1/demo/contracts-implemented-controller.patch 2020-10-15 14:52:16.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/demo/contracts-implemented-controller.patch 1970-01-01 00:00:00.000000000 +0000 @@ -1,132 +0,0 @@ ---- openapi_server/controllers/default_controller.py.orig 2019-02-08 17:08:43.611137024 +0000 -+++ openapi_server/controllers/default_controller.py 2019-02-08 20:47:58.262713336 +0000 -@@ -28,6 +28,62 @@ - add_contract_body = AddContractBody.from_dict(connexion.request.get_json()) # noqa: E501 - return 'do some magic!' 
- -+import datetime -+import json -+ -+from openapi_server.models.account_contract_info import AccountContractInfo # noqa: E501 -+from openapi_server.models.account_info import AccountInfo # noqa: E501 -+from openapi_server.models.contract_info import ContractInfo # noqa: E501 -+from openapi_server.models.contract_token_info import ContractTokenInfo # noqa: E501 -+from openapi_server.models.entitlement_apt_repository import EntitlementAptRepository # noqa: E501 -+from openapi_server.models.entitlement_livepatch import EntitlementLivepatch # noqa: E501 -+from openapi_server.models.machine_token_info import MachineTokenInfo # noqa: E501 -+ -+ -+ -+CREDS_FILE = '/root/entitlement-creds.json' -+with open(CREDS_FILE) as stream: -+ creds = json.loads(stream.read()) -+ -+now = datetime.datetime.utcnow() -+contract_expiry = now + datetime.timedelta(days=100) -+entitlement_expiry = now + datetime.timedelta(days=1) -+entitlement_expiry_str = entitlement_expiry.strftime('%Y-%m-%dT%H:%M:%S.%fZ') -+machine_token_expiry = now + datetime.timedelta(days=5) -+ -+ -+entitlementESM = EntitlementAptRepository( -+ entitled=True, type='esm', affordances=[{'series': ['trusty', 'xenial', 'bionic']}], -+ directives={'serviceURL': 'https://private-ppa.launchpad.net/canonical-server/uaclient-test', 'aptKey': '94E187AD53A59D1847E4880F8A295C4FB8B190B7'}) -+entitlementFIPS = EntitlementAptRepository(entitled=True, type='fips', affordances=[{'series': ['xenial']}], directives={'serviceURL': 'https://private-ppa.launchpad.net/ubuntu-advantage/fips', 'aptKey': 'A166877412DAC26E73CEBF3FF6C280178D13028C'}) -+entitlementFIPSUpdates = EntitlementAptRepository(entitled=True, type='fips-updates', affordances=[{'series': ['xenial']}], directives={'serviceURL': 'https://private-ppa.launchpad.net/ubuntu-advantage/fips-updates', 'aptKey': 'A166877412DAC26E73CEBF3FF6C280178D13028C'}) -+entitlementLivepatch = EntitlementLivepatch(entitled=True, type='livepatch', affordances=[ -+ {'kernelFlavors': ['generic', 'aws', 'gcp', 'azure', 'ibm'], -+ 'series': ['trusty', 'xenial', 'bionic', 'cosmic', 'disco']}]) -+contract1 = ContractInfo( -+ name='blackberry/desktop', -+ id='cid_1', -+ created_at=now, -+ effective_from=now, -+ effective_to=contract_expiry, -+ resource_entitlements={ -+ 'fips': entitlementFIPS, 'esm': entitlementESM, -+ 'fips-updates': entitlementFIPSUpdates, 'livepatch': entitlementLivepatch}) -+ -+ -+contracttokeninfo = ContractTokenInfo( -+ contract_info=contract1, expires=machine_token_expiry) -+addContractTokenResponse = AddContractTokenResponse(contract_token='contract_sekret1', contract_token_info=contracttokeninfo) -+ -+machinetokeninfo1 = MachineTokenInfo( -+ expires = machine_token_expiry, -+ machine_id='remote_machine_1', contract_info=contract1) -+machinetokeninfo2 = MachineTokenInfo( # disabled -+ expires = machine_token_expiry, -+ machine_id='remote_machine_1', contract_info=contract1) -+addContractMachineResponse = AddContractMachineResponse(machine_token='sekret1', machine_token_info=machinetokeninfo1) -+account1 = AccountInfo(id='aid_1', name='Blackberry Limited') -+ - - def add_contract_machine(add_contract_machine_body=None): # noqa: E501 - """add_contract_machine -@@ -41,7 +97,7 @@ - """ - if connexion.request.is_json: - add_contract_machine_body = AddContractMachineBody.from_dict(connexion.request.get_json()) # noqa: E501 -- return 'do some magic!' 
-+ return addContractMachineResponse - - - def add_contract_token(contract, body=None): # noqa: E501 -@@ -56,7 +112,7 @@ - - :rtype: AddContractTokenResponse - """ -- return 'do some magic!' -+ return addContractTokenResponse - - - def get_account_contracts(account): # noqa: E501 -@@ -69,7 +125,7 @@ - - :rtype: GetContractsResponse - """ -- return 'do some magic!' -+ return GetContractsResponse(contracts=[AccountContractInfo(account_info=account1, contract_info=contract1)]) - - - def get_accounts(): # noqa: E501 -@@ -80,7 +136,7 @@ - - :rtype: GetAccountsResponse - """ -- return 'do some magic!' -+ return GetAccountsResponse(accounts=[account1]) - - - def get_canonical_sso_macaroon(): # noqa: E501 -@@ -91,7 +147,7 @@ - - :rtype: GetCanonicalSSOMacaroonResponse - """ -- return 'do some magic!' -+ return GetCanonicalSSOMacaroonResponse(macaroon='MDAwZWxvY2F0aW9uIAowMDQ3aWRlbnRpZmllciBBd29RNEdTQkI5RzRJYmFjeGh0eUZudU5hUklCTUJvUkNnaHpjMjlzYjJkcGJoSUZiRzluYVc0CjAwMzFjaWQgdGltZS1iZWZvcmUgMjAxOS0wMi0xNVQxNzoyODowNC4xMjAxMDg5WgowMTdhY2lkIHsic2VjcmV0Ijoic2ZMc0Y4c1VSUDJBV3Q4NnJva0g0M1NWQXdjcUo2aE1QdTExY3lnQUswR1dDamN4N0REYktKUWFMRSsxUnpSSXlLSWUxS05kVXpGb0ZoNXZNY1FHRnpKVXNaRlZjUlBsMkFCUjgydU9FVnZXK1FNMGJuNWxHVjZ1UWhiUGFLdDdINmd3OGxMV3JreGw0N0FZOVd3THNhR244MFRNdVB1Ym9zc3kweGNsTVVmVS9lR2ZNTTZuQ1JIUzZnMDJ6cUI1YXJkWGFqQTZHQU5NemRpRDhUU2J1dnNlaVhINExVYUFyRTlKNVdSck5sVndYazkrcTZwMzZNUUFsUlg0N1FxZWZuOGpkWkkxTnFENWxHaUh0SzV2TFN2Rk1wMkk0L29zZ01ObHBQWFk1VENTRkJHM2dNU09SM3dpNzVPREZ3N1RUQ2E4cjFkRGcyV1BtWjU3VjZWR3hRPT0iLCJ2ZXJzaW9uIjoxfQowMDUxdmlkIA48kKeLV6UH_kWeflAt6HpgAjI9MEIXOZi4tEMeHYHMWpxGddwpfsgIobKnhJHFoEx00unMqrFHFmaNM4mTyHZ-ATJ6EcjGigowMDE4Y2wgbG9naW4udWJ1bnR1LmNvbQowMDJmc2lnbmF0dXJlINKtgxrlCulP1BT3sUKITGSIp5yRm64fhhe6Iv8iGq-9Cg') - - - def get_resource_machine_access(resource, machine): # noqa: E501 -@@ -106,4 +162,20 @@ - - :rtype: GetResourceMachineAccessResponse - """ -- return 'do some magic!' -+ responses = { -+ 'fips': GetResourceMachineAccessResponse( -+ entitlement=entitlementFIPS, -+ resource_token=creds['fips']), -+ 'fips-updates': GetResourceMachineAccessResponse( -+ entitlement=entitlementFIPSUpdates, -+ resource_token=creds['fips-updates']), -+ 'esm': GetResourceMachineAccessResponse( -+ entitlement=entitlementESM, -+ resource_token=creds['esm']), -+ 'livepatch': GetResourceMachineAccessResponse( -+ entitlement=entitlementLivepatch, -+ resource_token=creds['livepatch'])} -+ -+ if resource in responses: -+ return responses[resource], 200, {'Expires': entitlement_expiry_str} -+ return 'invalid resource requested %s' % resource diff -Nru ubuntu-advantage-tools-27.8~16.04.1/demo/demo-contract-service ubuntu-advantage-tools-27.9~16.04.1/demo/demo-contract-service --- ubuntu-advantage-tools-27.8~16.04.1/demo/demo-contract-service 2020-10-15 14:52:16.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/demo/demo-contract-service 1970-01-01 00:00:00.000000000 +0000 @@ -1,112 +0,0 @@ -#!/bin/bash - -# Simple script used to setup and patch up ua-service contract backend api with -# sample data in an lxc. - -# This script will be killed once implementation of Contract Service API is -# functional in the repo https://github.com/CanonicalLtd/ua-service - -# 01/07/2019: ATM only an OpenAPI spec is available for the contract service. 
-upstream=${1:-CanonicalLtd} -server_type=${2:-docker} -if [ "$upstream" != "canonical-server" -a "$upstream" != "CanonicalLtd" ]; then - echo "Invalid upstream value $upstream, expected canonical-server or CanonicalLtd" - exit 1 -fi -NETRC=~/.netrc -LXC_NAME=contract-demo-bionic -if [ ! -d ua-contracts ]; then - git clone git@github.com:$upstream/ua-contracts.git -fi - -CREDS_FILE="./demo/entitlement-creds.json" -echo -n "Enter your LaunchpadID: " -read LP_ID -USERCREDS_FILE="$CREDS_FILE.$LP_ID" -if [ ! -f $USERCREDS_FILE ]; then - echo -n "Configuring local $CREDS_FILE to seed demo contract service" - - echo "Find PPA credentials (user:passwd) by clicking the 'View' links next to the named PPA at: -https://launchpad.net/~$LP_ID/+archivesubscriptions/" - - echo -n "Enter your CIS Security Benchmarks (ppa:ubuntu-advantage/security-benchmarks) (user.name:key): " - read CIS_TOKEN - echo -n "Enter your ESM Staging creds (user.name:key): " - read ESM_TOKEN - echo -n "Enter your FIPS ppa creds (user.name:key): " - read FIPS_TOKEN - echo -n "Enter your FIPS Updates ppa creds (user.name:key): " - read FIPS_UPDATES_TOKEN - echo -n "Enter your Livepatch token from https://auth.livepatch.canonical.com/: " - read LIVEPATCH_TOKEN - - sed "s/%LIVEPATCH_CRED%/${LIVEPATCH_TOKEN}/; s/%FIPS_CRED%/$FIPS_TOKEN/; s/%FIPS_UPDATES_CRED%/$FIPS_UPDATES_TOKEN/; s/%ESM_CRED%/$ESM_TOKEN/; s/%CIS_CRED%/$CIS_TOKEN/" $CREDS_FILE > $USERCREDS_FILE - cat > ${CREDS_FILE/json/sh} <> .bashrc' - lxc exec $LXC_NAME -- sh -c 'echo export PATH=\$GOPATH/bin:/usr/local/go/bin:\$PATH >> .bashrc' - echo -e "Running demo contract server API with:\nlxc exec $LXC_NAME /root/runserver.sh" - lxc exec $LXC_NAME /root/runserver.sh -fi -VM_IP=`lxc list -c n4 $LXC_NAME | grep eth0 | awk '{print $3}'` -CONTRACT_URL="http:\/\/$VM_IP:3000" - -echo "Changing uaclient-devel.conf to point to your lxc @ $CONTRACT_URL" -sed -i "s/contract_url.*/contract_url: '$CONTRACT_URL'/" uaclient-devel.conf -# Rename devel config to $LXC_NAME/etc/ubuntu-advantage/uaclient.conf -lxc file push uaclient-devel.conf $LXC_NAME/etc/ubuntu-advantage/uaclient.conf - -echo "To enable bootstrapped admin user to change contract details..." 
-echo curl -X PUT -u \"admin:password1234\" ${CONTRACT_URL//\\/}/acl/product -H \"Content-Type: application/json\" -d \'{\"users\": [\"admin\"]}\' -echo -echo "To read free product:" -echo curl -u \"admin:password1234\" ${CONTRACT_URL//\\/}/v1/products/free -echo -echo "To manipulate free contract, change values in free-contract.json and run the folowing:" -echo curl -X POST -u \"admin:password1234\" ${CONTRACT_URL//\\/}/v1/products/free -H \"Content-Type: application/json\" -d @free-contract.json - diff -Nru ubuntu-advantage-tools-27.8~16.04.1/demo/entitlement-creds.json ubuntu-advantage-tools-27.9~16.04.1/demo/entitlement-creds.json --- ubuntu-advantage-tools-27.8~16.04.1/demo/entitlement-creds.json 2020-10-15 14:52:16.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/demo/entitlement-creds.json 1970-01-01 00:00:00.000000000 +0000 @@ -1,2 +0,0 @@ -{"esm": "%ESM_CRED%", "fips": "%FIPS_CRED%", "fips-updates": "%FIPS_UPDATES_CRED%", "livepatch": "%LIVEPATCH_CRED%"} - diff -Nru ubuntu-advantage-tools-27.8~16.04.1/demo/README.md ubuntu-advantage-tools-27.9~16.04.1/demo/README.md --- ubuntu-advantage-tools-27.8~16.04.1/demo/README.md 2020-10-15 14:52:16.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/demo/README.md 1970-01-01 00:00:00.000000000 +0000 @@ -1,15 +0,0 @@ -# Demo - -The 'demo' directory is reserved to developer environment test tools -and scripts. Nothing in this directory will be shipped as part of the -packaging. - -## Files - -- contract*patch: patch files applied to the generated demo Contract API service to create a sample API server: To be dropped when Contract Service API is functional -- demo-contract-service: script which will launch a new bionic lxc and install a https://github.com/CanonicalLtd/ua-service API backend with real PPA/livepatch credentials and sample response data -- entitlement-creds.json: Template file containing placeholders for esm, fips, fips-updates and livepatch credentials used in seeding ua-service API responses -- install-contract-server: script to be run within a newly launched lxc to patch and generate a ua-service openapi server with sample data -- run-uaclient: TODO -- runserver.sh: TODO -- uaclient: TODO diff -Nru ubuntu-advantage-tools-27.8~16.04.1/demo/runserver.sh ubuntu-advantage-tools-27.9~16.04.1/demo/runserver.sh --- ubuntu-advantage-tools-27.8~16.04.1/demo/runserver.sh 2020-10-15 14:52:16.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/demo/runserver.sh 1970-01-01 00:00:00.000000000 +0000 @@ -1,5 +0,0 @@ -#!/bin/bash -export GOPATH=/root/go -export PATH=$GOPATH/bin:/usr/local/go/bin:$PATH -cd /root/ua-contracts -make demo diff -Nru ubuntu-advantage-tools-27.8~16.04.1/demo/run-uaclient ubuntu-advantage-tools-27.9~16.04.1/demo/run-uaclient --- ubuntu-advantage-tools-27.8~16.04.1/demo/run-uaclient 2020-10-15 14:52:16.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/demo/run-uaclient 1970-01-01 00:00:00.000000000 +0000 @@ -1,118 +0,0 @@ -#!/usr/bin/env python - -"""Use multipass or lxc to setup uaclients running ubuntu-advantage-client""" - -import argparse -import glob -import os -import re -import sys - -MACAROON_DEPS = { - 'trusty': ['python3-libnacl', 'libsodium18'], - 'xenial': ['python3-libnacl', 'libsodium18'], - 'bionic': ['python3-libnacl', 'libsodium23'], - 'disco': ['python3-libnacl', 'libsodium23']} - - -try: - from uaclient import util -except ImportError: - # Add out cwd to path for dev tools - _tdir = os.path.abspath(os.path.join(os.path.dirname(__file__), "..")) - sys.path.insert(0, _tdir) - from 
uaclient import util - -PROG='run-uaclient' - - -def get_parser(): - """Build an arg parser for run-uaclient utility.""" - parser = argparse.ArgumentParser( - prog=PROG, - description='Create an lxc or kvm runing uaclient') - parser.add_argument( - '--backend', '-b', required=False, default='lxc', - choices=['multipass', 'lxc'], - help=('The backend to talk to when creating a vm: multipass or lxc. ' - 'Default:"lxc"')) - parser.add_argument( - '--series', '-s', required=False, default='bionic', - help='The Ubuntu series to deploy in the vm. Default: bionic') - parser.add_argument( - '--name', '-n', - help='The name of the vm to create. Default: uaclient-') - return parser - - -def get_vm_name(backend, desired_name=None): - """Find the appropriate unique vm name which doesn't aleady exist.""" - vm_list, _err = util.subp([backend, 'list']) - if desired_name not in vm_list: - return desired_name - match = re.match(r'[^\d]+(\d+)', desired_name) - if not match: - base_id = 1 - desired_basename = desired_name - else: - base_id = match[0] - desired_basename = desired_name.replace(base_id, '') - while desired_name in vm_list: - base_id = int(base_id) + 1 - desired_name = '%s%d' % (desired_basename, base_id) - return desired_name - - -def create_uaclient_vm(backend, series, name=None): - """Create a uaclient named uaclient vm if absent. - - @param backend: multipass or lxc - @param series: Ubuntu series to deploy - @param name: Name of the vm - - """ - cmd = [] - if not name: - name = 'uaclient-%s' % series - name = get_vm_name(backend, name) - if series == 'trusty': - debs = glob.glob('./ubuntu-advantage-tools*14.04.1_all.deb') - else: - debs = glob.glob('./ubuntu-advantage-tools*bddeb_all.deb') - if not debs: - raise RuntimeError( - 'Found no ubuntu-advantage-debs in ./,' - ' try make deb and make deb-trusty') - deb = os.path.basename(debs[0]) - if backend == 'multipass': - util.subp(['multipass', 'launch', 'daily:%s' % series, '-n', name]) - util.subp(['multipass', 'copy-files', './%s' % deb, '%s:.' % name]) - util.subp(['multipass', 'exec', name, '--', 'sudo', 'apt-get', - 'install'] + MACAROON_DEPS[series]) - util.subp(['multipass', 'exec', name, '--', 'sudo', 'dpkg', '-i', deb]) - util.subp(['multipass', 'copy-files', 'uaclient-devel.conf', '%s:.' % name]) - util.subp(['multipass', 'exec', name, '--', 'sudo', 'mv', - './uaclient-devel.conf', - '/etc/ubuntu-advantage/uaclient.conf']) - print('Access demo uaclient with:\nmultipass exec %s -- bash -l' % name) - elif backend == 'lxc': - util.subp(['lxc', 'launch', 'ubuntu-daily:%s' % series, name]) - util.subp(['lxc', 'file', 'push', '%s' % deb, '%s/root/' % name]) - util.subp(['lxc', 'exec', name, '--', 'sudo', 'dpkg', '-i', - '/root/%s' % deb]) - util.subp(['lxc', 'file', 'push', 'uaclient-devel.conf', - '%s/etc/ubuntu-advantage/uaclient.conf' % name]) - print('Access demo uaclient with:\nlxc exec %s ua status' % name) - else: - raise ValueError("Invalid backend %s. 
Not multipass|lxc" % backend) - - -def main(): - """Tool to collect and tar all related logs.""" - parser = get_parser() - args = parser.parse_args() - create_uaclient_vm(args.backend, args.series, args.name) - - -if __name__ == '__main__': - sys.exit(main()) diff -Nru ubuntu-advantage-tools-27.8~16.04.1/demo/uaclient ubuntu-advantage-tools-27.9~16.04.1/demo/uaclient --- ubuntu-advantage-tools-27.8~16.04.1/demo/uaclient 2020-10-15 14:52:16.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/demo/uaclient 1970-01-01 00:00:00.000000000 +0000 @@ -1,2 +0,0 @@ -# rc file to setup an alias for uaclient testing -alias ua='sudo UA_CONFIG_FILE=uaclient-devel.conf python -m uaclient.cli' diff -Nru ubuntu-advantage-tools-27.8~16.04.1/dev-requirements.txt ubuntu-advantage-tools-27.9~16.04.1/dev-requirements.txt --- ubuntu-advantage-tools-27.8~16.04.1/dev-requirements.txt 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/dev-requirements.txt 2022-05-18 19:44:15.000000000 +0000 @@ -1,5 +1,5 @@ # The black and isort versions are also in .pre-commit-config.yaml; make sure # to update both together -black==19.3b0 +black==22.3.0 isort==5.8.0 pre-commit diff -Nru ubuntu-advantage-tools-27.8~16.04.1/docs/explanations/what_is_the_daemon.md ubuntu-advantage-tools-27.9~16.04.1/docs/explanations/what_is_the_daemon.md --- ubuntu-advantage-tools-27.8~16.04.1/docs/explanations/what_is_the_daemon.md 1970-01-01 00:00:00.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/docs/explanations/what_is_the_daemon.md 2022-05-18 19:44:15.000000000 +0000 @@ -0,0 +1,10 @@ +# What is the Pro Upgrade Daemon? + +UA client sets up a daemon on supported platforms (currently GCP only) to detect if an Ubuntu Pro license is purchased for the machine. If a Pro license is detected, then the machine is automatically attached. + +If you are uninterested in UA services, you can safely stop and disable the daemon using systemctl: + +``` +sudo systemctl stop ubuntu-advantage.service +sudo systemctl disable ubuntu-advantage.service +``` diff -Nru ubuntu-advantage-tools-27.8~16.04.1/docs/howtoguides/configure_proxies.md ubuntu-advantage-tools-27.9~16.04.1/docs/howtoguides/configure_proxies.md --- ubuntu-advantage-tools-27.8~16.04.1/docs/howtoguides/configure_proxies.md 1970-01-01 00:00:00.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/docs/howtoguides/configure_proxies.md 2022-05-18 19:44:15.000000000 +0000 @@ -0,0 +1,17 @@ +# How to configure proxies + +The UA Client can be configured to use an http/https proxy as needed for network requests. + +In addition, the UA Client will automatically set up proxies for all programs required for enabling Ubuntu Advantage services. This includes APT, Snaps, and Livepatch. + +Proxies can be set using the `ua config set` command. + +HTTP/HTTPS proxies are set using the fields `http_proxy` and `https_proxy`, respectively. The values for these fields will also be used for Snap and Livepatch proxies. + +APT proxies are defined separately. You can set global apt proxies that affect the whole system using the fields `apt_http_proxy` and `apt_https_proxy`. + +> Starting in to-be-released Version 27.9, APT proxies config options will change. You will be able to set global apt proxies that affect the whole system using the fields `global_apt_http_proxy` and `global_apt_https_proxy`. Alternatively, you could set apt proxies only for UA related services with the fields `ua_apt_http_proxy` and `ua_apt_https_proxy`. 
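+
+For example, a minimal sketch of routing UA, Snap, Livepatch, and APT traffic through an internal proxy might look like the following. `internal.proxy:3128` is a placeholder, so substitute your own proxy host, port, and (if needed) credentials; on 27.9 or later, use the `global_apt_*` or `ua_apt_*` fields described above instead of `apt_http_proxy`/`apt_https_proxy`:
+
+```bash
+# HTTP/HTTPS proxies used by the UA Client itself (also applied to Snap and Livepatch)
+sudo ua config set http_proxy=http://internal.proxy:3128
+sudo ua config set https_proxy=http://internal.proxy:3128
+
+# Global APT proxies for the whole system
+sudo ua config set apt_http_proxy=http://internal.proxy:3128
+sudo ua config set apt_https_proxy=http://internal.proxy:3128
+
+# Review the resulting configuration
+sudo ua config show
+```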
+
+The format for the proxy configuration values is:
+
+`<protocol>://[<username>:<password>@]<fqdn>:<port>`
diff -Nru ubuntu-advantage-tools-27.8~16.04.1/docs/howtoguides/create_pro_golden_image.md ubuntu-advantage-tools-27.9~16.04.1/docs/howtoguides/create_pro_golden_image.md
--- ubuntu-advantage-tools-27.8~16.04.1/docs/howtoguides/create_pro_golden_image.md 1970-01-01 00:00:00.000000000 +0000
+++ ubuntu-advantage-tools-27.9~16.04.1/docs/howtoguides/create_pro_golden_image.md 2022-05-18 19:44:15.000000000 +0000
@@ -0,0 +1,19 @@
+# Remastering custom golden images based on Ubuntu PRO
+
+Vendors who wish to provide custom images based on Ubuntu PRO images can
+follow the procedure below:
+
+* Launch the Ubuntu PRO golden image
+* Customize your golden image as you see fit
+* If `ua status` shows attached, remove the UA artifacts to allow clean
+  auto-attach on subsequent cloned VM launches
+```bash
+sudo ua detach
+sudo rm -rf /var/log/ubuntu-advantage.log # to remove credentials and tokens from logs
+```
+* Remove `cloud-init` first boot artifacts so the cloned VM boot is seen as a first boot
+```bash
+sudo cloud-init clean --logs
+sudo shutdown -h now
+```
+* Use your cloud platform to clone or snapshot this VM as a golden image
diff -Nru ubuntu-advantage-tools-27.8~16.04.1/docs/howtoguides/enable_ua_in_dockerfile.md ubuntu-advantage-tools-27.9~16.04.1/docs/howtoguides/enable_ua_in_dockerfile.md
--- ubuntu-advantage-tools-27.8~16.04.1/docs/howtoguides/enable_ua_in_dockerfile.md 1970-01-01 00:00:00.000000000 +0000
+++ ubuntu-advantage-tools-27.9~16.04.1/docs/howtoguides/enable_ua_in_dockerfile.md 2022-05-18 19:44:15.000000000 +0000
@@ -0,0 +1,129 @@
+# How to Enable Ubuntu Advantage Services in a Dockerfile
+
+> Requires UA Client version 27.7
+
+Ubuntu Advantage (UA) comes with several services, some of which can be useful in docker. For example, Extended Security Maintenance of packages and FIPS certified packages may be desirable in a docker image. In this how-to guide, we show how you can use the `ua` tool to take advantage of these services in your Dockerfile.
+
+
+## Step 1: Create a UA Attach Config file
+
+> Warning: the UA Attach Config file will contain your UA Contract token and should be treated as a secret file.
+
+An attach config file for ua is a yaml file that specifies some options when running `ua attach`. The file has two fields, `token` and `enable_services`, and looks something like this:
+
+```yaml
+token: TOKEN
+enable_services:
+  - service1
+  - service2
+  - service3
+```
+
+The `token` field is required and must be set to your UA token, which you can get by signing into [ubuntu.com/advantage](https://ubuntu.com/advantage).
+
+The `enable_services` field value is a list of UA service names. When it is set, the services specified will be automatically enabled after attaching with your UA token.
+
+Service names that you may be interested in enabling in your docker builds include:
+- `esm-infra`
+- `esm-apps`
+- `fips`
+- `fips-updates`
+
+You can find out more about these services by running `ua help service-name` on any Ubuntu machine.
+
+
+## Step 2: Create a Dockerfile to use `ua` and your attach config file
+
+Your Dockerfile is going to look something like this.
+
+There are comments inline explaining each line.
+
+```dockerfile
+# Base off of the LTS of your choice
+FROM ubuntu:focal
+
+# We mount a BuildKit secret here to access the attach config file which should
+# be kept separate from the Dockerfile and managed in a secure fashion since it
+# needs to contain your UA token.
+# In the next step, we demonstrate how to pass the file as a secret when +# running docker build. +RUN --mount=type=secret,id=ua-attach-config \ + + # First we update apt so we install the correct versions of packages in + # the next step + apt-get update \ + + # Here we install `ua` (ubuntu-advantage-tools) as well as ca-certificates, + # which is required to talk to the UA authentication server securely. + && apt-get install --no-install-recommends -y ubuntu-advantage-tools ca-certificates \ + + # With ua installed, we attach using our attach config file from the + # previous step + && ua attach --attach-config /run/secrets/ua-attach-config \ + + ########################################################################### + # At this point, the container has access to all UA services specified in + # the attach config file. + ########################################################################### + + # Always upgrade all packages to the latest available version with the UA services + # enabled. + && apt-get upgrade -y \ + + # Then, you can install any specific packages you need for your docker + # container. + # Install them here, while UA is enabled, so that you get the appropriate + # versions. + # Any `apt-get install ...` commands you have in an existing Dockerfile + # that you may be migrating to use UA should probably be moved here. + && apt-get install -y openssl \ + + ########################################################################### + # Now that we've upgraded and installed any packages from the UA services, + # we can clean up. + ########################################################################### + + # This purges ubuntu-advantage-tools, including all UA related secrets from + # the system. + ########################################################################### + # IMPORTANT: As written here, this command assumes your container does not + # need ca-certificates so it is purged as well. + # If your container needs ca-certificates, then do not purge it from the + # system here. + ########################################################################### + && apt-get purge --auto-remove -y ubuntu-advantage-tools ca-certificates \ + + # Finally, we clean up the apt lists which shouldn't be needed anymore + # because any `apt-get install`s should've happened above. Cleaning these + # lists keeps your image smaller. + && rm -rf /var/lib/apt/lists/* + + +# Now, with all of your ubuntu apt packages installed, including all those +# from UA services, you can continue the rest of your app-specific Dockerfile. +``` + +An important point to note about the above Dockerfile is that all of the `apt` and `ua` commands happen inside of one Dockerfile `RUN` instruction. This is critical and must not be changed. Keeping everything as written inside of one `RUN` instruction has two key benefits: + +1. Prevents any UA Subscription-related tokens and secrets from being leaked in an image layer +2. Keeps the image as small as possible by cleaning up extra packages and files before the layer is finished. + +> Note: These benefits could also be attained by squashing the image. + +## Step 3: Build the Docker image + + +Now, with our attach config file and Dockerfile created, we can build the image with a command like the following + +```bash +DOCKER_BUILDKIT=1 docker build . --secret id=ua-attach-config,src=ua-attach-config.yaml -t ubuntu-focal-ua +``` + +There are two important pieces of this command. + +1. We enable BuildKit with `DOCKER_BUILDKIT=1`. 
This is necessary to support the secret mount feature.
+2. We use the secret mount feature of BuildKit with `--secret id=ua-attach-config,src=ua-attach-config.yaml`. This is what passes our attach config file in to be securely used by the `RUN --mount=type=secret,id=ua-attach-config` command in the Dockerfile.
+
+## Success
+
+Congratulations! At this point, you should have a docker image that has been built with UA packages installed from whichever UA service you required.
diff -Nru ubuntu-advantage-tools-27.8~16.04.1/docs/howtoguides/update_motd_messages.md ubuntu-advantage-tools-27.9~16.04.1/docs/howtoguides/update_motd_messages.md
--- ubuntu-advantage-tools-27.8~16.04.1/docs/howtoguides/update_motd_messages.md 1970-01-01 00:00:00.000000000 +0000
+++ ubuntu-advantage-tools-27.9~16.04.1/docs/howtoguides/update_motd_messages.md 2022-05-18 19:44:15.000000000 +0000
@@ -0,0 +1,12 @@
+# How to update MOTD and APT messages
+
+Since ubuntu-advantage-tools is responsible for enabling ESM services, we advertise them in several
+places throughout the system, such as MOTD and APT commands like `apt upgrade`.
+
+To verify that the APT and MOTD messages advertise the ESM packages, ensure that the ESM
+source list files are present on the system. If they are, run the following command to
+update the state of the MOTD and APT messages:
+
+```sh
+ua refresh messages
+```
diff -Nru ubuntu-advantage-tools-27.8~16.04.1/docs/references/support_matrix.md ubuntu-advantage-tools-27.9~16.04.1/docs/references/support_matrix.md
--- ubuntu-advantage-tools-27.8~16.04.1/docs/references/support_matrix.md 1970-01-01 00:00:00.000000000 +0000
+++ ubuntu-advantage-tools-27.9~16.04.1/docs/references/support_matrix.md 2022-05-18 19:44:15.000000000 +0000
@@ -0,0 +1,19 @@
+# Support Matrix for the client
+
+Ubuntu Advantage services are only available on Ubuntu Long Term Support (LTS) releases.
+
+On interim Ubuntu releases, `ua status` will report most of the services as 'n/a' and disallow enabling those services.
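+
+As a quick way to see how this applies to the release you are running, a sketch like the one below prints each service and whether it is available on the current machine. It assumes `jq` is installed and that the machine-readable status output contains the `name` and `available` fields used by 27.x clients; inspect `ua status --all --format json` on your version if the fields differ:
+
+```bash
+# List each UA service and whether it is available on this release
+ua status --all --format json | jq -r '.services[] | "\(.name): \(.available)"'
+```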
+ +Below is a list of platforms and releases ubuntu-advantage-tools supports + +| Ubuntu Release | Build Architectures | Support Level | +| -------------- | -------------------------------------------------- | -------------------------- | +| Trusty | amd64, arm64, armhf, i386, powerpc, ppc64el | Last release 19.6 | +| Xenial | amd64, arm64, armhf, i386, powerpc, ppc64el, s390x | Active SRU of all features | +| Bionic | amd64, arm64, armhf, i386, ppc64el, s390x | Active SRU of all features | +| Focal | amd64, arm64, armhf, ppc64el, riscv64, s390x | Active SRU of all features | +| Groovy | amd64, arm64, armhf, ppc64el, riscv64, s390x | Last release 27.1 | +| Hirsute | amd64, arm64, armhf, ppc64el, riscv64, s390x | Last release 27.5 | +| Impish | amd64, arm64, armhf, ppc64el, riscv64, s390x | Active SRU of all features | + +Note: ppc64el will not have all APT messaging due to insufficient golang support diff -Nru ubuntu-advantage-tools-27.8~16.04.1/docs/tutorials/create_a_fips_docker_image.md ubuntu-advantage-tools-27.9~16.04.1/docs/tutorials/create_a_fips_docker_image.md --- ubuntu-advantage-tools-27.8~16.04.1/docs/tutorials/create_a_fips_docker_image.md 1970-01-01 00:00:00.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/docs/tutorials/create_a_fips_docker_image.md 2022-05-18 19:44:15.000000000 +0000 @@ -0,0 +1,99 @@ +# Create an Ubuntu FIPS Docker image + +> Requires UA Client version 27.7 + +## Step 1: Acquire your Ubuntu Advantage (UA) token + +Your UA token can be found on your Ubuntu Advantage dashboard. To access your dashboard, you need an [Ubuntu One](https://login.ubuntu.com/) account. If you purchased a UA subscription and don't yet have an Ubuntu One account, be sure to use the same email address used to purchase your subscription. If you haven't purchased a UA subscription, don't worry! You get a free token for personal use with your Ubuntu One account, no purchase necessary. + +The Ubuntu One account functions as a Single Sign On, so once logged in we can go straight to the Ubuntu Advantage dashboard at [ubuntu.com/advantage](https://ubuntu.com/advantage). Then we should see a list of our subscriptions (including the free for personal use subscription) in the left-hand column. Click on the subscription that you wish to use for this tutorial if it is not already selected. On the right we will now see the details of our subscription including our secret token under the "Subscription" header next to the "🔗" symbol. + +> Warning: The UA token should be kept secret. It is used to uniquely identify your Ubuntu Advantage subscription. + +## Step 2: Create a UA Attach Config file + +First create a directory for this tutorial. + +```bash +mkdir ua_fips_tutorial +cd ua_fips_tutorial +``` + +Create a file named `ua-attach-config.yaml`. + +```bash +touch ua-attach-config.yaml +``` + +Edit the file and add the following contents: + +```yaml +token: YOUR_TOKEN +enable_services: + - fips +``` + +Replace `YOUR_TOKEN` with the UA token we got from [ubuntu.com/advantage](https://ubuntu.com/advantage) in Step 1. + +## Step 3: Create a Dockerfile + +Create a file named `Dockerfile`. 
+ +```bash +touch Dockerfile +``` + +Edit the file and add the following contents: + +```dockerfile +FROM ubuntu:focal + +RUN --mount=type=secret,id=ua-attach-config \ + apt-get update \ + && apt-get install --no-install-recommends -y ubuntu-advantage-tools ca-certificates \ + && ua attach --attach-config /run/secrets/ua-attach-config \ + + && apt-get upgrade -y \ + && apt-get install -y openssl libssl1.1 libssl1.1-hmac libgcrypt20 libgcrypt20-hmac strongswan strongswan-hmac openssh-client openssh-server \ + + && apt-get purge --auto-remove -y ubuntu-advantage-tools ca-certificates \ + && rm -rf /var/lib/apt/lists/* +``` + +This Dockerfile will enable FIPS in the container, upgrade all packages and install the FIPS version of `openssl`. For more details on how this works, see [How to Enable UA Services in a Dockerfile](../howtoguides/enable_ua_in_dockerfile.md) + +## Step 4: Build the Docker image + +Build the docker image with the following command: + +```bash +DOCKER_BUILDKIT=1 docker build . --secret id=ua-attach-config,src=ua-attach-config.yaml -t ubuntu-bionic-fips +``` + +This will pass the attach-config as a [BuildKit Secret](https://docs.docker.com/develop/develop-images/build_enhancements/#new-docker-build-secret-information) so that the finished docker image will not contain your UA token. + +## Step 5: Test the Docker image + +> Warning: The docker image isn't considered fully FIPS compliant unless it is running on a host Ubuntu machine that is FIPS compliant. + +Let's check to make sure the FIPS version of openssl is installed in the container. + +```bash +docker run -it ubuntu-bionic-fips dpkg-query --show openssl +``` +Should show something like `openssl 1.1.1-1ubuntu2.fips.2.1~18.04.6.2` (notice "fips" in the version name). + +We can now use the build docker image's FIPS compliant `openssl` to connect to `https://ubuntu.com`. + +```bash +docker run -it ubuntu-bionic-fips sh -c "echo | openssl s_client -connect ubuntu.com:443" +``` + +That should print information about the certificates of ubuntu.com and the algorithms used during the TLS handshake. + + +## Success + +That's it! You could now push this image to a private registry and use it as the base of other docker images using `FROM`. + +If you want to learn more about how the steps in this tutorial work, take a look at the more generic [How to Enable UA Services in a Dockerfile](../howtoguides/enable_ua_in_dockerfile.md). diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/attached_commands.feature ubuntu-advantage-tools-27.9~16.04.1/features/attached_commands.feature --- ubuntu-advantage-tools-27.8~16.04.1/features/attached_commands.feature 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/attached_commands.feature 2022-05-18 19:44:15.000000000 +0000 @@ -16,6 +16,7 @@ """ Successfully processed your ua configuration. Successfully refreshed your subscription. + Successfully updated UA related APT and MOTD messages. """ When I run `ua refresh config` with sudo Then I will see the following on stdout: @@ -27,6 +28,11 @@ """ Successfully refreshed your subscription. """ + When I run `ua refresh messages` with sudo + Then I will see the following on stdout: + """ + Successfully updated UA related APT and MOTD messages. 
+ """ When I run `python3 /usr/lib/ubuntu-advantage/timer.py` with sudo And I run `sh -c "ls /var/log/ubuntu-advantage* | sort -d"` as non-root Then stdout matches regexp: @@ -132,9 +138,11 @@ | xenial | cc-eal, cis, esm-apps, esm-infra, fips, fips-updates, livepatch,\nrealtime-kernel, ros, ros-updates. | | bionic | cc-eal, cis, esm-apps, esm-infra, fips, fips-updates, livepatch,\nrealtime-kernel, ros, ros-updates. | | focal | cc-eal, esm-apps, esm-infra, fips, fips-updates, livepatch, realtime-kernel,\nros, ros-updates, usg. | + | jammy | cc-eal, cis, esm-apps, esm-infra, fips, fips-updates, livepatch,\nrealtime-kernel, ros, ros-updates. | @series.xenial @series.bionic + @series.jammy @uses.config.machine_type.lxd.container Scenario Outline: Attached disable of a service in a ubuntu machine Given a `` machine with ubuntu-advantage-tools installed @@ -170,8 +178,9 @@ Examples: ubuntu release | release | - | bionic | | xenial | + | bionic | + | jammy | @series.focal @uses.config.machine_type.lxd.container @@ -240,7 +249,7 @@ esm-infra +yes +UA Infra: Extended Security Maintenance \(ESM\) fips + +NIST-certified core packages fips-updates + +NIST-certified core packages with priority security updates - livepatch +yes +Canonical Livepatch service + livepatch +(yes|no) +Canonical Livepatch service realtime-kernel + +Beta-version Ubuntu Kernel with PREEMPT_RT patches ros + +Security Updates for the Robot Operating System ros-updates + +All Updates for the Robot Operating System @@ -285,6 +294,7 @@ | xenial | yes | yes | yes | yes | yes | yes | cis | no | | bionic | yes | yes | yes | yes | yes | yes | cis | no | | focal | yes | no | yes | yes | yes | no | usg | no | + | jammy | yes | no | no | no | no | no | cis | yes | @series.all @uses.config.machine_type.lxd.container @@ -427,8 +437,9 @@ Examples: ubuntu release | release | - | bionic | | xenial | + | bionic | + | jammy | @series.focal @uses.config.machine_type.lxd.container @@ -572,7 +583,7 @@ | bionic | enabled | | xenial | enabled | | impish | n/a | - | jammy | n/a | + | jammy | enabled | @series.focal @uses.config.machine_type.lxd.container @@ -686,7 +697,7 @@ Updating package lists APT update failed. APT update failed to read APT config for the following URL: - - http://ppa.launchpad.net/cloud-init-dev/daily/ubun + - http(s)?://ppa.launchpad(content)?.net/cloud-init-dev/daily/ubun """ Examples: ubuntu release @@ -694,6 +705,7 @@ | xenial | cloud-init-dev-ubuntu-daily-xenial | | bionic | cloud-init-dev-ubuntu-daily-bionic | | focal | cloud-init-dev-ubuntu-daily-focal | + | jammy | cloud-init-dev-ubuntu-daily-jammy | @series.all @uses.config.machine_type.lxd.container @@ -821,14 +833,12 @@ ua-auto-attach.path.txt(-error)? ua-auto-attach.service.txt(-error)? 
uaclient.conf - ua-license-check.path.txt - ua-license-check.service.txt - ua-license-check.timer.txt ua-reboot-cmds.service.txt ua-status.json ua-timer.service.txt ua-timer.timer.txt ubuntu-advantage.log + ubuntu-advantage.service.txt ubuntu-advantage-timer.log ubuntu-esm-apps.list ubuntu-esm-infra.list @@ -838,6 +848,7 @@ | xenial | | bionic | | focal | + | jammy | @series.lts @uses.config.machine_type.lxd.container @@ -908,6 +919,7 @@ | xenial | | bionic | | focal | + | jammy | @series.xenial @series.bionic diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/attached_enable.feature ubuntu-advantage-tools-27.9~16.04.1/features/attached_enable.feature --- ubuntu-advantage-tools-27.8~16.04.1/features/attached_enable.feature 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/attached_enable.feature 2022-05-18 19:44:15.000000000 +0000 @@ -46,13 +46,76 @@ CC EAL2 is not available for Ubuntu (). """ Examples: ubuntu release - | release | version | full_name | - | focal | 20.04 LTS | Focal Fossa | - | impish | 21.10 | Impish Indri | + | release | version | full_name | + | focal | 20.04 LTS | Focal Fossa | + | impish | 21.10 | Impish Indri | + | jammy | 22.04 | Jammy Jellyfish | - @series.xenial - @series.bionic - @series.focal + @series.lts + @uses.config.machine_type.lxd.container + Scenario Outline: Empty series affordance means no series, null means all series + Given a `` machine with ubuntu-advantage-tools installed + When I attach `contract_token` with sudo and options `--no-auto-enable` + When I create the file `/tmp/machine-token-overlay.json` with the following: + """ + { + "machineTokenInfo": { + "contractInfo": { + "resourceEntitlements": [ + { + "type": "esm-infra", + "affordances": { + "series": [] + } + } + ] + } + } + } + """ + And I append the following on uaclient config: + """ + features: + machine_token_overlay: "/tmp/machine-token-overlay.json" + """ + When I verify that running `ua enable esm-infra` `with sudo` exits `1` + Then stdout matches regexp: + """ + One moment, checking your subscription first + UA Infra: ESM is not available for Ubuntu .* + """ + When I create the file `/tmp/machine-token-overlay.json` with the following: + """ + { + "machineTokenInfo": { + "contractInfo": { + "resourceEntitlements": [ + { + "type": "esm-infra", + "affordances": { + "series": null + } + } + ] + } + } + } + """ + When I verify that running `ua enable esm-infra` `with sudo` exits `0` + Then stdout matches regexp: + """ + One moment, checking your subscription first + Updating package lists + UA Infra: ESM enabled + """ + Examples: ubuntu release + | release | + | xenial | + | bionic | + | focal | + | jammy | + + @series.lts @uses.config.machine_type.lxd.container Scenario Outline: Attached enable of different services using json format Given a `` machine with ubuntu-advantage-tools installed @@ -120,6 +183,7 @@ | xenial | cc-eal, cis, esm-infra, fips, fips-updates, livepatch. | | bionic | cc-eal, cis, esm-infra, fips, fips-updates, livepatch. | | focal | cc-eal, esm-infra, fips, fips-updates, livepatch, usg. | + | jammy | cc-eal, cis, esm-infra, fips, fips-updates, livepatch. 
| @series.lts @uses.config.machine_type.lxd.container @@ -174,8 +238,8 @@ Examples: ubuntu release | release | infra-pkg | esm-infra-url | - | bionic | libkrad0 | https://esm.ubuntu.com/infra/ubuntu | | xenial | libkrad0 | https://esm.ubuntu.com/infra/ubuntu | + | bionic | libkrad0 | https://esm.ubuntu.com/infra/ubuntu | @series.focal @uses.config.machine_type.lxd.container @@ -293,9 +357,10 @@ Examples: not entitled services | release | + | xenial | | bionic | | focal | - | xenial | + | jammy | @series.xenial @series.bionic @@ -589,7 +654,7 @@ When I run `ua status` with sudo Then stdout matches regexp: """ - livepatch yes enabled + livepatch +yes +enabled """ When I run `canonical-livepatch status` with sudo Then stdout matches regexp: @@ -601,6 +666,8 @@ | release | | xenial | | bionic | + | focal | + | jammy | @slow @series.bionic diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/attach_invalidtoken.feature ubuntu-advantage-tools-27.9~16.04.1/features/attach_invalidtoken.feature --- ubuntu-advantage-tools-27.8~16.04.1/features/attach_invalidtoken.feature 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/attach_invalidtoken.feature 2022-05-18 19:44:15.000000000 +0000 @@ -16,7 +16,8 @@ This command must be run as root (try using sudo). """ When I verify that running `ua attach invalid-token --format json` `with sudo` exits `1` - Then I will see the following on stdout: + Then stdout is a json matching the `ua_operation` schema + And I will see the following on stdout: """ {"_schema_version": "0.1", "errors": [{"message": "Invalid token. See https://ubuntu.com/advantage", "message_code": "attach-invalid-token", "service": null, "type": "system"}], "failed_services": [], "needs_reboot": false, "processed_services": [], "result": "failure", "warnings": []} """ @@ -29,9 +30,9 @@ | impish | | jammy | - @uses.config.contract_token_staging_expired @series.all @uses.config.machine_type.lxd.container + @uses.config.contract_token_staging_expired Scenario Outline: Attach command failure on expired token Given a `` machine with ubuntu-advantage-tools installed When I attempt to attach `contract_token_staging_expired` with sudo @@ -41,6 +42,13 @@ Contract ".*" .* Visit https://ubuntu.com/advantage to manage contract tokens. 
""" + When I verify that running attach `with sudo` using expired token with json response fails + Then stdout is a json matching the `ua_operation` schema + And I will see the following on stdout: + """ + {"_schema_version": "0.1", "errors": [{"additional_info": {"contract_expiry_date": "12-31-2019", "contract_id": "cAJ4NHcl2qAld2CbJt5cufzZNHgVZ0YTPIH96Ihsy4bU"}, "message": "Attach denied:\nContract \"cAJ4NHcl2qAld2CbJt5cufzZNHgVZ0YTPIH96Ihsy4bU\" expired on December 31, 2019\nVisit https://ubuntu.com/advantage to manage contract tokens.", "message_code": "attach-forbidden-expired", "service": null, "type": "system"}], "failed_services": [], "needs_reboot": false, "processed_services": [], "result": "failure", "warnings": []} + """ + Examples: ubuntu release | release | | xenial | diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/attach_validtoken.feature ubuntu-advantage-tools-27.9~16.04.1/features/attach_validtoken.feature --- ubuntu-advantage-tools-27.8~16.04.1/features/attach_validtoken.feature 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/attach_validtoken.feature 2022-05-18 19:44:15.000000000 +0000 @@ -2,7 +2,6 @@ Feature: Command behaviour when attaching a machine to an Ubuntu Advantage subscription using a valid token - @series.jammy @series.impish @uses.config.machine_type.lxd.container Scenario Outline: Attached command in a non-lts ubuntu machine @@ -24,15 +23,20 @@ Examples: ubuntu release | release | | impish | - | jammy | @series.lts @uses.config.machine_type.lxd.container Scenario Outline: Attach command in a ubuntu lxd container Given a `` machine with ubuntu-advantage-tools installed When I run `apt-get update` with sudo, retrying exit [100] + And I run `apt install update-motd` with sudo, retrying exit [100] And I run `DEBIAN_FRONTEND=noninteractive apt-get install -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" -y ` with sudo, retrying exit [100] - And I run `run-parts /etc/update-motd.d/` with sudo + And I run `ua refresh messages` with sudo + Then stdout matches regexp: + """ + Successfully updated UA related APT and MOTD messages. + """ + When I run `update-motd` with sudo Then if `` in `xenial` and stdout matches regexp: """ \d+ update(s)? can be applied immediately. 
@@ -65,13 +69,13 @@ """ esm-apps +yes +enabled +UA Apps: Extended Security Maintenance \(ESM\) esm-infra +yes +enabled +UA Infra: Extended Security Maintenance \(ESM\) - fips +yes +disabled +NIST-certified core packages - fips-updates +yes +disabled +NIST-certified core packages with priority security updates - livepatch +yes +n/a +Canonical Livepatch service + fips +yes + +NIST-certified core packages + fips-updates +yes + +NIST-certified core packages with priority security updates + livepatch +yes +n/a + """ And stdout matches regexp: """ - +yes +disabled +Security compliance and audit tools + +yes + +Security compliance and audit tools """ And stderr matches regexp: """ @@ -90,9 +94,7 @@ allow_beta: true """ And I run `apt update` with sudo - And I delete the file `/var/lib/ubuntu-advantage/jobs-status.json` - And I run `python3 /usr/lib/ubuntu-advantage/timer.py` with sudo - And I run `apt install update-motd` with sudo, retrying exit [100] + And I run `ua refresh messages` with sudo And I run `update-motd` with sudo Then if `` in `focal` and stdout matches regexp: """ @@ -165,11 +167,13 @@ https:\/\/ubuntu.com\/advantage """ + Examples: ubuntu release packages - | release | downrev_pkg | cc_status | cis_or_usg | - | xenial | libkrad0=1.13.2+dfsg-5 | disabled | cis | - | bionic | libkrad0=1.16-2build1 | disabled | cis | - | focal | hello=2.10-2ubuntu2 | n/a | usg | + | release | downrev_pkg | cc_status | cis_or_usg | cis | fips | livepatch_desc | + | xenial | libkrad0=1.13.2+dfsg-5 | disabled | cis | disabled | disabled | Canonical Livepatch service | + | bionic | libkrad0=1.16-2build1 | disabled | cis | disabled | disabled | Canonical Livepatch service | + | focal | hello=2.10-2ubuntu2 | n/a | usg | disabled | disabled | Canonical Livepatch service | + | jammy | hello=2.10-2ubuntu4 | n/a | cis | n/a | n/a | Available with the HWE kernel | @series.lts @uses.config.machine_type.lxd.container @@ -327,7 +331,7 @@ """ And stdout matches regexp: """ - +yes +disabled +Security compliance and audit tools + +yes + +Security compliance and audit tools """ And stderr matches regexp: """ @@ -335,10 +339,11 @@ """ Examples: ubuntu release livepatch status - | release | fips_status |lp_status | lp_desc | cc_status | cis_or_usg | - | xenial | disabled |enabled | Canonical Livepatch service | disabled | cis | - | bionic | disabled |enabled | Canonical Livepatch service | disabled | cis | - | focal | disabled |enabled | Canonical Livepatch service | n/a | usg | + | release | fips_status |lp_status | lp_desc | cc_status | cis_or_usg | cis_status | + | xenial | disabled |enabled | Canonical Livepatch service | disabled | cis | disabled | + | bionic | disabled |enabled | Canonical Livepatch service | disabled | cis | disabled | + | focal | disabled |enabled | Canonical Livepatch service | n/a | usg | disabled | + | jammy | n/a |enabled | Canonical Livepatch service | n/a | cis | n/a | @series.all @uses.config.machine_type.azure.generic @@ -387,7 +392,7 @@ """ And stdout matches regexp: """ - +yes +disabled +Security compliance and audit tools + +yes + +Security compliance and audit tools """ And stderr matches regexp: """ @@ -395,10 +400,11 @@ """ Examples: ubuntu release livepatch status - | release | lp_status | fips_status | cc_status | cis_or_usg | - | xenial | enabled | n/a | disabled | cis | - | bionic | enabled | disabled | disabled | cis | - | focal | enabled | disabled | n/a | usg | + | release | lp_status | fips_status | cc_status | cis_or_usg | cis_status | + | xenial | enabled | n/a | 
disabled | cis | disabled | + | bionic | enabled | disabled | disabled | cis | disabled | + | focal | enabled | disabled | n/a | usg | disabled | + | jammy | enabled | n/a | n/a | cis | n/a | @series.all @uses.config.machine_type.gcp.generic @@ -447,7 +453,7 @@ """ And stdout matches regexp: """ - +yes +disabled +Security compliance and audit tools + +yes + +Security compliance and audit tools """ And stderr matches regexp: """ @@ -455,10 +461,11 @@ """ Examples: ubuntu release livepatch status - | release | lp_status | fips_status | cc_status | cis_or_usg | - | xenial | n/a | n/a | disabled | cis | - | bionic | n/a | disabled | disabled | cis | - | focal | enabled | disabled | n/a | usg | + | release | lp_status | fips_status | cc_status | cis_or_usg | cis_status | + | xenial | n/a | n/a | disabled | cis | disabled | + | bionic | n/a | disabled | disabled | cis | disabled | + | focal | enabled | disabled | n/a | usg | disabled | + | jammy | enabled | n/a | n/a | cis | n/a | @series.all @uses.config.machine_type.lxd.container @@ -484,9 +491,6 @@ """ esm-apps +yes +enabled +UA Apps: Extended Security Maintenance \(ESM\) esm-infra +yes +enabled +UA Infra: Extended Security Maintenance \(ESM\) - fips +yes +disabled +NIST-certified core packages - fips-updates +yes +disabled +NIST-certified core packages with priority security updates - livepatch +yes +n/a +Canonical Livepatch service """ Examples: ubuntu release @@ -494,3 +498,61 @@ | xenial | disabled | | bionic | disabled | | focal | n/a | + | jammy | n/a | + + @series.all + @uses.config.machine_type.lxd.container + Scenario Outline: Attach and Check for contract change in status checking + Given a `` machine with ubuntu-advantage-tools installed + When I attach `contract_token` with sudo + Then stdout matches regexp: + """ + UA Infra: ESM enabled + """ + And stdout matches regexp: + """ + This machine is now attached to + """ + And stdout matches regexp: + """ + esm-infra +yes +enabled +UA Infra: Extended Security Maintenance \(ESM\) + """ + When I create the file `/tmp/machine-token-overlay.json` with the following: + """ + { + "machineTokenInfo": { + "contractInfo": { + "effectiveTo": "2000-01-02T03:04:05Z" + } + } + } + """ + And I append the following on uaclient config: + """ + features: + machine_token_overlay: "/tmp/machine-token-overlay.json" + """ + When I run `ua status` with sudo + Then stdout matches regexp: + """ + A change has been detected in your contract. + Please run `sudo ua refresh`. + """ + When I run `ua refresh contract` with sudo + Then stdout matches regexp: + """ + Successfully refreshed your subscription. + """ + When I run `sed -i '/^.*machine_token_overlay:/d' /etc/ubuntu-advantage/uaclient.conf` with sudo + And I run `ua status` with sudo + Then stdout does not match regexp: + """ + A change has been detected in your contract. + Please run `sudo ua refresh`. 
+ """ + + Examples: ubuntu release livepatch status + | release | + | xenial | + | bionic | + | focal | diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/aws-ids.yaml ubuntu-advantage-tools-27.9~16.04.1/features/aws-ids.yaml --- ubuntu-advantage-tools-27.8~16.04.1/features/aws-ids.yaml 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/aws-ids.yaml 2022-05-18 19:44:15.000000000 +0000 @@ -1,6 +1,6 @@ -bionic: ami-0419d66039473da9d +bionic: ami-02b4e50c1ebb5034c bionic-fips: ami-03b75f613f80bcff1 -focal: ami-0489b8bdbbf3a3b32 +focal: ami-01deb29ae4e3b9c97 +focal-fips: ami-02782bf2569bf457c xenial: ami-011bcfe2bea365b6a xenial-fips: ami-077e4c339a098fc9f -focal-fips: ami-02782bf2569bf457c diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/cloud.py ubuntu-advantage-tools-27.9~16.04.1/features/cloud.py --- ubuntu-advantage-tools-27.8~16.04.1/features/cloud.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/cloud.py 2022-05-18 19:44:15.000000000 +0000 @@ -2,15 +2,20 @@ import logging import os import time -from typing import List, Optional, Tuple +from typing import List, Optional import pycloudlib # type: ignore +import toml import yaml +DEFAULT_CONFIG_PATH = "~/.config/pycloudlib.toml" + class Cloud: """Base class for cloud providers that should be tested through behave. + :cloud_credentials_path: + A string containing the path for the pycloudlib cloud credentials file :machine_type: A string representing the type of machine to launch (pro or generic) :region: @@ -24,12 +29,11 @@ name = "" pro_ids_path = "" - env_vars: Tuple[str, ...] = () def __init__( self, machine_type: str, - region: Optional[str] = None, + cloud_credentials_path: Optional[str], tag: Optional[str] = None, timestamp_suffix: bool = True, ) -> None: @@ -38,29 +42,27 @@ else: self.tag = "uaclient-ci" self.machine_type = machine_type - self.region = region self._api = None self.key_name = pycloudlib.util.get_timestamped_tag(self.tag) self.timestamp_suffix = timestamp_suffix + self.cloud_credentials_path = cloud_credentials_path - missing_env_vars = self.missing_env_vars() - if missing_env_vars: - logging.warning( - "".join( - [ - "UACLIENT_BEHAVE_MACHINE_TYPE={} requires".format( - self.machine_type - ), - " the following env vars:\n", - *self.format_missing_env_vars(missing_env_vars), - ] - ) - ) + @property + def pycloudlib_cls(self): + """Return the pycloudlib cls to be used as an api.""" + raise NotImplementedError @property def api(self) -> pycloudlib.cloud.BaseCloud: """Return the api used to interact with the cloud provider.""" - raise NotImplementedError + if self._api is None: + self._api = self.pycloudlib_cls( + config_file=self.cloud_credentials_path, + tag=self.tag, + timestamp_suffix=self.timestamp_suffix, + ) + + return self._api def _create_instance( self, @@ -178,28 +180,6 @@ """ return instance.id - def format_missing_env_vars(self, missing_env_vars: List) -> List[str]: - """Format missing env vars to be displayed in log. - - :returns: - A list of env string formatted to be used when logging - """ - return [" - {}\n".format(env_var) for env_var in missing_env_vars] - - def missing_env_vars(self) -> List[str]: - """Return a list of env variables necessary for this cloud provider. 
- - :returns: - A list of string representing the missing variables - """ - return [ - env_name - for env_name in self.env_vars - if not getattr( - self, env_name.lower().replace("uaclient_behave_", "") - ) - ] - def locate_image_name(self, series: str) -> str: """Locate and return the image name to use for vm provision. @@ -260,61 +240,15 @@ class EC2(Cloud): - """Class that represents the EC2 cloud provider. - - :param aws_access_key_id: - The aws access key id - :param aws_secret_access_key: - The aws secret access key - :region: - The region to be used to create the aws instances - :machine_type: - A string representing the type of machine to launch (pro or generic) - :tag: - A tag to be used when creating the resources on the cloud provider - :timestamp_suffix: - Boolean set true to direct pycloudlib to append a timestamp to the end - of the provided tag. - """ + """Class that represents the EC2 cloud provider.""" name = "aws" - env_vars: Tuple[str, ...] = ("aws_access_key_id", "aws_secret_access_key") pro_ids_path = "features/aws-ids.yaml" - def __init__( - self, - aws_access_key_id: Optional[str], - aws_secret_access_key: Optional[str], - machine_type: str, - region: Optional[str] = "us-east-2", - tag: Optional[str] = None, - timestamp_suffix: bool = True, - ) -> None: - self.aws_access_key_id = aws_access_key_id - self.aws_secret_access_key = aws_secret_access_key - logging.basicConfig( - filename="pycloudlib-behave.log", level=logging.DEBUG - ) - super().__init__( - region=region, - machine_type=machine_type, - tag=tag, - timestamp_suffix=timestamp_suffix, - ) - @property - def api(self) -> pycloudlib.cloud.BaseCloud: - """Return the api used to interact with the cloud provider.""" - if self._api is None: - self._api = pycloudlib.EC2( - tag=self.tag, - access_key_id=self.aws_access_key_id, - secret_access_key=self.aws_secret_access_key, - region=self.region, - timestamp_suffix=self.timestamp_suffix, - ) - - return self._api + def pycloudlib_cls(self): + """Return the pycloudlib cls to be used as an api.""" + return pycloudlib.EC2 def manage_ssh_key( self, @@ -397,73 +331,15 @@ class Azure(Cloud): - """Class that represents the Azure cloud provider. - - :param az_client_id: - The Azure client id - :param az_client_secret - The Azure client secret - :param az_tenant_id: - The Azure tenant id - :param az_subscription_id: - The Azure subscription id - :machine_type: - A string representing the type of machine to launch (pro or generic) - :region: - The region to create the resources on - :tag: - A tag to be used when creating the resources on the cloud provider - :timestamp_suffix: - Boolean set true to direct pycloudlib to append a timestamp to the end - of the provided tag. - """ + """Class that represents the Azure cloud provider.""" name = "Azure" - env_vars: Tuple[str, ...] 
= ( - "az_client_id", - "az_client_secret", - "az_tenant_id", - "az_subscription_id", - ) pro_ids_path = "features/azure-ids.yaml" - def __init__( - self, - machine_type: str, - region: Optional[str] = "centralus", - tag: Optional[str] = None, - timestamp_suffix: bool = True, - az_client_id: Optional[str] = None, - az_client_secret: Optional[str] = None, - az_tenant_id: Optional[str] = None, - az_subscription_id: Optional[str] = None, - ) -> None: - self.az_client_id = az_client_id - self.az_client_secret = az_client_secret - self.az_tenant_id = az_tenant_id - self.az_subscription_id = az_subscription_id - - super().__init__( - machine_type=machine_type, - region=region, - tag=tag, - timestamp_suffix=timestamp_suffix, - ) - @property - def api(self) -> pycloudlib.cloud.BaseCloud: - """Return the api used to interact with the cloud provider.""" - if self._api is None: - self._api = pycloudlib.Azure( - tag=self.tag, - client_id=self.az_client_id, - client_secret=self.az_client_secret, - tenant_id=self.az_tenant_id, - subscription_id=self.az_subscription_id, - timestamp_suffix=self.timestamp_suffix, - ) - - return self._api + def pycloudlib_cls(self): + """Return the pycloudlib cls to be used as an api.""" + return pycloudlib.Azure def get_instance_id( self, instance: pycloudlib.instance.BaseInstance @@ -552,6 +428,7 @@ ) inst = self.api.launch( image_id=image_name, + instance_type="Standard_A2_v2", user_data=user_data, inbound_ports=inbound_ports, ) @@ -559,57 +436,61 @@ class GCP(Cloud): + """Class that represents the Google Cloud Platform cloud provider.""" + name = "gcp" pro_ids_path = "features/gcp-ids.yaml" + cls_type = pycloudlib.GCE - """Class that represents the Google Cloud Platform cloud provider. - - :param gcp_credentials_path - The GCP credentials path to use when authentiacting to GCP - :param gcp_project - The name of the GCP project to be used - :machine_type: - A string representing the type of machine to launch (pro or generic) - :region: - The region to create the resources on - :tag: - A tag to be used when creating the resources on the cloud provider - :timestamp_suffix: - Boolean set true to direct pycloudlib to append a timestamp to the end - of the provided tag. - """ - - env_vars: Tuple[str, ...] 
= ("gcp_credentials_path", "gcp_project") + @property + def pycloudlib_cls(self): + """Return the pycloudlib cls to be used as an api.""" + return pycloudlib.GCE def __init__( self, machine_type: str, - region: Optional[str] = "us-west2", + cloud_credentials_path: Optional[str], tag: Optional[str] = None, timestamp_suffix: bool = True, - zone: Optional[str] = "a", - gcp_credentials_path: Optional[str] = None, - gcp_project: Optional[str] = None, ) -> None: - self.gcp_credentials_path = gcp_credentials_path - self.gcp_project = gcp_project - self.zone = zone - super().__init__( machine_type=machine_type, - region=region, + cloud_credentials_path=cloud_credentials_path, tag=tag, timestamp_suffix=timestamp_suffix, ) - self._set_service_account_email() def _set_service_account_email(self): """Set service account email if credentials provided.""" + credentials_path = ( + self.cloud_credentials_path + or os.getenv("PYCLOUDLIB_CONFIG") + or DEFAULT_CONFIG_PATH + ) json_credentials = {} - if self.gcp_credentials_path: - with open(self.gcp_credentials_path, "r") as f: + try: + credentials = toml.load(os.path.expanduser(credentials_path)) + except toml.TomlDecodeError: + raise ValueError( + "Could not parse configuration file pointed to by " + "{}".format(credentials_path) + ) + + # Use service_account_email from pycloudlib.toml if defined + self.service_account_email = credentials.get("gce", {}).get( + "service_account_email" + ) + if self.service_account_email: + return + + gcp_credentials_path = credentials.get("gce", {}).get( + "credentials_path" + ) + if gcp_credentials_path: + with open(gcp_credentials_path, "r") as f: json_credentials = json.load(f) self.service_account_email = json_credentials.get("client_email") @@ -618,13 +499,10 @@ def api(self) -> pycloudlib.cloud.BaseCloud: """Return the api used to interact with the cloud provider.""" if self._api is None: - self._api = pycloudlib.GCE( + self._api = self.pycloudlib_cls( + config_file=self.cloud_credentials_path, tag=self.tag, timestamp_suffix=self.timestamp_suffix, - credentials_path=self.gcp_credentials_path, - project=self.gcp_project, - zone=self.zone, - region=self.region, service_account_email=self.service_account_email, ) @@ -672,25 +550,6 @@ class _LXD(Cloud): name = "_lxd" - def __init__( - self, - machine_type: str, - region: Optional[str] = None, - tag: Optional[str] = None, - timestamp_suffix: bool = True, - ) -> None: - super().__init__( - machine_type=machine_type, - region=region, - tag=tag, - timestamp_suffix=timestamp_suffix, - ) - - @property - def pycloudlib_cls(self): - """Return the pycloudlib cls to be used as an api.""" - raise NotImplementedError - def _create_instance( self, series: str, @@ -779,16 +638,6 @@ image_name = self.api.daily_image(release=series) return image_name - @property - def api(self) -> pycloudlib.cloud.BaseCloud: - """Return the api used to interact with the cloud provider.""" - if self._api is None: - self._api = self.pycloudlib_cls( - tag=self.tag, timestamp_suffix=self.timestamp_suffix - ) - - return self._api - class LXDVirtualMachine(_LXD): name = "lxd-virtual-machine" diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/daemon.feature ubuntu-advantage-tools-27.9~16.04.1/features/daemon.feature --- ubuntu-advantage-tools-27.8~16.04.1/features/daemon.feature 1970-01-01 00:00:00.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/daemon.feature 2022-05-18 19:44:15.000000000 +0000 @@ -0,0 +1,348 @@ +Feature: Pro Upgrade Daemon only runs in environments where necessary + + 
@series.all + @uses.config.contract_token + @uses.config.machine_type.lxd.container + Scenario Outline: cloud-id-shim service is not installed on anything other than xenial + Given a `` machine with ubuntu-advantage-tools installed + Then I verify that running `systemctl status ubuntu-advantage-cloud-id-shim.service` `with sudo` exits `4` + Then stderr matches regexp: + """ + Unit ubuntu-advantage-cloud-id-shim.service could not be found. + """ + Examples: version + | release | + | bionic | + | focal | + | impish | + | jammy | + + @series.lts + @uses.config.contract_token + @uses.config.machine_type.lxd.container + Scenario Outline: cloud-id-shim should run in postinst and on boot + Given a `` machine with ubuntu-advantage-tools installed + # verify installing ua created the cloud-id file + When I run `cat /run/cloud-init/cloud-id` with sudo + Then I will see the following on stdout + """ + lxd + """ + When I run `cat /run/cloud-init/cloud-id-lxd` with sudo + Then I will see the following on stdout + """ + lxd + """ + # verify the shim service runs on boot and creates the cloud-id file + When I reboot the `` machine + Then I verify that running `systemctl status ubuntu-advantage-cloud-id-shim.service` `with sudo` exits `3` + Then stdout matches regexp: + """ + (code=exited, status=0/SUCCESS) + """ + When I run `cat /run/cloud-init/cloud-id` with sudo + Then I will see the following on stdout + """ + lxd + """ + When I run `cat /run/cloud-init/cloud-id-lxd` with sudo + Then I will see the following on stdout + """ + lxd + """ + Examples: version + | release | + | xenial | + + @series.lts + @uses.config.contract_token + @uses.config.machine_type.gcp.generic + Scenario Outline: daemon should run when appropriate on gcp generic lts + Given a `` machine with ubuntu-advantage-tools installed + # verify its enabled, but stops itself when not configured to poll + When I run `cat /var/log/ubuntu-advantage-daemon.log` with sudo + Then stdout matches regexp: + """ + daemon starting + """ + Then stdout matches regexp: + """ + Configured to not poll for pro license, shutting down + """ + Then stdout matches regexp: + """ + daemon ending + """ + When I run `systemctl is-enabled ubuntu-advantage.service` with sudo + Then stdout matches regexp: + """ + enabled + """ + Then I verify that running `systemctl is-failed ubuntu-advantage.service` `with sudo` exits `1` + Then stdout matches regexp: + """ + inactive + """ + + # verify it stays on when configured to do so + When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: + """ + ua_config: + poll_for_pro_license: true + """ + # Turn on memory accounting + When I run `sed -i s/#DefaultMemoryAccounting=no/DefaultMemoryAccounting=yes/ /etc/systemd/system.conf` with sudo + When I run `systemctl daemon-reexec` with sudo + + When I run `truncate -s 0 /var/log/ubuntu-advantage-daemon.log` with sudo + When I run `systemctl restart ubuntu-advantage.service` with sudo + + # wait to get memory after it has settled/after startup checks + When I wait `5` seconds + Then I verify that running `systemctl status ubuntu-advantage.service` `with sudo` exits `0` + Then stdout matches regexp: + """ + Active: active \(running\) + """ + Then on `xenial`, systemd status output says memory usage is less than `14` MB + Then on `bionic`, systemd status output says memory usage is less than `13` MB + Then on `focal`, systemd status output says memory usage is less than `11` MB + Then on `jammy`, systemd status output says memory usage is less than `12` MB + + 
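The memory assertions just above rely on systemd accounting being switched on earlier in the scenario (`DefaultMemoryAccounting=yes` followed by `systemctl daemon-reexec`). The step implementation is not part of this diff; a rough sketch of one way to read the accounted figure, assuming `systemctl show -p MemoryCurrent` as the data source and a hypothetical `unit_memory_mb` helper:

```python
import subprocess


def unit_memory_mb(unit="ubuntu-advantage.service"):
    """Hypothetical helper, not the behave step definition from this diff."""
    out = subprocess.check_output(
        ["systemctl", "show", "-p", "MemoryCurrent", unit], text=True
    )
    # Typical output: "MemoryCurrent=11534336" (bytes); systemd reports
    # "[not set]" (or a sentinel maximum on older releases) when memory
    # accounting is disabled.
    value = out.strip().split("=", 1)[1]
    if not value.isdigit():
        raise RuntimeError("memory accounting is not enabled for " + unit)
    return int(value) / (1024 * 1024)


if __name__ == "__main__":
    # 14 MB is the xenial ceiling asserted above; other releases use 11-13 MB.
    assert unit_memory_mb() < 14
```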
When I run `cat /var/log/ubuntu-advantage-daemon.log` with sudo + Then stdout matches regexp: + """ + daemon starting + """ + Then stdout does not match regexp: + """ + daemon ending + """ + When I run `systemctl is-enabled ubuntu-advantage.service` with sudo + Then stdout matches regexp: + """ + enabled + """ + Then I verify that running `systemctl is-failed ubuntu-advantage.service` `with sudo` exits `1` + Then stdout matches regexp: + """ + active + """ + + # verify attach stops it immediately and doesn't restart after reboot + When I attach `contract_token` with sudo + Then I verify that running `systemctl status ubuntu-advantage.service` `with sudo` exits `3` + Then stdout matches regexp: + """ + Active: inactive \(dead\) + """ + When I reboot the `` machine + Then I verify that running `systemctl status ubuntu-advantage.service` `with sudo` exits `3` + Then stdout matches regexp: + """ + Active: inactive \(dead\) + \s*Condition: start condition failed.* + .*ConditionPathExists=!/var/lib/ubuntu-advantage/private/machine-token.json was not met + """ + + # verify detach starts it and it starts again after reboot + When I run `truncate -s 0 /var/log/ubuntu-advantage-daemon.log` with sudo + When I run `ua detach --assume-yes` with sudo + Then I verify that running `systemctl status ubuntu-advantage.service` `with sudo` exits `0` + Then stdout matches regexp: + """ + Active: active \(running\) + """ + When I run `cat /var/log/ubuntu-advantage-daemon.log` with sudo + Then stdout matches regexp: + """ + daemon starting + """ + Then stdout does not match regexp: + """ + daemon ending + """ + When I reboot the `` machine + Then I verify that running `systemctl status ubuntu-advantage.service` `with sudo` exits `0` + Then stdout matches regexp: + """ + Active: active \(running\) + """ + When I run `cat /var/log/ubuntu-advantage-daemon.log` with sudo + Then stdout matches regexp: + """ + daemon starting + """ + Then stdout does not match regexp: + """ + daemon ending + """ + + # Verify manual stop & disable persists across reconfigure + When I run `systemctl stop ubuntu-advantage.service` with sudo + When I run `systemctl disable ubuntu-advantage.service` with sudo + Then I verify that running `systemctl status ubuntu-advantage.service` `with sudo` exits `3` + Then stdout matches regexp: + """ + Active: inactive \(dead\) + """ + When I run `dpkg-reconfigure ubuntu-advantage-tools` with sudo + Then I verify that running `systemctl status ubuntu-advantage.service` `with sudo` exits `3` + Then stdout matches regexp: + """ + Active: inactive \(dead\) + """ + + # Verify manual stop & disable persists across reboot + When I reboot the `` machine + Then I verify that running `systemctl status ubuntu-advantage.service` `with sudo` exits `3` + Then stdout matches regexp: + """ + Active: inactive \(dead\) + """ + Examples: version + | release | + | xenial | + | bionic | + | focal | + | jammy | + + @series.impish + @uses.config.contract_token + @uses.config.machine_type.gcp.generic + Scenario Outline: daemon does not start on gcp generic non lts + Given a `` machine with ubuntu-advantage-tools installed + When I wait `1` seconds + When I run `cat /var/log/ubuntu-advantage-daemon.log` with sudo + Then stdout matches regexp: + """ + daemon starting + """ + Then stdout matches regexp: + """ + Not on LTS, shutting down + """ + Then stdout matches regexp: + """ + daemon ending + """ + Examples: version + | release | + | impish | + + @series.all + @uses.config.contract_token + 
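The daemon scenarios in this feature pivot on two systemd start conditions that the assertions spell out: the service only starts when `/run/cloud-init/cloud-id-gce` exists, and it refuses to start once `/var/lib/ubuntu-advantage/private/machine-token.json` appears after attach. A small sketch that evaluates those same two conditions; it approximates the unit's `ConditionPathExists` logic rather than reproducing the unit file, which is not part of this hunk:

```python
import os

# Paths taken from the ConditionPathExists assertions in these scenarios.
GCE_CLOUD_ID = "/run/cloud-init/cloud-id-gce"
MACHINE_TOKEN = "/var/lib/ubuntu-advantage/private/machine-token.json"


def daemon_expected_to_start():
    """Start only on GCE (cloud-id-gce present) and only while the machine
    is unattached (no private machine-token.json)."""
    return os.path.exists(GCE_CLOUD_ID) and not os.path.exists(MACHINE_TOKEN)


if __name__ == "__main__":
    print("ubuntu-advantage.service expected active:",
          daemon_expected_to_start())
```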
@uses.config.machine_type.lxd.container + @uses.config.machine_type.lxd.vm + @uses.config.machine_type.aws.generic + @uses.config.machine_type.azure.generic + Scenario Outline: daemon does not start when not on gcpgeneric + Given a `` machine with ubuntu-advantage-tools installed + Then I verify that running `systemctl status ubuntu-advantage.service` `with sudo` exits `3` + Then stdout matches regexp: + """ + Active: inactive \(dead\) + \s*Condition: start condition failed.* + .*ConditionPathExists=/run/cloud-init/cloud-id-gce was not met + """ + Then I verify that running `cat /var/log/ubuntu-advantage-daemon.log` `with sudo` exits `1` + When I attach `contract_token` with sudo + When I run `ua detach --assume-yes` with sudo + When I reboot the `` machine + Then I verify that running `systemctl status ubuntu-advantage.service` `with sudo` exits `3` + Then stdout matches regexp: + """ + Active: inactive \(dead\) + \s*Condition: start condition failed.* + .*ConditionPathExists=/run/cloud-init/cloud-id-gce was not met + """ + Then I verify that running `cat /var/log/ubuntu-advantage-daemon.log` `with sudo` exits `1` + Examples: version + | release | + | xenial | + | bionic | + | focal | + | impish | + | jammy | + + @series.lts + @uses.config.machine_type.aws.pro + @uses.config.machine_type.azure.pro + Scenario Outline: daemon does not start when not on gcpgeneric + Given a `` machine with ubuntu-advantage-tools installed + When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: + """ + contract_url: 'https://contracts.canonical.com' + data_dir: /var/lib/ubuntu-advantage + log_level: debug + log_file: /var/log/ubuntu-advantage.log + """ + When I run `ua auto-attach` with sudo + When I run `systemctl restart ubuntu-advantage.service` with sudo + Then I verify that running `systemctl status ubuntu-advantage.service` `with sudo` exits `3` + Then stdout matches regexp: + """ + Active: inactive \(dead\) + \s*Condition: start condition failed.* + .*ConditionPathExists=/run/cloud-init/cloud-id-gce was not met + """ + Then I verify that running `cat /var/log/ubuntu-advantage-daemon.log` `with sudo` exits `1` + When I reboot the `` machine + Then I verify that running `systemctl status ubuntu-advantage.service` `with sudo` exits `3` + Then stdout matches regexp: + """ + Active: inactive \(dead\) + \s*Condition: start condition failed.* + .*ConditionPathExists=/run/cloud-init/cloud-id-gce was not met + """ + Then I verify that running `cat /var/log/ubuntu-advantage-daemon.log` `with sudo` exits `1` + Examples: version + | release | + | xenial | + | bionic | + | focal | + + @series.lts + @uses.config.machine_type.gcp.pro + Scenario Outline: daemon does not start when not on gcpgeneric + Given a `` machine with ubuntu-advantage-tools installed + When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: + """ + contract_url: 'https://contracts.canonical.com' + data_dir: /var/lib/ubuntu-advantage + log_level: debug + log_file: /var/log/ubuntu-advantage.log + """ + When I run `ua auto-attach` with sudo + When I run `truncate -s 0 /var/log/ubuntu-advantage-daemon.log` with sudo + When I run `systemctl restart ubuntu-advantage.service` with sudo + Then I verify that running `systemctl status ubuntu-advantage.service` `with sudo` exits `3` + Then stdout matches regexp: + """ + Active: inactive \(dead\).* + \s*Condition: start condition failed.* + .*ConditionPathExists=!/var/lib/ubuntu-advantage/private/machine-token.json was not met + """ + When I run `cat 
/var/log/ubuntu-advantage-daemon.log` with sudo + Then stdout does not match regexp: + """ + daemon starting + """ + When I reboot the `` machine + Then I verify that running `systemctl status ubuntu-advantage.service` `with sudo` exits `3` + Then stdout matches regexp: + """ + Active: inactive \(dead\) + \s*Condition: start condition failed.* + .*ConditionPathExists=!/var/lib/ubuntu-advantage/private/machine-token.json was not met + """ + When I run `cat /var/log/ubuntu-advantage-daemon.log` with sudo + Then stdout does not match regexp: + """ + daemon starting + """ + Examples: version + | release | + | xenial | + | bionic | + | focal | diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/docker.feature ubuntu-advantage-tools-27.9~16.04.1/features/docker.feature --- ubuntu-advantage-tools-27.8~16.04.1/features/docker.feature 1970-01-01 00:00:00.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/docker.feature 2022-05-18 19:44:15.000000000 +0000 @@ -0,0 +1,78 @@ +@uses.config.contract_token +Feature: Build docker images with ua services + + @slow + @docker + @series.focal + @uses.config.machine_type.lxd.vm + Scenario Outline: Build docker images with ua services + Given a `focal` machine with ubuntu-advantage-tools installed + When I have the `` debs under test in `/home/ubuntu` + When I run `apt-get install -y docker.io jq` with sudo + When I create the file `/home/ubuntu/Dockerfile` with the following: + """ + FROM ubuntu: + + COPY ./ubuntu-advantage-tools.deb /ua.deb + + RUN --mount=type=secret,id=ua-attach-config \ + apt-get update \ + && apt-get install --no-install-recommends -y ubuntu-advantage-tools ca-certificates \ + + && dpkg -i /ua.deb \ + + && ua attach --attach-config /run/secrets/ua-attach-config \ + + # Normally an apt upgrade is recommended, but we dont do that here + # in order to measure the image size bloat from just the enablement + # process + # && apt-get upgrade -y \ + + && apt-get install -y \ + + # If you need ca-certificates, remove it from this line + && apt-get purge --auto-remove -y ubuntu-advantage-tools ca-certificates \ + + && rm -rf /var/lib/apt/lists/* + """ + When I create the file `/home/ubuntu/ua-attach-config.yaml` with the following: + """ + token: + enable_services: + """ + When I replace `` in `/home/ubuntu/ua-attach-config.yaml` with token `contract_token` + + # Build succeeds + When I run shell command `DOCKER_BUILDKIT=1 docker build . --secret id=ua-attach-config,src=ua-attach-config.yaml -t ua-test` with sudo + + # Bloat is minimal (new size == original size + deb size + test package size) + Then docker image `ua-test` is not significantly larger than `ubuntu:` with `` installed + + # No secrets or artifacts leftover + Then `90ubuntu-advantage` is not present in any docker image layer + Then `machine-token.json` is not present in any docker image layer + Then `ubuntu-advantage.log` is not present in any docker image layer + Then `uaclient.conf` is not present in any docker image layer + + # Service successfully enabled (Correct version of package installed) + When I run `docker run ua-test dpkg-query --showformat='${Version}' --show ` with sudo + Then stdout matches regexp: + """ + + """ + + # Invalid attach config file causes build to fail + When I create the file `/home/ubuntu/ua-attach-config.yaml` with the following: + """ + token: + enable_services: { fips: true } + """ + When I replace `` in `/home/ubuntu/ua-attach-config.yaml` with token `contract_token` + Then I verify that running `DOCKER_BUILDKIT=1 docker build . 
--no-cache --secret id=ua-attach-config,src=ua-attach-config.yaml -t ua-test` `with sudo` exits `1` + + Examples: ubuntu release + | release | container_release |enable_services | test_package_name | test_package_version | + | focal | xenial | [ esm-infra ] | curl | esm | + | focal | bionic | [ fips ] | openssl | fips | + | focal | focal | [ esm-apps ] | hello | esm | + diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/enable_fips_cloud.feature ubuntu-advantage-tools-27.9~16.04.1/features/enable_fips_cloud.feature --- ubuntu-advantage-tools-27.8~16.04.1/features/enable_fips_cloud.feature 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/enable_fips_cloud.feature 2022-05-18 19:44:15.000000000 +0000 @@ -18,20 +18,40 @@ | xenial | Xenial | fips-updates | @series.xenial + @uses.config.machine_type.aws.generic @uses.config.machine_type.azure.generic - Scenario Outline: Enable FIPS services in an ubuntu Xenial Azure vm - Given a `xenial` machine with ubuntu-advantage-tools installed + Scenario Outline: FIPS unholds packages + Given a `` machine with ubuntu-advantage-tools installed When I attach `contract_token` with sudo - Then I verify that running `ua enable --assume-yes` `with sudo` exits `1` - And stdout matches regexp: + And I run `DEBIAN_FRONTEND=noninteractive apt-get install -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" -y openssh-client openssh-server strongswan` with sudo + And I run `apt-mark hold openssh-client openssh-server strongswan` with sudo + And I run `ua enable fips --assume-yes` with sudo + Then I verify that `openssh-server` is installed from apt source `` + And I verify that `openssh-client` is installed from apt source `` + And I verify that `strongswan` is installed from apt source `` + And I verify that `openssh-server-hmac` is installed from apt source `` + And I verify that `openssh-client-hmac` is installed from apt source `` + And I verify that `strongswan-hmac` is installed from apt source `` + When I run `ua disable fips --assume-yes` with sudo + And I run `apt-mark unhold openssh-client openssh-server strongswan` with sudo + Then I will see the following on stdout: """ - Ubuntu Xenial does not provide an Azure optimized FIPS kernel + openssh-client was already not hold. + openssh-server was already not hold. + strongswan was already not hold. 
""" + When I reboot the `` machine + Then I verify that `openssh-server` installed version matches regexp `fips` + And I verify that `openssh-client` installed version matches regexp `fips` + And I verify that `strongswan` installed version matches regexp `fips` + And I verify that `openssh-server-hmac` installed version matches regexp `fips` + And I verify that `openssh-client-hmac` installed version matches regexp `fips` + And I verify that `strongswan-hmac` installed version matches regexp `fips` + + Examples: ubuntu release + | release | fips-apt-source | + | xenial | https://esm.ubuntu.com/fips/ubuntu xenial/main | - Examples: fips - | fips_service | - | fips | - | fips-updates | @series.bionic @uses.config.machine_type.aws.generic @@ -102,6 +122,7 @@ | focal | https://esm.ubuntu.com/fips/ubuntu focal/main | @slow + @series.xenial @series.bionic @series.focal @uses.config.machine_type.azure.generic @@ -123,7 +144,7 @@ """ And I verify that running `apt update` `with sudo` exits `0` And I verify that running `grep Traceback /var/log/ubuntu-advantage.log` `with sudo` exits `1` - When I run `apt-cache policy ubuntu-azure-fips` as non-root + When I run `apt-cache policy ` as non-root Then stdout does not match regexp: """ .*Installed: \(none\) @@ -132,7 +153,7 @@ And I run `uname -r` as non-root Then stdout matches regexp: """ - azure-fips + """ When I run `cat /proc/sys/crypto/fips_enabled` with sudo Then I will see the following on stdout: @@ -144,7 +165,7 @@ """ Updating package lists """ - When I run `apt-cache policy ubuntu-azure-fips` as non-root + When I run `apt-cache policy ` as non-root Then stdout matches regexp: """ .*Installed: \(none\) @@ -157,11 +178,13 @@ """ Examples: ubuntu release - | release | fips-name | fips-service |fips-apt-source | - | bionic | FIPS | fips |https://esm.ubuntu.com/fips/ubuntu bionic/main | - | bionic | FIPS Updates | fips-updates |https://esm.ubuntu.com/fips/ubuntu bionic/main | - | focal | FIPS | fips |https://esm.ubuntu.com/fips/ubuntu focal/main | - | focal | FIPS Updates | fips-updates |https://esm.ubuntu.com/fips/ubuntu focal/main | + | release | fips-name | fips-service | fips-package | fips-kernel | fips-apt-source | + | xenial | FIPS | fips | ubuntu-fips | fips | https://esm.ubuntu.com/fips/ubuntu xenial/main | + | xenial | FIPS Updates | fips-updates | ubuntu-fips | fips | https://esm.ubuntu.com/fips/ubuntu xenial/main | + | bionic | FIPS | fips | ubuntu-azure-fips | azure-fips | https://esm.ubuntu.com/fips/ubuntu bionic/main | + | bionic | FIPS Updates | fips-updates | ubuntu-azure-fips | azure-fips | https://esm.ubuntu.com/fips/ubuntu bionic/main | + | focal | FIPS | fips | ubuntu-azure-fips | azure-fips | https://esm.ubuntu.com/fips/ubuntu focal/main | + | focal | FIPS Updates | fips-updates | ubuntu-azure-fips | azure-fips | https://esm.ubuntu.com/fips/ubuntu focal/main | @slow @series.xenial diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/enable_fips_pro.feature ubuntu-advantage-tools-27.9~16.04.1/features/enable_fips_pro.feature --- ubuntu-advantage-tools-27.8~16.04.1/features/enable_fips_pro.feature 1970-01-01 00:00:00.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/enable_fips_pro.feature 2022-05-18 19:44:15.000000000 +0000 @@ -0,0 +1,179 @@ +Feature: FIPS enablement in PRO cloud based machines + + @slow + @series.bionic + @series.focal + @uses.config.machine_type.aws.pro + Scenario Outline: Attached enable of FIPS in an ubuntu Azure PRO vm + Given a `` machine with ubuntu-advantage-tools installed + When I 
create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: + """ + contract_url: 'https://contracts.canonical.com' + data_dir: /var/lib/ubuntu-advantage + log_level: debug + log_file: /var/log/ubuntu-advantage.log + """ + And I run `ua auto-attach` with sudo + And I run `ua status --wait` with sudo + Then stdout matches regexp: + """ + fips +yes +disabled +NIST-certified core packages + fips-updates +yes +disabled +NIST-certified core packages with priority security updates + """ + When I run `ua enable --assume-yes` with sudo + Then stdout matches regexp: + """ + Updating package lists + Installing packages + enabled + A reboot is required to complete install + """ + When I run `ua status --all` with sudo + Then stdout matches regexp: + """ + +yes enabled + """ + And I verify that running `apt update` `with sudo` exits `0` + And I verify that running `grep Traceback /var/log/ubuntu-advantage.log` `with sudo` exits `1` + When I run `apt-cache policy ubuntu-aws-fips` as non-root + Then stdout does not match regexp: + """ + .*Installed: \(none\) + """ + When I reboot the `` machine + And I run `uname -r` as non-root + Then stdout matches regexp: + """ + aws-fips + """ + When I run `cat /proc/sys/crypto/fips_enabled` with sudo + Then I will see the following on stdout: + """ + 1 + """ + + Examples: ubuntu release + | release | fips-name | fips-service |fips-apt-source | + | bionic | FIPS | fips |https://esm.ubuntu.com/fips/ubuntu bionic/main | + | bionic | FIPS Updates | fips-updates |https://esm.ubuntu.com/fips/ubuntu bionic/main | + | focal | FIPS | fips |https://esm.ubuntu.com/fips/ubuntu focal/main | + | focal | FIPS Updates | fips-updates |https://esm.ubuntu.com/fips/ubuntu focal/main | + + @slow + @series.bionic + @series.focal + @uses.config.machine_type.azure.pro + Scenario Outline: Attached enable of FIPS in an ubuntu Azure PRO vm + Given a `` machine with ubuntu-advantage-tools installed + When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: + """ + contract_url: 'https://contracts.canonical.com' + data_dir: /var/lib/ubuntu-advantage + log_level: debug + log_file: /var/log/ubuntu-advantage.log + """ + And I run `ua auto-attach` with sudo + And I run `ua status --wait` with sudo + Then stdout matches regexp: + """ + fips +yes +disabled +NIST-certified core packages + fips-updates +yes +disabled +NIST-certified core packages with priority security updates + """ + When I run `ua enable --assume-yes` with sudo + Then stdout matches regexp: + """ + Updating package lists + Installing packages + enabled + A reboot is required to complete install + """ + When I run `ua status --all` with sudo + Then stdout matches regexp: + """ + +yes enabled + """ + And I verify that running `apt update` `with sudo` exits `0` + And I verify that running `grep Traceback /var/log/ubuntu-advantage.log` `with sudo` exits `1` + When I run `apt-cache policy ubuntu-azure-fips` as non-root + Then stdout does not match regexp: + """ + .*Installed: \(none\) + """ + When I reboot the `` machine + And I run `uname -r` as non-root + Then stdout matches regexp: + """ + azure-fips + """ + When I run `cat /proc/sys/crypto/fips_enabled` with sudo + Then I will see the following on stdout: + """ + 1 + """ + + Examples: ubuntu release + | release | fips-name | fips-service |fips-apt-source | + | bionic | FIPS | fips |https://esm.ubuntu.com/fips/ubuntu bionic/main | + | bionic | FIPS Updates | fips-updates |https://esm.ubuntu.com/fips/ubuntu bionic/main | + | focal | FIPS | fips 
|https://esm.ubuntu.com/fips/ubuntu focal/main | + | focal | FIPS Updates | fips-updates |https://esm.ubuntu.com/fips/ubuntu focal/main | + + + @slow + @series.bionic + @series.focal + @uses.config.machine_type.gcp.pro + Scenario Outline: Attached enable of FIPS in an ubuntu GCP PRO vm + Given a `` machine with ubuntu-advantage-tools installed + When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: + """ + contract_url: 'https://contracts.canonical.com' + data_dir: /var/lib/ubuntu-advantage + log_level: debug + log_file: /var/log/ubuntu-advantage.log + """ + And I run `ua auto-attach` with sudo + And I run `ua status --wait` with sudo + Then stdout matches regexp: + """ + fips +yes +disabled +NIST-certified core packages + fips-updates +yes +disabled +NIST-certified core packages with priority security updates + """ + When I run `ua enable --assume-yes` with sudo + Then stdout matches regexp: + """ + Updating package lists + Installing packages + enabled + A reboot is required to complete install + """ + When I run `ua status --all` with sudo + Then stdout matches regexp: + """ + +yes enabled + """ + And I verify that running `apt update` `with sudo` exits `0` + And I verify that running `grep Traceback /var/log/ubuntu-advantage.log` `with sudo` exits `1` + When I run `apt-cache policy ubuntu-gcp-fips` as non-root + Then stdout does not match regexp: + """ + .*Installed: \(none\) + """ + When I reboot the `` machine + And I run `uname -r` as non-root + Then stdout matches regexp: + """ + gcp-fips + """ + When I run `cat /proc/sys/crypto/fips_enabled` with sudo + Then I will see the following on stdout: + """ + 1 + """ + + Examples: ubuntu release + | release | fips-name | fips-service |fips-apt-source | + | bionic | FIPS | fips |https://esm.ubuntu.com/fips/ubuntu bionic/main | + | bionic | FIPS Updates | fips-updates |https://esm.ubuntu.com/fips/ubuntu bionic/main | + | focal | FIPS | fips |https://esm.ubuntu.com/fips/ubuntu focal/main | + | focal | FIPS Updates | fips-updates |https://esm.ubuntu.com/fips/ubuntu focal/main | diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/environment.py ubuntu-advantage-tools-27.9~16.04.1/features/environment.py --- ubuntu-advantage-tools-27.8~16.04.1/features/environment.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/environment.py 2022-05-18 19:44:15.000000000 +0000 @@ -75,6 +75,8 @@ source of truth for test configuration (rather than having environment variable handling throughout the test code). 
+ :param cloud_credentials_path: + Optional path the pycloudlib file containing the cloud credentials :param contract_token: A valid contract token to use during attach scenarios :param contract_token_staging: @@ -120,14 +122,7 @@ "snapshot_strategy", ] str_options = [ - "aws_access_key_id", - "aws_secret_access_key", - "az_client_id", - "az_client_secret", - "az_tenant_id", - "az_subscription_id", - "gcp_credentials_path", - "gcp_project", + "cloud_credentials_path", "contract_token", "contract_token_staging", "contract_token_staging_expired", @@ -144,12 +139,6 @@ "sbuild_chroot", ] redact_options = [ - "aws_access_key_id", - "aws_secret_access_key", - "az_client_id", - "az_client_secret", - "az_tenant_id", - "az_subscription_id", "contract_token", "contract_token_staging", "contract_token_staging_expired", @@ -164,14 +153,7 @@ def __init__( self, *, - aws_access_key_id: str = None, - aws_secret_access_key: str = None, - az_client_id: str = None, - az_client_secret: str = None, - az_tenant_id: str = None, - az_subscription_id: str = None, - gcp_credentials_path: str = None, - gcp_project: str = None, + cloud_credentials_path: str = None, build_pr: bool = False, image_clean: bool = True, destroy_instances: bool = True, @@ -196,14 +178,7 @@ cmdline_tags: List = [] ) -> None: # First, store the values we've detected - self.aws_access_key_id = aws_access_key_id - self.aws_secret_access_key = aws_secret_access_key - self.az_client_id = az_client_id - self.az_client_secret = az_client_secret - self.az_tenant_id = az_tenant_id - self.az_subscription_id = az_subscription_id - self.gcp_credentials_path = gcp_credentials_path - self.gcp_project = gcp_project + self.cloud_credentials_path = cloud_credentials_path self.build_pr = build_pr self.cache_source = cache_source self.enable_proposed = enable_proposed @@ -239,12 +214,6 @@ print(" Reuse_image specified, it will not be deleted.") ignore_vars = () # type: Tuple[str, ...] - if "aws" not in self.machine_type: - ignore_vars += cloud.EC2.env_vars - if "azure" not in self.machine_type: - ignore_vars += cloud.Azure.env_vars - if "gcp" not in self.machine_type: - ignore_vars += cloud.GCP.env_vars if "pro" in self.machine_type: ignore_vars += ( "UACLIENT_BEHAVE_CONTRACT_TOKEN", @@ -281,39 +250,38 @@ timed_job_tag += "-" + random_suffix if "aws" in self.machine_type: + # For AWS, we need to specify on the pycloudlib config file that + # the AWS region must be us-east-2. The reason for that is because + # our image ids were captured using that region. 
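The comment above pins EC2 runs to us-east-2 because the AMIs listed in features/aws-ids.yaml were captured in that region. A hedged sketch of a guard one could add (it is not part of this changeset) that warns when the pycloudlib file points elsewhere, assuming an `[ec2]` table with a `region` key:

```python
import os

import toml


def warn_if_wrong_aws_region(credentials_path="~/.config/pycloudlib.toml"):
    """Hypothetical check, not part of this diff."""
    config = toml.load(os.path.expanduser(credentials_path))
    region = config.get("ec2", {}).get("region")  # assumed key name
    if region and region != "us-east-2":
        print(
            "Warning: features/aws-ids.yaml AMIs were captured in us-east-2, "
            "but pycloudlib is configured for {}".format(region)
        )
```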
self.cloud_manager = cloud.EC2( - aws_access_key_id, - aws_secret_access_key, - region=os.environ.get("AWS_DEFAULT_REGION", "us-east-2"), machine_type=self.machine_type, + cloud_credentials_path=self.cloud_credentials_path, tag=timed_job_tag, timestamp_suffix=False, ) elif "azure" in self.machine_type: self.cloud_manager = cloud.Azure( - az_client_id=az_client_id, - az_client_secret=az_client_secret, - az_tenant_id=az_tenant_id, - az_subscription_id=az_subscription_id, machine_type=self.machine_type, + cloud_credentials_path=self.cloud_credentials_path, tag=timed_job_tag, timestamp_suffix=False, ) elif "gcp" in self.machine_type: self.cloud_manager = cloud.GCP( machine_type=self.machine_type, + cloud_credentials_path=self.cloud_credentials_path, tag=timed_job_tag, timestamp_suffix=False, - gcp_credentials_path=self.gcp_credentials_path, - gcp_project=gcp_project, ) elif "lxd.vm" in self.machine_type: self.cloud_manager = cloud.LXDVirtualMachine( - machine_type=self.machine_type + machine_type=self.machine_type, + cloud_credentials_path=self.cloud_credentials_path, ) else: self.cloud_manager = cloud.LXDContainer( - machine_type=self.machine_type + machine_type=self.machine_type, + cloud_credentials_path=self.cloud_credentials_path, ) self.cloud_api = self.cloud_manager.api @@ -432,22 +400,8 @@ curr_machine_type = ".".join(parts[idx + 1 :]) machine_types.append(curr_machine_type) if curr_machine_type == machine_type: - if machine_type.startswith("lxd"): - return "" - - cloud_manager = context.config.cloud_manager - if cloud_manager and cloud_manager.missing_env_vars(): - return "".join( - ( - "Skipped: {} machine_type requires:\n".format( - machine_type - ), - *cloud_manager.format_missing_env_vars( - cloud_manager.missing_env_vars() - ), - ) - ) return "" + break if val is None: return "Skipped: tag value was None: {}".format(tag) diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/fix.feature ubuntu-advantage-tools-27.9~16.04.1/features/fix.feature --- ubuntu-advantage-tools-27.8~16.04.1/features/fix.feature 1970-01-01 00:00:00.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/fix.feature 2022-05-18 19:44:15.000000000 +0000 @@ -0,0 +1,337 @@ +Feature: Ua fix command behaviour + + @series.all + @uses.config.machine_type.lxd.container + Scenario Outline: Useful SSL failure message when there aren't any ca-certs + Given a `` machine with ubuntu-advantage-tools installed + When I run `apt remove ca-certificates -y` with sudo + When I verify that running `ua fix CVE-1800-123456` `as non-root` exits `1` + Then stderr matches regexp: + """ + Failed to access URL: https://.* + Cannot verify certificate of server + Please install "ca-certificates" and try again. + """ + When I run `apt install ca-certificates -y` with sudo + When I run `mv /etc/ssl/certs /etc/ssl/wronglocation` with sudo + When I verify that running `ua fix CVE-1800-123456` `as non-root` exits `1` + Then stderr matches regexp: + """ + Failed to access URL: https://.* + Cannot verify certificate of server + Please check your openssl configuration. + """ + Examples: ubuntu release + | release | + | xenial | + | bionic | + | focal | + | impish | + | jammy | + + @series.focal + @uses.config.machine_type.lxd.container + Scenario Outline: Fix command on an unattached machine + Given a `` machine with ubuntu-advantage-tools installed + When I verify that running `ua fix CVE-1800-123456` `as non-root` exits `1` + Then I will see the following on stderr: + """ + Error: CVE-1800-123456 not found. 
+ """ + When I verify that running `ua fix USN-12345-12` `as non-root` exits `1` + Then I will see the following on stderr: + """ + Error: USN-12345-12 not found. + """ + When I verify that running `ua fix CVE-12345678-12` `as non-root` exits `1` + Then I will see the following on stderr: + """ + Error: issue "CVE-12345678-12" is not recognized. + Usage: "ua fix CVE-yyyy-nnnn" or "ua fix USN-nnnn" + """ + When I verify that running `ua fix USN-12345678-12` `as non-root` exits `1` + Then I will see the following on stderr: + """ + Error: issue "USN-12345678-12" is not recognized. + Usage: "ua fix CVE-yyyy-nnnn" or "ua fix USN-nnnn" + """ + When I run `apt install -y libawl-php=0.60-1 --allow-downgrades` with sudo + And I run `ua fix USN-4539-1` with sudo + Then stdout matches regexp: + """ + USN-4539-1: AWL vulnerability + Found CVEs: + https://ubuntu.com/security/CVE-2020-11728 + 1 affected source package is installed: awl + \(1/1\) awl: + A fix is available in Ubuntu standard updates. + .*\{ apt update && apt install --only-upgrade -y libawl-php \}.* + .*✔.* USN-4539-1 is resolved. + """ + When I run `ua fix CVE-2020-28196` as non-root + Then stdout matches regexp: + """ + CVE-2020-28196: Kerberos vulnerability + https://ubuntu.com/security/CVE-2020-28196 + 1 affected source package is installed: krb5 + \(1/1\) krb5: + A fix is available in Ubuntu standard updates. + The update is already installed. + .*✔.* CVE-2020-28196 is resolved. + """ + When I run `ua fix CVE-2022-24959` as non-root + Then stdout matches regexp: + """ + CVE-2022-24959: Linux kernel vulnerabilities + https://ubuntu.com/security/CVE-2022-24959 + No affected source packages are installed. + .*✔.* CVE-2022-24959 does not affect your system. + """ + + Examples: ubuntu release details + | release | + | focal | + + @series.xenial + @uses.config.contract_token + @uses.config.machine_type.lxd.container + Scenario Outline: Fix command on an unattached machine + Given a `` machine with ubuntu-advantage-tools installed + When I run `apt install -y libawl-php` with sudo + And I reboot the `` machine + And I verify that running `ua fix USN-4539-1` `as non-root` exits `1` + Then stdout matches regexp: + """ + USN-4539-1: AWL vulnerability + Found CVEs: + https://ubuntu.com/security/CVE-2020-11728 + 1 affected source package is installed: awl + \(1/1\) awl: + Sorry, no fix is available. + 1 package is still affected: awl + .*✘.* USN-4539-1 is not resolved. + """ + When I run `ua fix CVE-2020-15180` as non-root + Then stdout matches regexp: + """ + CVE-2020-15180: MariaDB vulnerabilities + https://ubuntu.com/security/CVE-2020-15180 + No affected source packages are installed. + .*✔.* CVE-2020-15180 does not affect your system. + """ + When I run `ua fix CVE-2020-28196` as non-root + Then stdout matches regexp: + """ + CVE-2020-28196: Kerberos vulnerability + https://ubuntu.com/security/CVE-2020-28196 + 1 affected source package is installed: krb5 + \(1/1\) krb5: + A fix is available in Ubuntu standard updates. + The update is already installed. + .*✔.* CVE-2020-28196 is resolved. + """ + When I run `DEBIAN_FRONTEND=noninteractive apt-get install -y expat=2.1.0-7 swish-e matanza ghostscript` with sudo + And I verify that running `ua fix CVE-2017-9233` `with sudo` exits `1` + Then stdout matches regexp: + """ + CVE-2017-9233: Expat vulnerability + https://ubuntu.com/security/CVE-2017-9233 + 3 affected source packages are installed: expat, matanza, swish-e + \(1/3, 2/3\) matanza, swish-e: + Sorry, no fix is available. 
+ \(3/3\) expat: + A fix is available in Ubuntu standard updates. + .*\{ apt update && apt install --only-upgrade -y expat \}.* + 2 packages are still affected: matanza, swish-e + .*✘.* CVE-2017-9233 is not resolved. + """ + When I fix `USN-5079-2` by attaching to a subscription with `contract_token_staging_expired` + Then stdout matches regexp + """ + USN-5079-2: curl vulnerabilities + Found CVEs: + https://ubuntu.com/security/CVE-2021-22946 + https://ubuntu.com/security/CVE-2021-22947 + 1 affected source package is installed: curl + \(1/1\) curl: + A fix is available in UA Infra. + The update is not installed because this system is not attached to a + subscription. + + Choose: \[S\]ubscribe at ubuntu.com \[A\]ttach existing token \[C\]ancel + > Enter your token \(from https://ubuntu.com/advantage\) to attach this system: + > .*\{ ua attach .*\}.* + Attach denied: + Contract ".*" expired on .* + Visit https://ubuntu.com/advantage to manage contract tokens. + 1 package is still affected: curl + .*✘.* USN-5079-2 is not resolved. + """ + When I fix `USN-5079-2` by attaching to a subscription with `contract_token` + Then stdout matches regexp: + """ + USN-5079-2: curl vulnerabilities + Found CVEs: + https://ubuntu.com/security/CVE-2021-22946 + https://ubuntu.com/security/CVE-2021-22947 + 1 affected source package is installed: curl + \(1/1\) curl: + A fix is available in UA Infra. + The update is not installed because this system is not attached to a + subscription. + + Choose: \[S\]ubscribe at ubuntu.com \[A\]ttach existing token \[C\]ancel + > Enter your token \(from https://ubuntu.com/advantage\) to attach this system: + > .*\{ ua attach .*\}.* + Updating package lists + UA Apps: ESM enabled + Updating package lists + UA Infra: ESM enabled + """ + And stdout matches regexp: + """ + .*\{ apt update && apt install --only-upgrade -y curl libcurl3-gnutls \}.* + .*✔.* USN-5079-2 is resolved. + """ + When I verify that running `ua fix USN-5051-2` `with sudo` exits `2` + Then stdout matches regexp: + """ + USN-5051-2: OpenSSL vulnerability + Found CVEs: + https://ubuntu.com/security/CVE-2021-3712 + 1 affected source package is installed: openssl + \(1/1\) openssl: + A fix is available in UA Infra. + .*\{ apt update && apt install --only-upgrade -y libssl1.0.0 openssl \}.* + A reboot is required to complete fix operation. + .*✘.* USN-5051-2 is not resolved. + """ + When I run `ua disable esm-infra` with sudo + And I run `apt-get install gzip -y` with sudo + And I run `ua fix USN-5378-4` `with sudo` and stdin `E` + Then stdout matches regexp: + """ + USN-5378-4: Gzip vulnerability + Found CVEs: + https://ubuntu.com/security/CVE-2022-1271 + 2 affected source packages are installed: gzip, xz-utils + \(1/2, 2/2\) gzip, xz-utils: + A fix is available in UA Infra. + The update is not installed because this system does not have + esm-infra enabled. + + Choose: \[E\]nable esm-infra \[C\]ancel + > .*\{ ua enable esm-infra \}.* + One moment, checking your subscription first + Updating package lists + UA Infra: ESM enabled + .*\{ apt update && apt install --only-upgrade -y gzip liblzma5 xz-utils \}.* + .*✔.* USN-5378-4 is resolved. 
+ """ + + Examples: ubuntu release details + | release | + | xenial | + + @series.bionic + @uses.config.machine_type.lxd.container + Scenario: Fix command on an unattached machine + Given a `bionic` machine with ubuntu-advantage-tools installed + When I verify that running `ua fix CVE-1800-123456` `as non-root` exits `1` + Then I will see the following on stderr: + """ + Error: CVE-1800-123456 not found. + """ + When I verify that running `ua fix USN-12345-12` `as non-root` exits `1` + Then I will see the following on stderr: + """ + Error: USN-12345-12 not found. + """ + When I verify that running `ua fix CVE-12345678-12` `as non-root` exits `1` + Then I will see the following on stderr: + """ + Error: issue "CVE-12345678-12" is not recognized. + Usage: "ua fix CVE-yyyy-nnnn" or "ua fix USN-nnnn" + """ + When I verify that running `ua fix USN-12345678-12` `as non-root` exits `1` + Then I will see the following on stderr: + """ + Error: issue "USN-12345678-12" is not recognized. + Usage: "ua fix CVE-yyyy-nnnn" or "ua fix USN-nnnn" + """ + When I run `apt install -y libawl-php` with sudo + And I verify that running `ua fix USN-4539-1` `as non-root` exits `1` + Then stdout matches regexp: + """ + USN-4539-1: AWL vulnerability + Found CVEs: + https://ubuntu.com/security/CVE-2020-11728 + 1 affected source package is installed: awl + \(1/1\) awl: + Ubuntu security engineers are investigating this issue. + 1 package is still affected: awl + .*✘.* USN-4539-1 is not resolved. + """ + When I run `ua fix CVE-2020-28196` as non-root + Then stdout matches regexp: + """ + CVE-2020-28196: Kerberos vulnerability + https://ubuntu.com/security/CVE-2020-28196 + 1 affected source package is installed: krb5 + \(1/1\) krb5: + A fix is available in Ubuntu standard updates. + The update is already installed. + .*✔.* CVE-2020-28196 is resolved. + """ + When I run `apt-get install xterm=330-1ubuntu2 -y` with sudo + And I verify that running `ua fix CVE-2021-27135` `as non-root` exits `1` + Then stdout matches regexp: + """ + CVE-2021-27135: xterm vulnerability + https://ubuntu.com/security/CVE-2021-27135 + 1 affected source package is installed: xterm + \(1/1\) xterm: + A fix is available in Ubuntu standard updates. + Package fixes cannot be installed. + To install them, run this command as root \(try using sudo\) + 1 package is still affected: xterm + .*✘.* CVE-2021-27135 is not resolved. + """ + When I run `ua fix CVE-2021-27135` with sudo + Then stdout matches regexp: + """ + CVE-2021-27135: xterm vulnerability + https://ubuntu.com/security/CVE-2021-27135 + 1 affected source package is installed: xterm + \(1/1\) xterm: + A fix is available in Ubuntu standard updates. + .*\{ apt update && apt install --only-upgrade -y xterm \}.* + .*✔.* CVE-2021-27135 is resolved. + """ + When I run `ua fix CVE-2021-27135` with sudo + Then stdout matches regexp: + """ + CVE-2021-27135: xterm vulnerability + https://ubuntu.com/security/CVE-2021-27135 + 1 affected source package is installed: xterm + \(1/1\) xterm: + A fix is available in Ubuntu standard updates. + The update is already installed. + .*✔.* CVE-2021-27135 is resolved. 
+ """ + When I run `apt-get install libbz2-1.0=1.0.6-8.1 -y --allow-downgrades` with sudo + And I run `apt-get install bzip2=1.0.6-8.1 -y` with sudo + And I run `ua fix USN-4038-3` with sudo + Then stdout matches regexp: + """ + USN-4038-3: bzip2 regression + Found Launchpad bugs: + https://launchpad.net/bugs/1834494 + 1 affected source package is installed: bzip2 + \(1/1\) bzip2: + A fix is available in Ubuntu standard updates. + .*\{ apt update && apt install --only-upgrade -y bzip2 libbz2-1.0 \}.* + .*✔.* USN-4038-3 is resolved. + """ + + diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/gcp-ids.yaml ubuntu-advantage-tools-27.9~16.04.1/features/gcp-ids.yaml --- ubuntu-advantage-tools-27.8~16.04.1/features/gcp-ids.yaml 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/gcp-ids.yaml 2022-05-18 19:44:15.000000000 +0000 @@ -1,3 +1,5 @@ -bionic: projects/ubuntu-os-pro-cloud/global/images/ubuntu-pro-1804-bionic-v20220117 -focal: projects/ubuntu-os-pro-cloud/global/images/ubuntu-pro-2004-focal-v20220110 +bionic: projects/ubuntu-os-pro-cloud/global/images/ubuntu-pro-1804-bionic-v20220411 +bionic-fips: projects/ubuntu-os-pro-cloud/global/images/ubuntu-pro-fips-1804-bionic-v20220411a +focal: projects/ubuntu-os-pro-cloud/global/images/ubuntu-pro-2004-focal-v20220411 +focal-fips: projects/ubuntu-os-pro-cloud/global/images/ubuntu-pro-fips-2004-focal-v20220411b xenial: projects/ubuntu-os-pro-cloud/global/images/ubuntu-pro-1604-xenial-v20211213 diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/install_uninstall.feature ubuntu-advantage-tools-27.9~16.04.1/features/install_uninstall.feature --- ubuntu-advantage-tools-27.8~16.04.1/features/install_uninstall.feature 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/install_uninstall.feature 2022-05-18 19:44:15.000000000 +0000 @@ -42,9 +42,10 @@ Examples: ubuntu release | release | + | xenial | | bionic | | focal | - | xenial | + | jammy | @slow @series.lts @@ -86,3 +87,4 @@ | xenial | | bionic | | focal | + | jammy | diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/license_check.feature ubuntu-advantage-tools-27.9~16.04.1/features/license_check.feature --- ubuntu-advantage-tools-27.8~16.04.1/features/license_check.feature 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/license_check.feature 1970-01-01 00:00:00.000000000 +0000 @@ -1,123 +0,0 @@ -Feature: License check timer only runs in environments where necessary - - @series.lts - @uses.config.contract_token - @uses.config.machine_type.gcp.generic - Scenario Outline: license_check job should run periodically on gcp generic lts - Given a `` machine with ubuntu-advantage-tools installed - # verify its enabled - Then I verify the `ua-license-check` systemd timer is scheduled to run within `10` minutes - # run it and verify that it didn't disable itself - When I run `systemctl start ua-license-check.service` with sudo - When I wait `5` seconds - Then I verify the `ua-license-check` systemd timer is scheduled to run within `10` minutes - # verify attach disables it - When I wait `5` seconds - When I attach `contract_token` with sudo - Then I verify the `ua-license-check` systemd timer is disabled - # verify detach enables it - When I run `ua detach --assume-yes` with sudo - And I wait `5` seconds - Then I verify the `ua-license-check` systemd timer is scheduled to run within `10` minutes - # verify stopping and deleting marker file and stopping disables it - # We need to call stop both 
before and after rm-ing the marker file - # because at least one version of systemd requires it before (245.4-4ubuntu3.11 - # on focal gcp), and every other tested version of systemd requires it after - # But this is only necessary when manually running the steps or in this test. - # `disable_license_checks_if_applicable` works fine with only calling stop after, - # as evidenced by the "verify attach disables it" steps above passing on focal gcp. - When I run `systemctl stop ua-license-check.timer` with sudo - When I run `rm /var/lib/ubuntu-advantage/marker-license-check` with sudo - When I run `systemctl stop ua-license-check.timer` with sudo - Then I verify the `ua-license-check` systemd timer is disabled - # verify creating marker file enables it - When I run `touch /var/lib/ubuntu-advantage/marker-license-check` with sudo - Then I verify the `ua-license-check` systemd timer is scheduled to run within `10` minutes - Examples: version - | release | - | xenial | - | bionic | - | focal | - | jammy | - - @series.impish - @uses.config.contract_token - @uses.config.machine_type.gcp.generic - Scenario Outline: license_check is disabled gcp generic non lts - Given a `` machine with ubuntu-advantage-tools installed - Then I verify the `ua-license-check` systemd timer is disabled - # verify creating marker file enables it, but it disables itself - When I run `touch /var/lib/ubuntu-advantage/marker-license-check` with sudo - Then I verify the `ua-license-check` systemd timer either ran within the past `5` seconds OR is scheduled to run within `10` minutes - When I run `systemctl start ua-license-check.service` with sudo - When I wait `5` seconds - Then I verify the `ua-license-check` systemd timer is disabled - # verify attach and detach does not enable it - When I wait `5` seconds - When I attach `contract_token` with sudo - When I run `ua detach --assume-yes` with sudo - When I wait `5` seconds - Then I verify the `ua-license-check` systemd timer is disabled - Examples: version - | release | - | impish | - - @series.all - @uses.config.contract_token - @uses.config.machine_type.lxd.container - @uses.config.machine_type.lxd.vm - @uses.config.machine_type.aws.generic - @uses.config.machine_type.azure.generic - Scenario Outline: license_check is disabled everywhere but gcp generic - Given a `` machine with ubuntu-advantage-tools installed - Then I verify the `ua-license-check` systemd timer is disabled - When I reboot the `` machine - Then I verify the `ua-license-check` systemd timer is disabled - # verify creating marker file enables it, but it disables itself - When I run `touch /var/lib/ubuntu-advantage/marker-license-check` with sudo - Then I verify the `ua-license-check` systemd timer either ran within the past `5` seconds OR is scheduled to run within `10` minutes - When I run `systemctl start ua-license-check.service` with sudo - When I wait `5` seconds - And I verify that running `grep "Disabling gcp_auto_attach job" /var/log/ubuntu-advantage-license-check.log` `with sudo` exits `0` - And I verify that running `grep "Disabling gcp_auto_attach job" /var/log/ubuntu-advantage.log` `with sudo` exits `1` - Then I verify the `ua-license-check` systemd timer is disabled - # verify attach and detach does not enable it - When I wait `5` seconds - When I attach `contract_token` with sudo - When I run `ua detach --assume-yes` with sudo - When I wait `5` seconds - Then I verify the `ua-license-check` systemd timer is disabled - Examples: version - | release | - | xenial | - | bionic | - | focal | - | 
impish | - | jammy | - - @series.lts - @uses.config.machine_type.aws.pro - @uses.config.machine_type.azure.pro - @uses.config.machine_type.gcp.pro - Scenario Outline: license_check is disabled everywhere but gcp generic - Given a `` machine with ubuntu-advantage-tools installed - When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: - """ - contract_url: 'https://contracts.canonical.com' - data_dir: /var/lib/ubuntu-advantage - log_level: debug - log_file: /var/log/ubuntu-advantage.log - """ - When I run `ua auto-attach` with sudo - Then I verify the `ua-license-check` systemd timer is disabled - # verify creating marker file enables it, but it disables itself - When I run `touch /var/lib/ubuntu-advantage/marker-license-check` with sudo - Then I verify the `ua-license-check` systemd timer either ran within the past `5` seconds OR is scheduled to run within `10` minutes - When I run `systemctl start ua-license-check.service` with sudo - When I wait `5` seconds - Then I verify the `ua-license-check` systemd timer is disabled - Examples: version - | release | - | xenial | - | bionic | - | focal | diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/proxy_config.feature ubuntu-advantage-tools-27.9~16.04.1/features/proxy_config.feature --- ubuntu-advantage-tools-27.8~16.04.1/features/proxy_config.feature 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/proxy_config.feature 2022-05-18 19:44:15.000000000 +0000 @@ -4,7 +4,7 @@ @slow @series.lts @uses.config.machine_type.lxd.container - Scenario Outline: Attach command when proxy is configured + Scenario Outline: Attach command when proxy is configured for uaclient Given a `` machine with ubuntu-advantage-tools installed When I launch a `focal` `proxy` machine And I run `apt install squid -y` `with sudo` on the `proxy` machine @@ -13,23 +13,29 @@ dns_v4_first on\nacl all src 0.0.0.0\/0\nhttp_access allow all """ And I run `systemctl restart squid.service` `with sudo` on the `proxy` machine - When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: + And I run `truncate -s 0 /var/log/squid/access.log` `with sudo` on the `proxy` machine + And I verify `/var/log/squid/access.log` is empty on `proxy` machine + And I run `ua config set https_proxy=http://:3128` with sudo + Then stdout matches regexp: """ - contract_url: 'https://contracts.canonical.com' - data_dir: /var/lib/ubuntu-advantage - log_level: debug - log_file: /var/log/ubuntu-advantage.log - ua_config: - http_proxy: http://:3128 - https_proxy: http://:3128 + Setting snap proxy """ - And I verify `/var/log/squid/access.log` is empty on `proxy` machine - # We need this for the route command - And I run `apt-get install net-tools` with sudo - # We will guarantee that the machine will only use the proxy when - # running the ua commands - And I run `route del default` with sudo - And I attach `contract_token` with sudo and options `--no-auto-enable` + When I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + .*CONNECT api.snapcraft.io.* + """ + When I run `ua config set http_proxy=http://:3128` with sudo + Then stdout matches regexp: + """ + Setting snap proxy + """ + When I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + .*HEAD http://api.snapcraft.io.* + """ + When I attach `contract_token` with sudo and options `--no-auto-enable` And I run `cat /var/log/squid/access.log` `with sudo` on the 
`proxy` machine Then stdout matches regexp: """ @@ -42,22 +48,26 @@ esm-infra +yes +disabled +UA Infra: Extended Security Maintenance \(ESM\) """ When I run `truncate -s 0 /var/log/squid/access.log` `with sudo` on the `proxy` machine - When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: + And I verify `/var/log/squid/access.log` is empty on `proxy` machine + And I run `ua config set ua_apt_http_proxy=http://:3128` with sudo + Then stdout matches regexp: """ - contract_url: 'https://contracts.canonical.com' - data_dir: /var/lib/ubuntu-advantage - log_level: debug - log_file: /var/log/ubuntu-advantage.log - ua_config: - apt_http_proxy: http://:3128 - apt_https_proxy: http://:3128 + Setting UA-scoped APT proxy """ - And I verify `/var/log/squid/access.log` is empty on `proxy` machine - Then I verify that no files exist matching `/etc/apt/apt.conf.d/90ubuntu-advantage-aptproxy` - When I run `ua refresh config` with sudo + When I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + .*HEAD http://archive.ubuntu.com.* + """ + When I run `ua config set ua_apt_https_proxy=http://:3128` with sudo + Then stdout matches regexp: + """ + Setting UA-scoped APT proxy + """ + When I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine Then stdout matches regexp: """ - Setting APT proxy + .*CONNECT esm.ubuntu.com.* """ Then I verify that files exist matching `/etc/apt/apt.conf.d/90ubuntu-advantage-aptproxy` When I run `cat /etc/apt/apt.conf.d/90ubuntu-advantage-aptproxy` with sudo @@ -67,52 +77,62 @@ \* Autogenerated by ubuntu-advantage-tools \* Do not edit this file directly \* - \* To change what ubuntu-advantage-tools sets, run one of the following: - \* Substitute "apt_https_proxy" for "apt_http_proxy" as necessary. - \* sudo ua config set apt_http_proxy= - \* sudo ua config unset apt_http_proxy + \* To change what ubuntu-advantage-tools sets, use the `ua config set` + \* or the `ua config unset` commands to set/unset either: + \* global_apt_http_proxy and global_apt_https_proxy + \* for a global apt proxy + \* or + \* ua_apt_http_proxy and ua_apt_https_proxy + \* for an apt proxy that only applies to UA related repos. 
\*/ - Acquire::http::Proxy ".*"; - Acquire::https::Proxy ".*"; + Acquire::http::Proxy::esm.ubuntu.com \".*\"; + Acquire::https::Proxy::esm.ubuntu.com \".*\"; """ - When I run `apt update` with sudo + When I run `apt-get update` with sudo And I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine Then stdout matches regexp: """ + CONNECT esm.ubuntu.com:443 + """ + Then stdout does not match regexp: + """ .*GET.*ubuntu.com/ubuntu/dists.* """ - When I run `ua config unset apt_http_proxy` with sudo - And I run `ua config unset apt_https_proxy` with sudo + Then stdout does not match regexp: + """ + .*GET.*archive.ubuntu.com.* + """ + Then stdout does not match regexp: + """ + .*GET.*security.ubuntu.com.* + """ + When I run `ua config unset ua_apt_http_proxy` with sudo + And I run `ua config unset ua_apt_https_proxy` with sudo And I run `ua refresh config` with sudo Then I verify that no files exist matching `/etc/apt/apt.conf.d/90ubuntu-advantage-aptproxy` When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: """ ua_config: - apt_http_proxy: "invalidurl" - apt_https_proxy: "invalidurls" + ua_apt_http_proxy: "invalidurl" + ua_apt_https_proxy: "invalidurls" """ And I verify that running `ua refresh config` `with sudo` exits `1` Then stderr matches regexp: """ - "invalidurl" is not a valid url. Not setting as proxy. + \"invalidurl\" is not a valid url. Not setting as proxy. """ When I verify that running `ua config set http_proxy=http://host:port` `with sudo` exits `1` Then stderr matches regexp: """ - "http://host:port" is not a valid url. Not setting as proxy - """ - When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: - """ - ua_config: - apt_https_proxy: "https://localhost:12345" + \"http://host:port\" is not a valid url. Not setting as proxy """ - And I verify that running `ua refresh config` `with sudo` exits `1` + And I verify that running `ua config set ua_apt_https_proxy=https://localhost:12345` `with sudo` exits `1` Then stderr matches regexp: """ - "https://localhost:12345" is not working. Not setting as proxy. + \"https://localhost:12345\" is not working. Not setting as proxy. """ - When I run `ua config set apt_http_proxy=http://:3128` with sudo - And I run `ua config set apt_https_proxy=http://:3128` with sudo + When I run `ua config set ua_apt_http_proxy=http://:3128` with sudo + And I run `ua config set ua_apt_https_proxy=http://:3128` with sudo When I run `cat /etc/apt/apt.conf.d/90ubuntu-advantage-aptproxy` with sudo Then stdout matches regexp: """ @@ -120,13 +140,16 @@ \* Autogenerated by ubuntu-advantage-tools \* Do not edit this file directly \* - \* To change what ubuntu-advantage-tools sets, run one of the following: - \* Substitute "apt_https_proxy" for "apt_http_proxy" as necessary. - \* sudo ua config set apt_http_proxy= - \* sudo ua config unset apt_http_proxy + \* To change what ubuntu-advantage-tools sets, use the `ua config set` + \* or the `ua config unset` commands to set/unset either: + \* global_apt_http_proxy and global_apt_https_proxy + \* for a global apt proxy + \* or + \* ua_apt_http_proxy and ua_apt_https_proxy + \* for an apt proxy that only applies to UA related repos. 
\*/ - Acquire::http::Proxy ".*"; - Acquire::https::Proxy ".*"; + Acquire::http::Proxy::esm.ubuntu.com \".*\"; + Acquire::https::Proxy::esm.ubuntu.com \".*\"; """ Examples: ubuntu release @@ -134,6 +157,7 @@ | xenial | | bionic | | focal | + | jammy | @slow @series.xenial @@ -199,7 +223,7 @@ """ No proxy set in config; however, proxy is configured for: snap, livepatch. See https://discourse.ubuntu.com/t/ubuntu-advantage-client/21788 for more information on ua proxy configuration. - + Successfully processed your ua configuration. """ When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: @@ -211,7 +235,7 @@ And I verify that running `ua refresh config` `with sudo` exits `1` Then stderr matches regexp: """ - "invalidurl" is not a valid url. Not setting as proxy. + \"invalidurl\" is not a valid url. Not setting as proxy. """ When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: """ @@ -221,9 +245,9 @@ And I verify that running `ua refresh config` `with sudo` exits `1` Then stderr matches regexp: """ - "https://localhost:12345" is not working. Not setting as proxy. + \"https://localhost:12345\" is not working. Not setting as proxy. """ - + Examples: ubuntu release | release | | xenial | @@ -232,7 +256,7 @@ @slow @series.lts @uses.config.machine_type.lxd.container - Scenario Outline: Attach command when authenticated proxy is configured + Scenario Outline: Attach command when authenticated proxy is configured for uaclient Given a `` machine with ubuntu-advantage-tools installed When I launch a `focal` `proxy` machine And I run `apt install squid apache2-utils -y` `with sudo` on the `proxy` machine @@ -242,58 +266,87 @@ dns_v4_first on\nauth_param basic program \/usr\/lib\/squid\/basic_ncsa_auth \/etc\/squid\/passwordfile\nacl topsecret proxy_auth REQUIRED\nhttp_access allow topsecret """ And I run `systemctl restart squid.service` `with sudo` on the `proxy` machine - When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: + And I run `truncate -s 0 /var/log/squid/access.log` `with sudo` on the `proxy` machine + And I verify `/var/log/squid/access.log` is empty on `proxy` machine + When I run `ua config set https_proxy=http://someuser:somepassword@:3128` with sudo + Then stdout matches regexp: """ - contract_url: 'https://contracts.canonical.com' - data_dir: /var/lib/ubuntu-advantage - log_level: debug - log_file: /var/log/ubuntu-advantage.log - ua_config: - http_proxy: http://someuser:somepassword@:3128 - https_proxy: http://someuser:somepassword@:3128 + Setting snap proxy """ - And I verify `/var/log/squid/access.log` is empty on `proxy` machine - And I attach `contract_token` with sudo + When I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + .*CONNECT api.snapcraft.io.* + """ + When I run `ua config set http_proxy=http://someuser:somepassword@:3128` with sudo + Then stdout matches regexp: + """ + Setting snap proxy + """ + When I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + .*HEAD http://api.snapcraft.io.* + """ + When I attach `contract_token` with sudo And I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine Then stdout matches regexp: """ .*CONNECT contracts.canonical.com.* """ When I run `truncate -s 0 /var/log/squid/access.log` `with sudo` on the `proxy` machine - When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: + And I verify 
`/var/log/squid/access.log` is empty on `proxy` machine + When I run `ua config set ua_apt_http_proxy=http://someuser:somepassword@:3128` with sudo + Then stdout matches regexp: """ - contract_url: 'https://contracts.canonical.com' - data_dir: /var/lib/ubuntu-advantage - log_level: debug - log_file: /var/log/ubuntu-advantage.log - ua_config: - apt_http_proxy: http://someuser:somepassword@:3128 - apt_https_proxy: http://someuser:somepassword@:3128 + Setting UA-scoped APT proxy """ - And I verify `/var/log/squid/access.log` is empty on `proxy` machine - And I run `ua refresh config` with sudo - And I run `apt update` with sudo + When I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + .*HEAD http://archive.ubuntu.com.* + """ + When I run `ua config set ua_apt_https_proxy=http://someuser:somepassword@:3128` with sudo + Then stdout matches regexp: + """ + Setting UA-scoped APT proxy + """ + When I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + .*CONNECT esm.ubuntu.com.* + """ + When I run `ua refresh config` with sudo + And I run `apt-get update` with sudo And I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine Then stdout matches regexp: """ + CONNECT esm.ubuntu.com:443 + """ + Then stdout does not match regexp: + """ .*GET.*ubuntu.com/ubuntu/dists.* """ - When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: + Then stdout does not match regexp: """ - ua_config: - apt_https_proxy: http://wronguser:wrongpassword@:3128 + .*GET.*archive.ubuntu.com.* """ - And I verify that running `ua refresh config` `with sudo` exits `1` + Then stdout does not match regexp: + """ + .*GET.*security.ubuntu.com.* + """ + And I verify that running `ua config set ua_apt_https_proxy=http://wronguser:wrongpassword@:3128` `with sudo` exits `1` Then stderr matches regexp: """ - "http://wronguser:wrongpassword@.*:3128" is not working. Not setting as proxy. + \"http://wronguser:wrongpassword@.*:3128\" is not working. Not setting as proxy. 
""" - + Examples: ubuntu release | release | | xenial | | bionic | | focal | + | jammy | @slow @series.xenial @@ -309,14 +362,11 @@ dns_v4_first on\nauth_param basic program \/usr\/lib\/squid\/basic_ncsa_auth \/etc\/squid\/passwordfile\nacl topsecret proxy_auth REQUIRED\nhttp_access allow topsecret """ And I run `systemctl restart squid.service` `with sudo` on the `proxy` machine + And I run `truncate -s 0 /var/log/squid/access.log` `with sudo` on the `proxy` machine + And I verify `/var/log/squid/access.log` is empty on `proxy` machine And I run `ua config set http_proxy=http://someuser:somepassword@:3128` with sudo And I run `ua config set https_proxy=http://someuser:somepassword@:3128` with sudo - And I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine - Then stdout matches regexp: - """ - .*CONNECT api.snapcraft.io:443.* - """ - When I attach `contract_token` with sudo + And I attach `contract_token` with sudo Then stdout matches regexp: """ Setting snap proxy @@ -343,8 +393,789 @@ """ .*CONNECT livepatch.canonical.com:443.* """ + + Examples: ubuntu release + | release | + | xenial | + | bionic | + + @slow + @series.lts + @uses.config.machine_type.lxd.container + Scenario Outline: Attach command when proxy is configured manually via conf file for uaclient + Given a `` machine with ubuntu-advantage-tools installed + When I launch a `focal` `proxy` machine + And I run `apt install squid -y` `with sudo` on the `proxy` machine + And I add this text on `/etc/squid/squid.conf` on `proxy` above `http_access deny all`: + """ + dns_v4_first on\nacl all src 0.0.0.0\/0\nhttp_access allow all + """ + And I run `systemctl restart squid.service` `with sudo` on the `proxy` machine + When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: + """ + contract_url: 'https://contracts.canonical.com' + data_dir: /var/lib/ubuntu-advantage + log_level: debug + log_file: /var/log/ubuntu-advantage.log + ua_config: + http_proxy: http://:3128 + https_proxy: http://:3128 + """ + And I verify `/var/log/squid/access.log` is empty on `proxy` machine + # We need this for the route command + And I attach `contract_token` with sudo and options `--no-auto-enable` + And I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + .*CONNECT contracts.canonical.com.* + """ + When I run `ua status` with sudo + # Just to verify that the machine is attached + Then stdout matches regexp: + """ + esm-infra +yes +disabled +UA Infra: Extended Security Maintenance \(ESM\) + """ + When I run `truncate -s 0 /var/log/squid/access.log` `with sudo` on the `proxy` machine + When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: + """ + contract_url: 'https://contracts.canonical.com' + data_dir: /var/lib/ubuntu-advantage + log_level: debug + log_file: /var/log/ubuntu-advantage.log + ua_config: + ua_apt_http_proxy: http://:3128 + ua_apt_https_proxy: http://:3128 + """ + And I verify `/var/log/squid/access.log` is empty on `proxy` machine + Then I verify that no files exist matching `/etc/apt/apt.conf.d/90ubuntu-advantage-aptproxy` + When I run `ua refresh config` with sudo + Then stdout matches regexp: + """ + Setting UA-scoped APT proxy + """ + Then I verify that files exist matching `/etc/apt/apt.conf.d/90ubuntu-advantage-aptproxy` + When I run `cat /etc/apt/apt.conf.d/90ubuntu-advantage-aptproxy` with sudo + Then stdout matches regexp: + """ + /\* + \* Autogenerated by ubuntu-advantage-tools + \* Do not edit this file 
directly + \* + \* To change what ubuntu-advantage-tools sets, use the `ua config set` + \* or the `ua config unset` commands to set/unset either: + \* global_apt_http_proxy and global_apt_https_proxy + \* for a global apt proxy + \* or + \* ua_apt_http_proxy and ua_apt_https_proxy + \* for an apt proxy that only applies to UA related repos. + \*/ + Acquire::http::Proxy::esm.ubuntu.com \".*\"; + Acquire::https::Proxy::esm.ubuntu.com \".*\"; + """ + When I run `apt-get update` with sudo + And I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + CONNECT esm.ubuntu.com:443 + """ + Then stdout does not match regexp: + """ + .*GET.*ubuntu.com/ubuntu/dists.* + """ + Then stdout does not match regexp: + """ + .*GET.*archive.ubuntu.com.* + """ + Then stdout does not match regexp: + """ + .*GET.*security.ubuntu.com.* + """ + When I run `ua config unset ua_apt_http_proxy` with sudo + And I run `ua config unset ua_apt_https_proxy` with sudo + And I run `ua refresh config` with sudo + Then I verify that no files exist matching `/etc/apt/apt.conf.d/90ubuntu-advantage-aptproxy` + When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: + """ + ua_config: + ua_apt_http_proxy: "invalidurl" + ua_apt_https_proxy: "invalidurls" + """ + And I verify that running `ua refresh config` `with sudo` exits `1` + Then stderr matches regexp: + """ + \"invalidurl\" is not a valid url. Not setting as proxy. + """ + When I verify that running `ua config set http_proxy=http://host:port` `with sudo` exits `1` + Then stderr matches regexp: + """ + \"http://host:port\" is not a valid url. Not setting as proxy + """ + When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: + """ + ua_config: + ua_apt_https_proxy: "https://localhost:12345" + """ + And I verify that running `ua refresh config` `with sudo` exits `1` + Then stderr matches regexp: + """ + \"https://localhost:12345\" is not working. Not setting as proxy. + """ + When I run `ua config set ua_apt_http_proxy=http://:3128` with sudo + And I run `ua config set ua_apt_https_proxy=http://:3128` with sudo + When I run `cat /etc/apt/apt.conf.d/90ubuntu-advantage-aptproxy` with sudo + Then stdout matches regexp: + """ + /\* + \* Autogenerated by ubuntu-advantage-tools + \* Do not edit this file directly + \* + \* To change what ubuntu-advantage-tools sets, use the `ua config set` + \* or the `ua config unset` commands to set/unset either: + \* global_apt_http_proxy and global_apt_https_proxy + \* for a global apt proxy + \* or + \* ua_apt_http_proxy and ua_apt_https_proxy + \* for an apt proxy that only applies to UA related repos. 
+ \*/ + Acquire::http::Proxy::esm.ubuntu.com \".*\"; + Acquire::https::Proxy::esm.ubuntu.com \".*\"; + """ Examples: ubuntu release | release | | xenial | | bionic | + | focal | + | jammy | + + @slow + @series.lts + @uses.config.machine_type.lxd.container + Scenario Outline: Attach command when authenticated proxy is configured manually for uaclient + Given a `` machine with ubuntu-advantage-tools installed + When I launch a `focal` `proxy` machine + And I run `apt install squid apache2-utils -y` `with sudo` on the `proxy` machine + And I run `htpasswd -bc /etc/squid/passwordfile someuser somepassword` `with sudo` on the `proxy` machine + And I add this text on `/etc/squid/squid.conf` on `proxy` above `http_access deny all`: + """ + dns_v4_first on\nauth_param basic program \/usr\/lib\/squid\/basic_ncsa_auth \/etc\/squid\/passwordfile\nacl topsecret proxy_auth REQUIRED\nhttp_access allow topsecret + """ + And I run `systemctl restart squid.service` `with sudo` on the `proxy` machine + When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: + """ + contract_url: 'https://contracts.canonical.com' + data_dir: /var/lib/ubuntu-advantage + log_level: debug + log_file: /var/log/ubuntu-advantage.log + ua_config: + http_proxy: http://someuser:somepassword@:3128 + https_proxy: http://someuser:somepassword@:3128 + """ + And I verify `/var/log/squid/access.log` is empty on `proxy` machine + And I attach `contract_token` with sudo + And I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + .*CONNECT contracts.canonical.com.* + """ + When I run `truncate -s 0 /var/log/squid/access.log` `with sudo` on the `proxy` machine + When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: + """ + contract_url: 'https://contracts.canonical.com' + data_dir: /var/lib/ubuntu-advantage + log_level: debug + log_file: /var/log/ubuntu-advantage.log + ua_config: + ua_apt_http_proxy: http://someuser:somepassword@:3128 + ua_apt_https_proxy: http://someuser:somepassword@:3128 + """ + And I verify `/var/log/squid/access.log` is empty on `proxy` machine + And I run `ua refresh config` with sudo + And I run `apt-get update` with sudo + And I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + CONNECT esm.ubuntu.com:443 + """ + Then stdout does not match regexp: + """ + .*GET.*ubuntu.com/ubuntu/dists.* + """ + Then stdout does not match regexp: + """ + .*GET.*archive.ubuntu.com.* + """ + Then stdout does not match regexp: + """ + .*GET.*security.ubuntu.com.* + """ + When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: + """ + ua_config: + ua_apt_https_proxy: http://wronguser:wrongpassword@:3128 + """ + And I verify that running `ua refresh config` `with sudo` exits `1` + Then stderr matches regexp: + """ + \"http://wronguser:wrongpassword@.*:3128\" is not working. Not setting as proxy. 
+ """ + + Examples: ubuntu release + | release | + | xenial | + | bionic | + | focal | + | jammy | + + @slow + @series.lts + @uses.config.machine_type.lxd.container + Scenario Outline: Attach command when proxy is configured globally + Given a `` machine with ubuntu-advantage-tools installed + When I launch a `focal` `proxy` machine + And I run `apt install squid -y` `with sudo` on the `proxy` machine + And I add this text on `/etc/squid/squid.conf` on `proxy` above `http_access deny all`: + """ + dns_v4_first on\nacl all src 0.0.0.0\/0\nhttp_access allow all + """ + And I run `systemctl restart squid.service` `with sudo` on the `proxy` machine + And I run `truncate -s 0 /var/log/squid/access.log` `with sudo` on the `proxy` machine + And I verify `/var/log/squid/access.log` is empty on `proxy` machine + And I run `ua config set https_proxy=http://:3128` with sudo + Then stdout matches regexp: + """ + Setting snap proxy + """ + When I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + .*CONNECT api.snapcraft.io.* + """ + When I run `ua config set http_proxy=http://:3128` with sudo + Then stdout matches regexp: + """ + Setting snap proxy + """ + When I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + .*HEAD http://api.snapcraft.io.* + """ + # We need this for the route command + When I run `apt-get install net-tools` with sudo + # We will guarantee that the machine will only use the proxy when + # running the ua commands + And I run `route del default` with sudo + And I attach `contract_token` with sudo and options `--no-auto-enable` + And I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + .*CONNECT contracts.canonical.com.* + """ + When I run `ua status` with sudo + # Just to verify that the machine is attached + Then stdout matches regexp: + """ + esm-infra +yes +disabled +UA Infra: Extended Security Maintenance \(ESM\) + """ + When I run `truncate -s 0 /var/log/squid/access.log` `with sudo` on the `proxy` machine + And I verify `/var/log/squid/access.log` is empty on `proxy` machine + And I run `ua config set global_apt_http_proxy=http://:3128` with sudo + Then stdout matches regexp: + """ + Setting global APT proxy + """ + When I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + .*HEAD http://archive.ubuntu.com.* + """ + When I run `ua config set global_apt_https_proxy=http://:3128` with sudo + Then stdout matches regexp: + """ + Setting global APT proxy + """ + When I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + .*CONNECT esm.ubuntu.com.* + """ + # TODO No longer empty, needs researching + Then I verify that files exist matching `/etc/apt/apt.conf.d/90ubuntu-advantage-aptproxy` + When I run `cat /etc/apt/apt.conf.d/90ubuntu-advantage-aptproxy` with sudo + Then stdout matches regexp: + """ + /\* + \* Autogenerated by ubuntu-advantage-tools + \* Do not edit this file directly + \* + \* To change what ubuntu-advantage-tools sets, use the `ua config set` + \* or the `ua config unset` commands to set/unset either: + \* global_apt_http_proxy and global_apt_https_proxy + \* for a global apt proxy + \* or + \* ua_apt_http_proxy and ua_apt_https_proxy + \* for an apt proxy that only applies to UA related repos. 
+ \*/ + Acquire::http::Proxy \".*\"; + Acquire::https::Proxy \".*\"; + """ + When I run `apt-get update` with sudo + And I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + CONNECT esm.ubuntu.com:443 + """ + Then stdout matches regexp: + """ + .*GET.*ubuntu.com/ubuntu/dists.* + """ + Then stdout matches regexp: + """ + .*GET.*archive.ubuntu.com.* + """ + Then stdout matches regexp: + """ + .*GET.*security.ubuntu.com.* + """ + When I run `ua config unset global_apt_http_proxy` with sudo + And I run `ua config unset global_apt_https_proxy` with sudo + And I run `ua refresh config` with sudo + Then I verify that no files exist matching `/etc/apt/apt.conf.d/90ubuntu-advantage-aptproxy` + When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: + """ + ua_config: + global_apt_http_proxy: "invalidurl" + global_https_proxy: "invalidurls" + """ + And I verify that running `ua refresh config` `with sudo` exits `1` + Then stderr matches regexp: + """ + \"invalidurl\" is not a valid url. Not setting as proxy. + """ + When I verify that running `ua config set http_proxy=http://host:port` `with sudo` exits `1` + Then stderr matches regexp: + """ + \"http://host:port\" is not a valid url. Not setting as proxy + """ + And I verify that running `ua config set global_apt_https_proxy=https://localhost:12345` `with sudo` exits `1` + Then stderr matches regexp: + """ + \"https://localhost:12345\" is not working. Not setting as proxy. + """ + When I run `ua config set global_apt_http_proxy=http://:3128` with sudo + And I run `ua config set global_apt_https_proxy=http://:3128` with sudo + When I run `cat /etc/apt/apt.conf.d/90ubuntu-advantage-aptproxy` with sudo + Then stdout matches regexp: + """ + /\* + \* Autogenerated by ubuntu-advantage-tools + \* Do not edit this file directly + \* + \* To change what ubuntu-advantage-tools sets, use the `ua config set` + \* or the `ua config unset` commands to set/unset either: + \* global_apt_http_proxy and global_apt_https_proxy + \* for a global apt proxy + \* or + \* ua_apt_http_proxy and ua_apt_https_proxy + \* for an apt proxy that only applies to UA related repos. 
+ \*/ + Acquire::http::Proxy \".*\"; + Acquire::https::Proxy \".*\"; + """ + + Examples: ubuntu release + | release | + | xenial | + | bionic | + | focal | + | jammy | + + @slow + @series.lts + @uses.config.machine_type.lxd.container + Scenario Outline: Attach command when authenticated proxy is configured globally + Given a `` machine with ubuntu-advantage-tools installed + When I launch a `focal` `proxy` machine + And I run `apt install squid apache2-utils -y` `with sudo` on the `proxy` machine + And I run `htpasswd -bc /etc/squid/passwordfile someuser somepassword` `with sudo` on the `proxy` machine + And I add this text on `/etc/squid/squid.conf` on `proxy` above `http_access deny all`: + """ + dns_v4_first on\nauth_param basic program \/usr\/lib\/squid\/basic_ncsa_auth \/etc\/squid\/passwordfile\nacl topsecret proxy_auth REQUIRED\nhttp_access allow topsecret + """ + And I run `systemctl restart squid.service` `with sudo` on the `proxy` machine + And I run `truncate -s 0 /var/log/squid/access.log` `with sudo` on the `proxy` machine + And I verify `/var/log/squid/access.log` is empty on `proxy` machine + When I run `ua config set https_proxy=http://someuser:somepassword@:3128` with sudo + Then stdout matches regexp: + """ + Setting snap proxy + """ + When I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + .*CONNECT api.snapcraft.io.* + """ + When I run `ua config set http_proxy=http://someuser:somepassword@:3128` with sudo + Then stdout matches regexp: + """ + Setting snap proxy + """ + When I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + .*HEAD http://api.snapcraft.io.* + """ + When I run `apt-get install net-tools` with sudo + # We will guarantee that the machine will only use the proxy when + # running the ua commands + And I run `route del default` with sudo + And I attach `contract_token` with sudo and options `--no-auto-enable` + And I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + .*CONNECT contracts.canonical.com.* + """ + When I run `truncate -s 0 /var/log/squid/access.log` `with sudo` on the `proxy` machine + And I verify `/var/log/squid/access.log` is empty on `proxy` machine + When I run `ua config set global_apt_http_proxy=http://someuser:somepassword@:3128` with sudo + Then stdout matches regexp: + """ + Setting global APT proxy + """ + When I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + .*HEAD http://archive.ubuntu.com.* + """ + When I run `ua config set global_apt_https_proxy=http://someuser:somepassword@:3128` with sudo + Then stdout matches regexp: + """ + Setting global APT proxy + """ + When I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + .*CONNECT esm.ubuntu.com.* + """ + When I run `ua refresh config` with sudo + And I run `apt-get update` with sudo + And I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + CONNECT esm.ubuntu.com:443 + """ + Then stdout matches regexp: + """ + .*GET.*ubuntu.com/ubuntu/dists.* + """ + Then stdout matches regexp: + """ + .*GET.*archive.ubuntu.com.* + """ + Then stdout matches regexp: + """ + .*GET.*security.ubuntu.com.* + """ + And I verify that running `ua config set global_apt_https_proxy=http://wronguser:wrongpassword@:3128` `with sudo` exits `1` + Then stderr matches 
regexp: + """ + \"http://wronguser:wrongpassword@.*:3128\" is not working. Not setting as proxy. + """ + + Examples: ubuntu release + | release | + | xenial | + | bionic | + | focal | + | jammy | + + @slow + @series.lts + @uses.config.machine_type.lxd.container + Scenario Outline: Get warning when configuring global or uaclient proxy + Given a `` machine with ubuntu-advantage-tools installed + When I launch a `focal` `proxy` machine + And I run `apt install squid -y` `with sudo` on the `proxy` machine + And I add this text on `/etc/squid/squid.conf` on `proxy` above `http_access deny all`: + """ + dns_v4_first on\nacl all src 0.0.0.0\/0\nhttp_access allow all + """ + And I run `systemctl restart squid.service` `with sudo` on the `proxy` machine + And I run `truncate -s 0 /var/log/squid/access.log` `with sudo` on the `proxy` machine + And I verify `/var/log/squid/access.log` is empty on `proxy` machine + And I run `ua config set global_apt_http_proxy=http://:3128` with sudo + And I run `ua config set global_apt_https_proxy=http://:3128` with sudo + Then I verify that files exist matching `/etc/apt/apt.conf.d/90ubuntu-advantage-aptproxy` + When I run `cat /etc/apt/apt.conf.d/90ubuntu-advantage-aptproxy` with sudo + Then stdout matches regexp: + """ + /\* + \* Autogenerated by ubuntu-advantage-tools + \* Do not edit this file directly + \* + \* To change what ubuntu-advantage-tools sets, use the `ua config set` + \* or the `ua config unset` commands to set/unset either: + \* global_apt_http_proxy and global_apt_https_proxy + \* for a global apt proxy + \* or + \* ua_apt_http_proxy and ua_apt_https_proxy + \* for an apt proxy that only applies to UA related repos. + \*/ + Acquire::http::Proxy \".*\"; + Acquire::https::Proxy \".*\"; + """ + When I run `apt-get update` with sudo + And I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + CONNECT esm.ubuntu.com:443 + """ + Then stdout matches regexp: + """ + .*GET.*ubuntu.com/ubuntu/dists.* + """ + Then stdout matches regexp: + """ + .*GET.*archive.ubuntu.com.* + """ + Then stdout matches regexp: + """ + .*GET.*security.ubuntu.com.* + """ + When I run `ua config set ua_apt_http_proxy=http://:3128` with sudo + Then stdout matches regexp: + """ + Warning: Setting the ua scoped apt proxy will overwrite the global apt + proxy previously set via `ua config`. + """ + When I run `ua config set ua_apt_https_proxy=http://:3128` with sudo + Then stdout does not match regexp: + """ + Warning: Setting the ua scoped apt proxy will overwrite the global apt + proxy previously set via `ua config`. + """ + When I run `ua config show` with sudo + Then stdout matches regexp: + """ + global_apt_http_proxy +None + """ + Then stdout matches regexp: + """ + global_apt_https_proxy +None + """ + When I run `ua config unset ua_apt_http_proxy` with sudo + And I run `ua config unset ua_apt_https_proxy` with sudo + And I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: + """ + ua_config: + apt_http_proxy: http://:3128 + """ + And I verify that running `ua refresh config` `with sudo` exits `0` + Then stdout matches regexp: + """ + Using deprecated "apt_http_proxy" config field. 
+ Please migrate to using "global_apt_http_proxy" + """ + When I run `ua config show` with sudo + Then stdout matches regexp: + """ + global_apt_http_proxy +http://:3128 + """ + Then stdout matches regexp: + """ + apt_http_proxy +None + """ + When I run `ua config unset global_apt_http_proxy` with sudo + And I run `ua config unset global_apt_https_proxy` with sudo + And I run `ua config unset ua_apt_http_proxy` with sudo + And I run `ua config unset ua_apt_https_proxy` with sudo + And I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: + """ + ua_config: + global_apt_http_proxy: http://:3128 + ua_apt_http_proxy: http://:3128 + """ + And I verify that running `ua refresh config` `with sudo` exits `1` + Then stderr matches regexp: + """ + Error: Setting global apt proxy and ua scoped apt proxy + at the same time is unsupported. + Cancelling config process operation. + """ + When I run `ua config show` with sudo + Then stdout matches regexp: + """ + global_apt_http_proxy +http://:3128 + """ + Then stdout matches regexp: + """ + ua_apt_http_proxy +http://:3128 + """ + Then I verify that no files exist matching `/etc/apt/apt.conf.d/90ubuntu-advantage-aptproxy` + When I run `ua config set global_apt_http_proxy=http://:3128` with sudo + And I run `ua config set global_apt_https_proxy=http://:3128` with sudo + When I run `cat /etc/apt/apt.conf.d/90ubuntu-advantage-aptproxy` with sudo + Then stdout matches regexp: + """ + /\* + \* Autogenerated by ubuntu-advantage-tools + \* Do not edit this file directly + \* + \* To change what ubuntu-advantage-tools sets, use the `ua config set` + \* or the `ua config unset` commands to set/unset either: + \* global_apt_http_proxy and global_apt_https_proxy + \* for a global apt proxy + \* or + \* ua_apt_http_proxy and ua_apt_https_proxy + \* for an apt proxy that only applies to UA related repos. + \*/ + Acquire::http::Proxy \".*\"; + Acquire::https::Proxy \".*\"; + """ + + Examples: ubuntu release + | release | + | xenial | + | bionic | + | focal | + | jammy | + + @slow + @series.lts + @uses.config.machine_type.lxd.container + Scenario Outline: apt_http(s)_proxy still works + Given a `` machine with ubuntu-advantage-tools installed + When I launch a `focal` `proxy` machine + And I run `apt install squid -y` `with sudo` on the `proxy` machine + And I add this text on `/etc/squid/squid.conf` on `proxy` above `http_access deny all`: + """ + dns_v4_first on\nacl all src 0.0.0.0\/0\nhttp_access allow all + """ + And I run `systemctl restart squid.service` `with sudo` on the `proxy` machine + And I run `truncate -s 0 /var/log/squid/access.log` `with sudo` on the `proxy` machine + And I verify `/var/log/squid/access.log` is empty on `proxy` machine + And I attach `contract_token` with sudo + When I run `ua status --format yaml` with sudo + Then stdout matches regexp: + """ + attached: true + """ + When I run `truncate -s 0 /var/log/squid/access.log` `with sudo` on the `proxy` machine + And I verify `/var/log/squid/access.log` is empty on `proxy` machine + And I run `ua config set apt_http_proxy=http://:3128` with sudo + Then stdout matches regexp: + """ + Warning: apt_http_proxy has been renamed to global_apt_http_proxy. 
+ Setting global APT proxy + """ + When I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + .*HEAD http://archive.ubuntu.com.* + """ + When I run `ua config set apt_https_proxy=http://:3128` with sudo + Then stdout matches regexp: + """ + Warning: apt_https_proxy has been renamed to global_apt_https_proxy. + Setting global APT proxy + """ + When I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + CONNECT esm.ubuntu.com + """ + Then I verify that files exist matching `/etc/apt/apt.conf.d/90ubuntu-advantage-aptproxy` + When I run `cat /etc/apt/apt.conf.d/90ubuntu-advantage-aptproxy` with sudo + Then stdout matches regexp: + """ + /\* + \* Autogenerated by ubuntu-advantage-tools + \* Do not edit this file directly + \* + \* To change what ubuntu-advantage-tools sets, use the `ua config set` + \* or the `ua config unset` commands to set/unset either: + \* global_apt_http_proxy and global_apt_https_proxy + \* for a global apt proxy + \* or + \* ua_apt_http_proxy and ua_apt_https_proxy + \* for an apt proxy that only applies to UA related repos. + \*/ + Acquire::http::Proxy \".*:3128\"; + Acquire::https::Proxy \".*:3128\"; + """ + When I run `truncate -s 0 /var/log/squid/access.log` `with sudo` on the `proxy` machine + And I verify `/var/log/squid/access.log` is empty on `proxy` machine + When I run `apt-get update` with sudo + And I run `cat /var/log/squid/access.log` `with sudo` on the `proxy` machine + Then stdout matches regexp: + """ + CONNECT esm.ubuntu.com:443 + """ + Then stdout matches regexp: + """ + GET.*ubuntu.com/ubuntu/dists + """ + Then stdout matches regexp: + """ + GET.*archive.ubuntu.com + """ + Then stdout matches regexp: + """ + GET.*security.ubuntu.com + """ + When I run `ua config unset apt_http_proxy` with sudo + And I run `ua config unset apt_https_proxy` with sudo + And I run `ua refresh config` with sudo + Then I verify that no files exist matching `/etc/apt/apt.conf.d/90ubuntu-advantage-aptproxy` + When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: + """ + ua_config: + apt_http_proxy: http://:3128 + apt_https_proxy: http://:3128 + """ + When I run `ua refresh config` with sudo + Then stdout matches regexp: + """ + Using deprecated "apt_http_proxy" config field. + Please migrate to using "global_apt_http_proxy" + + Using deprecated "apt_https_proxy" config field. + Please migrate to using "global_apt_https_proxy" + + Setting global APT proxy + Successfully processed your ua configuration. + """ + When I run `cat /etc/apt/apt.conf.d/90ubuntu-advantage-aptproxy` with sudo + Then stdout matches regexp: + """ + /\* + \* Autogenerated by ubuntu-advantage-tools + \* Do not edit this file directly + \* + \* To change what ubuntu-advantage-tools sets, use the `ua config set` + \* or the `ua config unset` commands to set/unset either: + \* global_apt_http_proxy and global_apt_https_proxy + \* for a global apt proxy + \* or + \* ua_apt_http_proxy and ua_apt_https_proxy + \* for an apt proxy that only applies to UA related repos. + \*/ + Acquire::http::Proxy \".*:3128\"; + Acquire::https::Proxy \".*:3128\"; + """ + And I verify that running `ua config set apt_https_proxy=https://localhost:12345` `with sudo` exits `1` + Then stdout matches regexp: + """ + Warning: apt_https_proxy has been renamed to global_apt_https_proxy. + """ + Then stderr matches regexp: + """ + \"https://localhost:12345\" is not working. Not setting as proxy. 
+ """ + Examples: ubuntu release + | release | + | xenial | + | bionic | + | focal | + | jammy | diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/realtime_kernel.feature ubuntu-advantage-tools-27.9~16.04.1/features/realtime_kernel.feature --- ubuntu-advantage-tools-27.8~16.04.1/features/realtime_kernel.feature 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/realtime_kernel.feature 2022-05-18 19:44:15.000000000 +0000 @@ -42,36 +42,12 @@ | xenial | 16.04 LTS | Xenial Xerus | | bionic | 18.04 LTS | Bionic Beaver | | focal | 20.04 LTS | Focal Fossa | - | jammy | 22.04 | Jammy Jellyfish | @series.jammy - @uses.config.machine_type.gcp.generic + @uses.config.machine_type.lxd.vm Scenario Outline: Enable Real-Time Kernel service Given a `` machine with ubuntu-advantage-tools installed - When I create the file `/home/ubuntu/machine-token-overlay.json` with the following: - """ - { - "machineTokenInfo": { - "contractInfo": { - "resourceEntitlements": [ - { - "type": "realtime-kernel", - "affordances": { - "series": ["jammy"] - } - } - ] - } - } - } - """ - And I append the following on uaclient config: - """ - features: - machine_token_overlay: "/home/ubuntu/machine-token-overlay.json" - """ When I attach `contract_token` with sudo - And I run `ua disable livepatch --assume-yes` with sudo Then I verify that running `ua enable realtime-kernel` `as non-root` exits `1` And I will see the following on stderr: """ @@ -89,14 +65,15 @@ The real-time kernel is a beta version of the 22.04 Ubuntu kernel with the PREEMPT_RT patchset integrated for x86_64 and ARM64. - .*You will not be able to revert to your original kernel after enabling real-time..* + .*This will change your kernel. You will need to manually configure grub to + revert back to your original kernel after enabling real-time..* Do you want to continue\? \[ default = Yes \]: \(Y/n\) Updating package lists Installing Real-Time Kernel packages Real-Time Kernel enabled A reboot is required to complete install. 
""" - When I run `apt-cache policy linux-realtime` as non-root + When I run `apt-cache policy ubuntu-realtime` as non-root Then stdout does not match regexp: """ .*Installed: \(none\) diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/schemas/ua_operation.json ubuntu-advantage-tools-27.9~16.04.1/features/schemas/ua_operation.json --- ubuntu-advantage-tools-27.8~16.04.1/features/schemas/ua_operation.json 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/schemas/ua_operation.json 2022-05-18 19:44:15.000000000 +0000 @@ -23,6 +23,9 @@ }, "service": { "type": ["null", "string"] + }, + "additional_info": { + "type": "object" } }, "patternProperties": { diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/schemas/ua_security_status.json ubuntu-advantage-tools-27.9~16.04.1/features/schemas/ua_security_status.json --- ubuntu-advantage-tools-27.8~16.04.1/features/schemas/ua_security_status.json 2022-04-01 13:27:49.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/schemas/ua_security_status.json 2022-05-18 19:44:15.000000000 +0000 @@ -31,6 +31,24 @@ "num_installed_packages": { "type": "integer" }, + "num_main_packages": { + "type": "integer" + }, + "num_multiverse_packages": { + "type": "integer" + }, + "num_restricted_packages": { + "type": "integer" + }, + "num_universe_packages": { + "type": "integer" + }, + "num_third_party_packages": { + "type": "integer" + }, + "num_unknown_packages": { + "type": "integer" + }, "num_esm_infra_packages": { "type": "integer" }, diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/staging_commands.feature ubuntu-advantage-tools-27.9~16.04.1/features/staging_commands.feature --- ubuntu-advantage-tools-27.8~16.04.1/features/staging_commands.feature 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/staging_commands.feature 2022-05-18 19:44:15.000000000 +0000 @@ -87,6 +87,6 @@ Examples: ubuntu release | release | apps-pkg | + | xenial | jq | | bionic | bundler | | focal | ant | - | xenial | jq | diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/steps/steps.py ubuntu-advantage-tools-27.9~16.04.1/features/steps/steps.py --- ubuntu-advantage-tools-27.8~16.04.1/features/steps/steps.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/steps/steps.py 2022-05-18 19:44:15.000000000 +0000 @@ -18,6 +18,7 @@ ) from features.environment import ( + build_debs_from_sbuild, capture_container_as_image, create_instance_with_uat_installed, ) @@ -34,6 +35,8 @@ IMAGE_BUILD_PREFIX = "ubuntu-behave-image-build" IMAGE_PREFIX = "ubuntu-behave-image" +ERROR_CODE = "1" + def add_test_name_suffix(context, series, prefix): pr_number = os.environ.get("UACLIENT_BEHAVE_JENKINS_CHANGE_ID") @@ -131,6 +134,56 @@ ) +@when("I have the `{series}` debs under test in `{dest}`") +def when_i_have_the_debs_under_test(context, series, dest): + if context.config.build_pr: + deb_paths = build_debs_from_sbuild(context, series) + + for deb_path in deb_paths: + tools_or_pro = "tools" if "tools" in deb_path else "pro" + dest_path = "{}/ubuntu-advantage-{}.deb".format(dest, tools_or_pro) + context.instances["uaclient"].push_file(deb_path, dest_path) + else: + if context.config.enable_proposed: + ppa_opts = "" + else: + if context.config.ppa.startswith("ppa"): + ppa = context.config.ppa + else: + # assumes format "http://domain.name/user/ppa/ubuntu" + match = re.match( + r"https?://[\w.]+/([^/]+/[^/]+)", context.config.ppa + ) + if not match: + raise AssertionError( + "ppa is in 
unsupported format: {}".format( + context.config.ppa + ) + ) + ppa = "ppa:{}".format(match.group(1)) + ppa_opts = "--distro ppa --ppa {}".format(ppa) + download_cmd = "pull-lp-debs {} ubuntu-advantage-tools {}".format( + ppa_opts, series + ) + when_i_run_command( + context, "apt-get install -y ubuntu-dev-tools", "with sudo" + ) + when_i_run_command(context, download_cmd, "with sudo") + logging.info("Download command `{}`".format(download_cmd)) + logging.info("stdout: {}".format(context.process.stdout)) + logging.info("stderr: {}".format(context.process.stderr)) + when_i_run_shell_command( + context, + "cp ubuntu-advantage-tools*.deb ubuntu-advantage-tools.deb", + "with sudo", + ) + when_i_run_shell_command( + context, + "cp ubuntu-advantage-pro*.deb ubuntu-advantage-pro.deb", + "with sudo", + ) + + @when( "I launch a `{series}` `{instance_name}` machine with ingress ports `{ports}`" # noqa ) @@ -309,6 +362,11 @@ context.process = process +@when("I run shell command `{command}` {user_spec}") +def when_i_run_shell_command(context, command, user_spec): + when_i_run_command(context, 'sh -c "{}"'.format(command), user_spec) + + @when("I fix `{issue}` by attaching to a subscription with `{token_type}`") def when_i_fix_a_issue_by_attaching(context, issue, token_type): token = getattr(context.config, token_type) @@ -477,6 +535,9 @@ @when("I replace `{original}` in `{filename}` with `{new}`") def when_i_replace_string_in_file(context, original, filename, new): + new = new.replace("\\", r"\\") + new = new.replace("/", r"\/") + new = new.replace("&", r"\&") when_i_run_command( context, "sed -i 's/{original}/{new}/' {filename}".format( @@ -522,7 +583,10 @@ @then("{stream} matches regexp") def then_stream_matches_regexp(context, stream): content = getattr(context.process, stream).strip() - assert_that(content, matches_regexp(context.text)) + text = context.text + if "" in text and "proxy" in context.instances: + text = text.replace("", context.instances["proxy"].ip) + assert_that(content, matches_regexp(text)) @then("{stream} contains substring") @@ -590,6 +654,19 @@ @when( + "I verify that running attach `{spec}` using expired token with json response fails" # noqa +) +def when_i_verify_attach_expired_token_with_json_response(context, spec): + change_contract_endpoint_to_staging(context, user_spec="with sudo") + cmd = "ua attach {} --format json".format( + context.config.contract_token_staging_expired + ) + then_i_verify_that_running_cmd_with_spec_exits_with_codes( + context=context, cmd_name=cmd, spec=spec, exit_codes=ERROR_CODE + ) + + +@when( "I verify that running `{cmd_name}` `{spec}` and stdin `{stdin}` exits `{exit_codes}`" # noqa ) def then_i_verify_that_running_cmd_with_spec_and_stdin_exits_with_codes( @@ -882,6 +959,109 @@ jsonschema.validate(instance=instance, schema=json.load(schema_file)) +@then("`{file_name}` is not present in any docker image layer") +def file_is_not_present_in_any_docker_image_layer(context, file_name): + when_i_run_command( + context, + "find /var/lib/docker/overlay2 -name {}".format(file_name), + "with sudo", + ) + results = context.process.stdout.strip() + if results: + raise AssertionError( + 'found "{}"'.format(", ".join(results.split("\n"))) + ) + + +# This defines "not significantly larger" as "less than 2MB larger" +@then( + "docker image `{name}` is not significantly larger than `ubuntu:{series}` with `{package}` installed" # noqa: E501 +) +def docker_image_is_not_larger(context, name, series, package): + base_image_name = "ubuntu:{}".format(series) + 
base_upgraded_image_name = "{}-with-test-package".format(series) + + # We need to compare against the base image after apt upgrade + # and package install + dockerfile = """\ + FROM {} + RUN apt-get update \\ + && apt-get install -y {} \\ + && rm -rf /var/lib/apt/lists/* + """.format( + base_image_name, package + ) + context.text = dockerfile + when_i_create_file_with_content(context, "Dockerfile.base") + when_i_run_command( + context, + "docker build . -f Dockerfile.base -t {}".format( + base_upgraded_image_name + ), + "with sudo", + ) + + # find image sizes + when_i_run_shell_command( + context, "docker inspect {} | jq .[0].Size".format(name), "with sudo" + ) + custom_image_size = int(context.process.stdout.strip()) + when_i_run_shell_command( + context, + "docker inspect {} | jq .[0].Size".format(base_upgraded_image_name), + "with sudo", + ) + base_image_size = int(context.process.stdout.strip()) + + # Get ua test deb size + when_i_run_command(context, "du ubuntu-advantage-tools.deb", "with sudo") + # Example out: "1234\tubuntu-advantage-tools.deb" + ua_test_deb_size = ( + int(context.process.stdout.strip().split("\t")[0]) * 1024 + ) # KB -> B + + # Give us some space for bloat we don't control: 2MB -> B + extra_space = 2 * 1024 * 1024 + + if custom_image_size > (base_image_size + ua_test_deb_size + extra_space): + raise AssertionError( + "Custom image size ({}) is over 2MB greater than the base image" + " size ({}) + ua test deb size ({})".format( + custom_image_size, base_image_size, ua_test_deb_size + ) + ) + logging.debug( + "custom image size ({})\n" + "base image size ({})\n" + "ua test deb size ({})".format( + custom_image_size, base_image_size, ua_test_deb_size + ) + ) + + +@then( + "on `{release}`, systemd status output says memory usage is less than `{mb_limit}` MB" # noqa +) +def systemd_memory_usage_less_than(context, release, mb_limit): + curr_release = context.active_outline["release"] + if release != curr_release: + logging.debug("Skipping for {}".format(curr_release)) + return + match = re.search(r"Memory: (.*)M", context.process.stdout.strip()) + if match is None: + raise AssertionError( + "Memory usage not present in current process stdout" + ) + mb_used = float(match.group(1)) + logging.debug("Found {}M".format(mb_used)) + + mb_limit_float = float(mb_limit) + if mb_used > mb_limit_float: + raise AssertionError( + "Using more memory than expected ({}M)".format(mb_used) + ) + + def get_command_prefix_for_user_spec(user_spec): prefix = [] if user_spec == "with sudo": diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/ubuntu_pro_fips.feature ubuntu-advantage-tools-27.9~16.04.1/features/ubuntu_pro_fips.feature --- ubuntu-advantage-tools-27.8~16.04.1/features/ubuntu_pro_fips.feature 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/ubuntu_pro_fips.feature 2022-05-18 19:44:15.000000000 +0000 @@ -421,7 +421,8 @@ @series.focal @uses.config.machine_type.azure.pro.fips @uses.config.machine_type.aws.pro.fips - Scenario Outline: Check fips-updates can be enable in a focal PRO FIPS machine + @uses.config.machine_type.gcp.pro.fips + Scenario Outline: Check fips-updates can be enabled in a focal PRO FIPS machine Given a `` machine with ubuntu-advantage-tools installed When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: """ @@ -469,3 +470,209 @@ Examples: ubuntu release | release | | focal | + + @series.focal + @series.bionic + @uses.config.machine_type.gcp.pro.fips + Scenario Outline: Check fips is enabled correctly 
on Ubuntu pro fips GCP machine + Given a `` machine with ubuntu-advantage-tools installed + When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: + """ + contract_url: 'https://contracts.canonical.com' + data_dir: /var/lib/ubuntu-advantage + log_level: debug + log_file: /var/log/ubuntu-advantage.log + """ + And I run `ua auto-attach` with sudo + And I run `ua status --wait` as non-root + And I run `ua status` as non-root + Then stdout matches regexp: + """ + esm-apps +yes +enabled +UA Apps: Extended Security Maintenance \(ESM\) + esm-infra +yes +enabled +UA Infra: Extended Security Maintenance \(ESM\) + fips +yes +enabled +NIST-certified core packages + fips-updates +yes +disabled +NIST-certified core packages with priority security updates + livepatch +yes +n/a +Canonical Livepatch service + """ + And I verify that running `apt update` `with sudo` exits `0` + And I verify that running `grep Traceback /var/log/ubuntu-advantage.log` `with sudo` exits `1` + When I run `uname -r` as non-root + Then stdout matches regexp: + """ + + """ + When I run `apt-cache policy ubuntu-gcp-fips` as non-root + Then stdout does not match regexp: + """ + .*Installed: \(none\) + """ + When I run `cat /proc/sys/crypto/fips_enabled` with sudo + Then I will see the following on stdout: + """ + 1 + """ + When I run `systemctl start ua-auto-attach.service` with sudo + And I verify that running `systemctl status ua-auto-attach.service` `as non-root` exits `0,3` + Then stdout matches regexp: + """ + .*status=0\/SUCCESS.* + """ + And stdout matches regexp: + """ + Skipping attach: Instance '[0-9a-z\-]+' is already attached. + """ + When I run `ua auto-attach` with sudo + Then stderr matches regexp: + """ + Skipping attach: Instance '[0-9a-z\-]+' is already attached. 
+ """ + When I run `apt-cache policy` with sudo + Then apt-cache policy for the following url has permission `500` + """ + https://esm.ubuntu.com/infra/ubuntu -infra-updates/main amd64 Packages + """ + And apt-cache policy for the following url has permission `500` + """ + https://esm.ubuntu.com/infra/ubuntu -infra-security/main amd64 Packages + """ + And apt-cache policy for the following url has permission `500` + """ + https://esm.ubuntu.com/apps/ubuntu -apps-updates/main amd64 Packages + """ + And apt-cache policy for the following url has permission `500` + """ + https://esm.ubuntu.com/apps/ubuntu -apps-security/main amd64 Packages + """ + And apt-cache policy for the following url has permission `1001` + """ + amd64 Packages + """ + And I verify that running `apt update` `with sudo` exits `0` + When I run `apt install -y /-infra-security` with sudo, retrying exit [100] + And I run `apt-cache policy ` as non-root + Then stdout matches regexp: + """ + \s*500 https://esm.ubuntu.com/infra/ubuntu -infra-security/main amd64 Packages + \s*500 https://esm.ubuntu.com/infra/ubuntu -infra-updates/main amd64 Packages + """ + And stdout matches regexp: + """ + Installed: .*[~+]esm + """ + When I run `apt install -y /-apps-security` with sudo, retrying exit [100] + And I run `apt-cache policy ` as non-root + Then stdout matches regexp: + """ + Version table: + \s*\*\*\* .* 500 + \s*500 https://esm.ubuntu.com/apps/ubuntu -apps-security/main amd64 Packages + """ + When I run `ua enable fips-updates --assume-yes` with sudo + Then I will see the following on stdout: + """ + One moment, checking your subscription first + Disabling incompatible service: FIPS + Updating package lists + Installing FIPS Updates packages + FIPS Updates enabled + A reboot is required to complete install. 
+ """ + When I run `ua status` with sudo + Then stdout matches regexp: + """ + fips +yes +n/a +NIST-certified core packages + fips-updates +yes +enabled +NIST-certified core packages with priority security updates + """ + When I reboot the `` machine + And I run `uname -r` as non-root + Then stdout matches regexp: + """ + + """ + When I run `apt-cache policy ubuntu-gcp-fips` as non-root + Then stdout does not match regexp: + """ + .*Installed: \(none\) + """ + When I run `cat /proc/sys/crypto/fips_enabled` with sudo + Then I will see the following on stdout: + """ + 1 + """ + + Examples: ubuntu release + | release | infra-pkg | apps-pkg | fips-apt-source | fips-kernel-version | + | bionic | libkrad0 | bundler | https://esm.ubuntu.com/fips/ubuntu bionic/main | gcp-fips | + | focal | hello | 389-ds | https://esm.ubuntu.com/fips/ubuntu focal/main | gcp-fips | + + @series.focal + @uses.config.machine_type.gcp.pro.fips + Scenario Outline: Check fips packages are correctly installed on GCP Pro Focal machine + Given a `` machine with ubuntu-advantage-tools installed + When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: + """ + contract_url: 'https://contracts.canonical.com' + data_dir: /var/lib/ubuntu-advantage + log_level: debug + log_file: /var/log/ubuntu-advantage.log + features: + allow_xenial_fips_on_cloud: true + """ + And I run `ua auto-attach` with sudo + And I run `ua status --wait` as non-root + And I run `ua status` as non-root + Then stdout matches regexp: + """ + esm-apps +yes +enabled +UA Apps: Extended Security Maintenance \(ESM\) + esm-infra +yes +enabled +UA Infra: Extended Security Maintenance \(ESM\) + fips +yes +enabled +NIST-certified core packages + fips-updates +yes +disabled +NIST-certified core packages with priority security updates + livepatch +yes +n/a +Canonical Livepatch service + """ + And I verify that running `apt update` `with sudo` exits `0` + And I verify that running `grep Traceback /var/log/ubuntu-advantage.log` `with sudo` exits `1` + And I verify that `openssh-server` is installed from apt source `` + And I verify that `openssh-client` is installed from apt source `` + And I verify that `strongswan` is installed from apt source `` + And I verify that `strongswan-hmac` is installed from apt source `` + + Examples: ubuntu release + | release | fips-apt-source | + | focal | https://esm.ubuntu.com/fips/ubuntu focal/main | + + @series.bionic + @uses.config.machine_type.gcp.pro.fips + Scenario Outline: Check fips packages are correctly installed on GCP Pro Bionic machines + Given a `` machine with ubuntu-advantage-tools installed + When I create the file `/etc/ubuntu-advantage/uaclient.conf` with the following: + """ + contract_url: 'https://contracts.canonical.com' + data_dir: /var/lib/ubuntu-advantage + log_level: debug + log_file: /var/log/ubuntu-advantage.log + features: + allow_xenial_fips_on_cloud: true + """ + And I run `ua auto-attach` with sudo + And I run `ua status --wait` as non-root + And I run `ua status` as non-root + Then stdout matches regexp: + """ + esm-apps +yes +enabled +UA Apps: Extended Security Maintenance \(ESM\) + esm-infra +yes +enabled +UA Infra: Extended Security Maintenance \(ESM\) + fips +yes +enabled +NIST-certified core packages + fips-updates +yes +disabled +NIST-certified core packages with priority security updates + livepatch +yes +n/a +Canonical Livepatch service + """ + And I verify that running `apt update` `with sudo` exits `0` + And I verify that running `grep Traceback 
/var/log/ubuntu-advantage.log` `with sudo` exits `1` + And I verify that `openssh-server` is installed from apt source `` + And I verify that `openssh-client` is installed from apt source `` + And I verify that `strongswan` is installed from apt source `` + And I verify that `openssh-server-hmac` is installed from apt source `` + And I verify that `openssh-client-hmac` is installed from apt source `` + And I verify that `strongswan-hmac` is installed from apt source `` + + Examples: ubuntu release + | release | fips-apt-source | + | bionic | https://esm.ubuntu.com/fips/ubuntu bionic/main | diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/ubuntu_upgrade.feature ubuntu-advantage-tools-27.9~16.04.1/features/ubuntu_upgrade.feature --- ubuntu-advantage-tools-27.8~16.04.1/features/ubuntu_upgrade.feature 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/ubuntu_upgrade.feature 2022-05-18 19:44:15.000000000 +0000 @@ -2,13 +2,17 @@ Feature: Upgrade between releases when uaclient is attached @slow - @series.focal - @series.impish + @series.all @uses.config.machine_type.lxd.container @upgrade - Scenario Outline: Attached upgrade across releases + Scenario Outline: Attached upgrade Given a `` machine with ubuntu-advantage-tools installed When I attach `contract_token` with sudo + And I run `` with sudo + # update-manager-core requires ua < 28. Our tests that build the package will + # generate ua with version 28. We are removing that package here to make sure + # do-release-upgrade will be able to run + And I run `apt remove update-manager-core -y` with sudo And I run `apt-get dist-upgrade --assume-yes` with sudo # Some packages upgrade may require a reboot And I reboot the `` machine @@ -17,6 +21,7 @@ [Sources] AllowThirdParty=yes """ + And I run `sed -i 's/Prompt=lts/Prompt=/' /etc/update-manager/release-upgrades` with sudo And I run `do-release-upgrade --frontend DistUpgradeViewNonInteractive` `with sudo` and stdin `y\n` And I reboot the `` machine And I run `lsb_release -cs` as non-root @@ -28,65 +33,27 @@ And I will see the following on stdout: """ """ + When I run `ua refresh` with sudo When I run `ua status` with sudo Then stdout matches regexp: """ - esm-infra yes n/a + +yes + """ When I run `ua detach --assume-yes` with sudo Then stdout matches regexp: - """ - This machine is now detached. - """ - - Examples: ubuntu release - | release | next_release | devel_release | - | focal | jammy | --devel-release | - | impish | jammy | --devel-release | - - @slow - @series.xenial - @series.bionic - @uses.config.machine_type.lxd.container - @upgrade - Scenario Outline: Attached upgrade across LTS releases - Given a `` machine with ubuntu-advantage-tools installed - When I attach `contract_token` with sudo - And I run `apt-get dist-upgrade --assume-yes` with sudo - # Some packages upgrade may require a reboot - And I reboot the `` machine - And I create the file `/etc/update-manager/release-upgrades.d/ua-test.cfg` with the following """ - [Sources] - AllowThirdParty=yes + This machine is now detached. 
""" - Then I verify that running `do-release-upgrade --frontend DistUpgradeViewNonInteractive` `with sudo` exits `0` - When I reboot the `` machine - And I run `lsb_release -cs` as non-root - Then I will see the following on stdout: - """ - - """ - And I verify that running `egrep "|disabled" /etc/apt/sources.list.d/*` `as non-root` exits `2` - And I will see the following on stdout: - """ - """ - When I run `ua status` with sudo - Then stdout matches regexp: - """ - esm-infra yes enabled - """ - When I run `ua disable esm-infra` with sudo - And I run `ua status` with sudo - Then stdout matches regexp: - """ - esm-infra +yes +disabled +UA Infra: Extended Security Maintenance \(ESM\) - """ Examples: ubuntu release - | release | next_release | - | xenial | bionic | - | bionic | focal | + | release | next_release | prompt | devel_release | service | service_status | before_cmd | + | xenial | bionic | lts | | esm-infra | enabled | true | + | bionic | focal | lts | | esm-infra | enabled | true | + | bionic | focal | lts | | usg | enabled | ua enable cis | + | focal | impish | normal | | esm-infra | n/a | true | + | focal | jammy | lts | --devel-release | esm-infra | enabled | true | + | impish | jammy | lts | | esm-infra | disabled | true | + | jammy | kinetic | normal | --devel-release | esm-infra | n/a | true | @slow @series.xenial @@ -99,17 +66,17 @@ And I run `ua disable livepatch` with sudo And I run `ua enable --assume-yes` with sudo Then stdout matches regexp: - """ - Updating package lists - Installing packages - enabled - A reboot is required to complete install - """ + """ + Updating package lists + Installing packages + enabled + A reboot is required to complete install + """ When I run `ua status --all` with sudo Then stdout matches regexp: - """ - +yes enabled - """ + """ + +yes enabled + """ And I verify that running `apt update` `with sudo` exits `0` When I reboot the `` machine And I run `uname -r` as non-root @@ -148,9 +115,9 @@ """ When I run `uname -r` as non-root Then stdout matches regexp: - """ - fips - """ + """ + fips + """ When I run `cat /proc/sys/crypto/fips_enabled` with sudo Then I will see the following on stdout: """ @@ -161,44 +128,3 @@ | release | next_release | fips-service | fips-name | source-file | | xenial | bionic | fips | FIPS | ubuntu-fips | | xenial | bionic | fips-updates | FIPS Updates | ubuntu-fips-updates | - - @slow - @series.bionic - @uses.config.machine_type.lxd.container - @upgrade - Scenario Outline: Attached upgrade with cis enabled across LTS releases - Given a `` machine with ubuntu-advantage-tools installed - When I attach `contract_token` with sudo - And I run `ua enable cis` with sudo - # update-manager-core requires ua < 28. Our tests that build the package will - # generate ua with version 28. 
We are removing that package here to make sure - # do-release-upgrade will be able to run - And I run `apt remove update-manager-core -y` with sudo - And I run `apt-get dist-upgrade --assume-yes` with sudo - # Some packages upgrade may require a reboot - And I reboot the `` machine - And I create the file `/etc/update-manager/release-upgrades.d/ua-test.cfg` with the following - """ - [Sources] - AllowThirdParty=yes - """ - Then I verify that running `do-release-upgrade --frontend DistUpgradeViewNonInteractive` `with sudo` exits `0` - When I reboot the `` machine - And I run `lsb_release -cs` as non-root - Then I will see the following on stdout: - """ - - """ - And I verify that running `egrep "|disabled" /etc/apt/sources.list.d/*` `as non-root` exits `2` - And I will see the following on stdout: - """ - """ - When I run `ua status` with sudo - Then stdout matches regexp: - """ - usg +yes +enabled - """ - - Examples: ubuntu release - | release | next_release | - | bionic | focal | diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/ubuntu_upgrade_unattached.feature ubuntu-advantage-tools-27.9~16.04.1/features/ubuntu_upgrade_unattached.feature --- ubuntu-advantage-tools-27.8~16.04.1/features/ubuntu_upgrade_unattached.feature 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/ubuntu_upgrade_unattached.feature 2022-05-18 19:44:15.000000000 +0000 @@ -2,54 +2,10 @@ Feature: Upgrade between releases when uaclient is unattached @slow - @series.focal - @series.impish + @series.all @uses.config.machine_type.lxd.container @upgrade - Scenario Outline: Unattached upgrade across releases - Given a `` machine with ubuntu-advantage-tools installed - When I run `apt-get dist-upgrade --assume-yes` with sudo - # Some packages upgrade may require a reboot - And I reboot the `` machine - And I create the file `/etc/update-manager/release-upgrades.d/ua-test.cfg` with the following - """ - [Sources] - AllowThirdParty=yes - """ - And I run `sed -i 's/Prompt=lts/Prompt=normal/' /etc/update-manager/release-upgrades` with sudo - And I run `do-release-upgrade --frontend DistUpgradeViewNonInteractive` `with sudo` and stdin `y\n` - And I reboot the `` machine - And I run `lsb_release -cs` as non-root - Then I will see the following on stdout: - """ - - """ - And I verify that running `egrep "|disabled" /etc/apt/sources.list.d/*` `as non-root` exits `2` - And I will see the following on stdout: - """ - """ - When I run `ua status` with sudo - Then stdout matches regexp: - """ - esm-infra no +UA Infra: Extended Security Maintenance \(ESM\) - """ - When I attach `contract_token` with sudo - Then stdout matches regexp: - """ - esm-infra yes +n/a - """ - - Examples: ubuntu release - | release | next_release | devel_release | - | focal | impish | | - | impish | jammy | --devel-release | - - @slow - @series.xenial - @series.bionic - @uses.config.machine_type.lxd.container - @upgrade - Scenario Outline: Unattached upgrade across LTS releases + Scenario Outline: Unattached upgrade Given a `` machine with ubuntu-advantage-tools installed # update-manager-core requires ua < 28. Our tests that build the package will # generate ua with version 28. 
We are removing that package here to make sure @@ -63,8 +19,9 @@ [Sources] AllowThirdParty=yes """ - Then I verify that running `do-release-upgrade --frontend DistUpgradeViewNonInteractive` `with sudo` exits `0` - When I reboot the `` machine + And I run `sed -i 's/Prompt=lts/Prompt=/' /etc/update-manager/release-upgrades` with sudo + And I run `do-release-upgrade --frontend DistUpgradeViewNonInteractive` `with sudo` and stdin `y\n` + And I reboot the `` machine And I run `lsb_release -cs` as non-root Then I will see the following on stdout: """ @@ -74,24 +31,17 @@ And I will see the following on stdout: """ """ - When I run `ua status` with sudo - Then stdout matches regexp: - """ - esm-infra yes +UA Infra: Extended Security Maintenance \(ESM\) - """ When I attach `contract_token` with sudo Then stdout matches regexp: """ - esm-infra yes +enabled - """ - When I run `ua disable esm-infra` with sudo - And I run `ua status` with sudo - Then stdout matches regexp: - """ - esm-infra yes +disabled + esm-infra +yes + """ Examples: ubuntu release - | release | next_release | - | xenial | bionic | - | bionic | focal | + | release | next_release | prompt | devel_release | service_status | + | xenial | bionic | lts | | enabled | + | bionic | focal | lts | | enabled | + | focal | impish | normal | | n/a | + | focal | jammy | lts | --devel-release | enabled | + | impish | jammy | lts | | enabled | + | jammy | kinetic | normal | --devel-release | n/a | diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/unattached_commands.feature ubuntu-advantage-tools-27.9~16.04.1/features/unattached_commands.feature --- ubuntu-advantage-tools-27.8~16.04.1/features/unattached_commands.feature 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/unattached_commands.feature 2022-05-18 19:44:15.000000000 +0000 @@ -137,46 +137,6 @@ @series.all @uses.config.machine_type.lxd.container - Scenario Outline: Unattached command known and unknown services in a ubuntu machine - Given a `` machine with ubuntu-advantage-tools installed - When I verify that running `ua ` `as non-root` exits `1` - Then I will see the following on stderr: - """ - This command must be run as root (try using sudo). 
- """ - When I verify that running `ua ` `with sudo` exits `1` - Then stderr matches regexp: - """ - To use '' you need an Ubuntu Advantage subscription - Personal and community subscriptions are available at no charge - See https://ubuntu.com/advantage - """ - - Examples: ua commands - | release | command | service | - | bionic | enable | livepatch | - | bionic | disable | livepatch | - | bionic | enable | unknown | - | bionic | disable | unknown | - | focal | enable | livepatch | - | focal | disable | livepatch | - | focal | enable | unknown | - | focal | disable | unknown | - | xenial | enable | livepatch | - | xenial | disable | livepatch | - | xenial | enable | unknown | - | xenial | disable | unknown | - | impish | enable | livepatch | - | impish | disable | livepatch | - | impish | enable | unknown | - | impish | disable | unknown | - | jammy | enable | livepatch | - | jammy | disable | livepatch | - | jammy | enable | unknown | - | jammy | disable | unknown | - - @series.all - @uses.config.machine_type.lxd.container Scenario Outline: Help command on an unattached machine Given a `` machine with ubuntu-advantage-tools installed When I run `ua help esm-infra` as non-root @@ -214,317 +174,11 @@ | focal | yes | | xenial | yes | | impish | no | - | jammy | no | - - - @series.all - @uses.config.machine_type.lxd.container - Scenario Outline: Useful SSL failure message when there aren't any ca-certs - Given a `` machine with ubuntu-advantage-tools installed - When I run `apt remove ca-certificates -y` with sudo - When I verify that running `ua fix CVE-1800-123456` `as non-root` exits `1` - Then stderr matches regexp: - """ - Failed to access URL: https://.* - Cannot verify certificate of server - Please install "ca-certificates" and try again. - """ - When I run `apt install ca-certificates -y` with sudo - When I run `mv /etc/ssl/certs /etc/ssl/wronglocation` with sudo - When I verify that running `ua fix CVE-1800-123456` `as non-root` exits `1` - Then stderr matches regexp: - """ - Failed to access URL: https://.* - Cannot verify certificate of server - Please check your openssl configuration. - """ - Examples: ubuntu release - | release | - | xenial | - | bionic | - | focal | - | impish | - | jammy | - - @series.focal - @uses.config.machine_type.lxd.container - Scenario Outline: Fix command on an unattached machine - Given a `` machine with ubuntu-advantage-tools installed - When I verify that running `ua fix CVE-1800-123456` `as non-root` exits `1` - Then I will see the following on stderr: - """ - Error: CVE-1800-123456 not found. - """ - When I verify that running `ua fix USN-12345-12` `as non-root` exits `1` - Then I will see the following on stderr: - """ - Error: USN-12345-12 not found. - """ - When I verify that running `ua fix CVE-12345678-12` `as non-root` exits `1` - Then I will see the following on stderr: - """ - Error: issue "CVE-12345678-12" is not recognized. - Usage: "ua fix CVE-yyyy-nnnn" or "ua fix USN-nnnn" - """ - When I verify that running `ua fix USN-12345678-12` `as non-root` exits `1` - Then I will see the following on stderr: - """ - Error: issue "USN-12345678-12" is not recognized. 
- Usage: "ua fix CVE-yyyy-nnnn" or "ua fix USN-nnnn" - """ - When I run `apt install -y libawl-php=0.60-1 --allow-downgrades` with sudo - And I run `ua fix USN-4539-1` with sudo - Then stdout matches regexp: - """ - USN-4539-1: AWL vulnerability - Found CVEs: - https://ubuntu.com/security/CVE-2020-11728 - 1 affected source package is installed: awl - \(1/1\) awl: - A fix is available in Ubuntu standard updates. - .*\{ apt update && apt install --only-upgrade -y libawl-php \}.* - .*✔.* USN-4539-1 is resolved. - """ - When I run `ua fix CVE-2020-28196` as non-root - Then stdout matches regexp: - """ - CVE-2020-28196: Kerberos vulnerability - https://ubuntu.com/security/CVE-2020-28196 - 1 affected source package is installed: krb5 - \(1/1\) krb5: - A fix is available in Ubuntu standard updates. - The update is already installed. - .*✔.* CVE-2020-28196 is resolved. - """ - - Examples: ubuntu release details - | release | - | focal | - - @series.xenial - @uses.config.contract_token - @uses.config.machine_type.lxd.container - Scenario Outline: Fix command on an unattached machine - Given a `` machine with ubuntu-advantage-tools installed - When I run `apt install -y libawl-php` with sudo - And I reboot the `` machine - And I verify that running `ua fix USN-4539-1` `as non-root` exits `1` - Then stdout matches regexp: - """ - USN-4539-1: AWL vulnerability - Found CVEs: - https://ubuntu.com/security/CVE-2020-11728 - 1 affected source package is installed: awl - \(1/1\) awl: - Sorry, no fix is available. - 1 package is still affected: awl - .*✘.* USN-4539-1 is not resolved. - """ - When I run `ua fix CVE-2020-15180` as non-root - Then stdout matches regexp: - """ - CVE-2020-15180: MariaDB vulnerabilities - https://ubuntu.com/security/CVE-2020-15180 - No affected source packages are installed. - .*✔.* CVE-2020-15180 does not affect your system. - """ - When I run `ua fix CVE-2020-28196` as non-root - Then stdout matches regexp: - """ - CVE-2020-28196: Kerberos vulnerability - https://ubuntu.com/security/CVE-2020-28196 - 1 affected source package is installed: krb5 - \(1/1\) krb5: - A fix is available in Ubuntu standard updates. - The update is already installed. - .*✔.* CVE-2020-28196 is resolved. - """ - When I run `DEBIAN_FRONTEND=noninteractive apt-get install -y expat=2.1.0-7 swish-e matanza ghostscript` with sudo - And I verify that running `ua fix CVE-2017-9233` `with sudo` exits `1` - Then stdout matches regexp: - """ - CVE-2017-9233: Expat vulnerability - https://ubuntu.com/security/CVE-2017-9233 - 3 affected source packages are installed: expat, matanza, swish-e - \(1/3, 2/3\) matanza, swish-e: - Sorry, no fix is available. - \(3/3\) expat: - A fix is available in Ubuntu standard updates. - .*\{ apt update && apt install --only-upgrade -y expat \}.* - 2 packages are still affected: matanza, swish-e - .*✘.* CVE-2017-9233 is not resolved. - """ - When I fix `USN-5079-2` by attaching to a subscription with `contract_token_staging_expired` - Then stdout matches regexp - """ - USN-5079-2: curl vulnerabilities - Found CVEs: - https://ubuntu.com/security/CVE-2021-22946 - https://ubuntu.com/security/CVE-2021-22947 - 1 affected source package is installed: curl - \(1/1\) curl: - A fix is available in UA Infra. - The update is not installed because this system is not attached to a - subscription. 
- - Choose: \[S\]ubscribe at ubuntu.com \[A\]ttach existing token \[C\]ancel - > Enter your token \(from https://ubuntu.com/advantage\) to attach this system: - > .*\{ ua attach .*\}.* - Attach denied: - Contract ".*" expired on .* - Visit https://ubuntu.com/advantage to manage contract tokens. - 1 package is still affected: curl - .*✘.* USN-5079-2 is not resolved. - """ - When I fix `USN-5079-2` by attaching to a subscription with `contract_token` - Then stdout matches regexp: - """ - USN-5079-2: curl vulnerabilities - Found CVEs: - https://ubuntu.com/security/CVE-2021-22946 - https://ubuntu.com/security/CVE-2021-22947 - 1 affected source package is installed: curl - \(1/1\) curl: - A fix is available in UA Infra. - The update is not installed because this system is not attached to a - subscription. - - Choose: \[S\]ubscribe at ubuntu.com \[A\]ttach existing token \[C\]ancel - > Enter your token \(from https://ubuntu.com/advantage\) to attach this system: - > .*\{ ua attach .*\}.* - Updating package lists - UA Apps: ESM enabled - Updating package lists - UA Infra: ESM enabled - """ - And stdout matches regexp: - """ - .*\{ apt update && apt install --only-upgrade -y curl libcurl3-gnutls \}.* - .*✔.* USN-5079-2 is resolved. - """ - When I verify that running `ua fix USN-5051-2` `with sudo` exits `2` - Then stdout matches regexp: - """ - USN-5051-2: OpenSSL vulnerability - Found CVEs: - https://ubuntu.com/security/CVE-2021-3712 - 1 affected source package is installed: openssl - \(1/1\) openssl: - A fix is available in UA Infra. - .*\{ apt update && apt install --only-upgrade -y libssl1.0.0 openssl \}.* - A reboot is required to complete fix operation. - .*✘.* USN-5051-2 is not resolved. - """ - - Examples: ubuntu release details - | release | - | xenial | - - @series.bionic - @uses.config.machine_type.lxd.container - Scenario: Fix command on an unattached machine - Given a `bionic` machine with ubuntu-advantage-tools installed - When I verify that running `ua fix CVE-1800-123456` `as non-root` exits `1` - Then I will see the following on stderr: - """ - Error: CVE-1800-123456 not found. - """ - When I verify that running `ua fix USN-12345-12` `as non-root` exits `1` - Then I will see the following on stderr: - """ - Error: USN-12345-12 not found. - """ - When I verify that running `ua fix CVE-12345678-12` `as non-root` exits `1` - Then I will see the following on stderr: - """ - Error: issue "CVE-12345678-12" is not recognized. - Usage: "ua fix CVE-yyyy-nnnn" or "ua fix USN-nnnn" - """ - When I verify that running `ua fix USN-12345678-12` `as non-root` exits `1` - Then I will see the following on stderr: - """ - Error: issue "USN-12345678-12" is not recognized. - Usage: "ua fix CVE-yyyy-nnnn" or "ua fix USN-nnnn" - """ - When I run `apt install -y libawl-php` with sudo - And I verify that running `ua fix USN-4539-1` `as non-root` exits `1` - Then stdout matches regexp: - """ - USN-4539-1: AWL vulnerability - Found CVEs: - https://ubuntu.com/security/CVE-2020-11728 - 1 affected source package is installed: awl - \(1/1\) awl: - Ubuntu security engineers are investigating this issue. - 1 package is still affected: awl - .*✘.* USN-4539-1 is not resolved. - """ - When I run `ua fix CVE-2020-28196` as non-root - Then stdout matches regexp: - """ - CVE-2020-28196: Kerberos vulnerability - https://ubuntu.com/security/CVE-2020-28196 - 1 affected source package is installed: krb5 - \(1/1\) krb5: - A fix is available in Ubuntu standard updates. - The update is already installed. 
- .*✔.* CVE-2020-28196 is resolved. - """ - When I run `apt-get install xterm=330-1ubuntu2 -y` with sudo - And I verify that running `ua fix CVE-2021-27135` `as non-root` exits `1` - Then stdout matches regexp: - """ - CVE-2021-27135: xterm vulnerability - https://ubuntu.com/security/CVE-2021-27135 - 1 affected source package is installed: xterm - \(1/1\) xterm: - A fix is available in Ubuntu standard updates. - Package fixes cannot be installed. - To install them, run this command as root \(try using sudo\) - 1 package is still affected: xterm - .*✘.* CVE-2021-27135 is not resolved. - """ - When I run `ua fix CVE-2021-27135` with sudo - Then stdout matches regexp: - """ - CVE-2021-27135: xterm vulnerability - https://ubuntu.com/security/CVE-2021-27135 - 1 affected source package is installed: xterm - \(1/1\) xterm: - A fix is available in Ubuntu standard updates. - .*\{ apt update && apt install --only-upgrade -y xterm \}.* - .*✔.* CVE-2021-27135 is resolved. - """ - When I run `ua fix CVE-2021-27135` with sudo - Then stdout matches regexp: - """ - CVE-2021-27135: xterm vulnerability - https://ubuntu.com/security/CVE-2021-27135 - 1 affected source package is installed: xterm - \(1/1\) xterm: - A fix is available in Ubuntu standard updates. - The update is already installed. - .*✔.* CVE-2021-27135 is resolved. - """ - When I run `apt-get install libbz2-1.0=1.0.6-8.1 -y --allow-downgrades` with sudo - And I run `apt-get install bzip2=1.0.6-8.1 -y` with sudo - And I run `ua fix USN-4038-3` with sudo - Then stdout matches regexp: - """ - USN-4038-3: bzip2 regression - Found Launchpad bugs: - https://launchpad.net/bugs/1834494 - 1 affected source package is installed: bzip2 - \(1/1\) bzip2: - A fix is available in Ubuntu standard updates. - .*\{ apt update && apt install --only-upgrade -y bzip2 libbz2-1.0 \}.* - .*✔.* USN-4038-3 is resolved. - """ - + | jammy | yes | @series.all @uses.config.machine_type.lxd.container - Scenario Outline: Run collect-logs on an attached machine + Scenario Outline: Run collect-logs on an unattached machine Given a `` machine with ubuntu-advantage-tools installed When I run `python3 /usr/lib/ubuntu-advantage/timer.py` with sudo And I verify that running `ua collect-logs` `as non-root` exits `1` @@ -536,7 +190,7 @@ Then I verify that files exist matching `ua_logs.tar.gz` When I run `tar zxf ua_logs.tar.gz` as non-root Then I verify that files exist matching `logs/` - When I run `ls -1 logs/` as non-root + When I run `sh -c "ls -1 logs/ | sort -d"` as non-root Then stdout matches regexp: """ build.info @@ -547,16 +201,14 @@ systemd-timers.txt ua-auto-attach.path.txt-error ua-auto-attach.service.txt-error - ua-license-check.path.txt - ua-license-check.service.txt - ua-license-check.timer.txt + uaclient.conf ua-reboot-cmds.service.txt ua-status.json ua-timer.service.txt ua-timer.timer.txt - uaclient.conf - ubuntu-advantage-timer.log ubuntu-advantage.log + ubuntu-advantage.service.txt + ubuntu-advantage-timer.log """ Examples: ubuntu release | release | @@ -567,26 +219,73 @@ @series.all @uses.config.machine_type.lxd.container - Scenario Outline: Unattached enable fails in a ubuntu machine + Scenario Outline: Unattached enable/disable fails in a ubuntu machine Given a `` machine with ubuntu-advantage-tools installed - When I verify that running `ua enable esm-infra` `with sudo` exits `1` + When I verify that running `ua esm-infra` `as non-root` exits `1` + Then I will see the following on stderr: + """ + This command must be run as root (try using sudo). 
+ """ + When I verify that running `ua esm-infra` `with sudo` exits `1` Then I will see the following on stderr: """ To use 'esm-infra' you need an Ubuntu Advantage subscription Personal and community subscriptions are available at no charge See https://ubuntu.com/advantage """ - When I verify that running `ua enable esm-infra --format json --assume-yes` `with sudo` exits `1` + When I verify that running `ua esm-infra --format json --assume-yes` `with sudo` exits `1` + Then stdout is a json matching the `ua_operation` schema + And I will see the following on stdout: + """ + {"_schema_version": "0.1", "errors": [{"message": "To use 'esm-infra' you need an Ubuntu Advantage subscription\nPersonal and community subscriptions are available at no charge\nSee https://ubuntu.com/advantage", "message_code": "valid-service-failure-unattached", "service": null, "type": "system"}], "failed_services": [], "needs_reboot": false, "processed_services": [], "result": "failure", "warnings": []} + """ + When I verify that running `ua unknown` `as non-root` exits `1` + Then I will see the following on stderr: + """ + This command must be run as root (try using sudo). + """ + When I verify that running `ua unknown` `with sudo` exits `1` + Then I will see the following on stderr: + """ + Cannot unknown service 'unknown'. + See https://ubuntu.com/advantage + """ + When I verify that running `ua unknown --format json --assume-yes` `with sudo` exits `1` Then stdout is a json matching the `ua_operation` schema And I will see the following on stdout: """ - {"_schema_version": "0.1", "errors": [{"message": "To use 'esm-infra' you need an Ubuntu Advantage subscription\nPersonal and community subscriptions are available at no charge\nSee https://ubuntu.com/advantage", "message_code": "enable-failure-unattached", "service": null, "type": "system"}], "failed_services": [], "needs_reboot": false, "processed_services": [], "result": "failure", "warnings": []} + {"_schema_version": "0.1", "errors": [{"message": "Cannot unknown service 'unknown'.\nSee https://ubuntu.com/advantage", "message_code": "invalid-service-or-failure", "service": null, "type": "system"}], "failed_services": [], "needs_reboot": false, "processed_services": [], "result": "failure", "warnings": []} + """ + When I verify that running `ua esm-infra unknown` `as non-root` exits `1` + Then I will see the following on stderr: + """ + This command must be run as root (try using sudo). + """ + When I verify that running `ua esm-infra unknown` `with sudo` exits `1` + Then I will see the following on stderr: + """ + Cannot unknown service 'unknown'. 
+ + To use 'esm-infra' you need an Ubuntu Advantage subscription + Personal and community subscriptions are available at no charge + See https://ubuntu.com/advantage + """ + When I verify that running `ua esm-infra unknown --format json --assume-yes` `with sudo` exits `1` + Then stdout is a json matching the `ua_operation` schema + And I will see the following on stdout: + """ + {"_schema_version": "0.1", "errors": [{"message": "Cannot unknown service 'unknown'.\n\nTo use 'esm-infra' you need an Ubuntu Advantage subscription\nPersonal and community subscriptions are available at no charge\nSee https://ubuntu.com/advantage", "message_code": "mixed-services-failure-unattached", "service": null, "type": "system"}], "failed_services": [], "needs_reboot": false, "processed_services": [], "result": "failure", "warnings": []} """ Examples: ubuntu release - | release | - | xenial | - | bionic | - | focal | - | impish | - | jammy | + | release | command | + | xenial | enable | + | xenial | disable | + | bionic | enable | + | bionic | disable | + | focal | enable | + | focal | disable | + | impish | enable | + | impish | disable | + | jammy | enable | + | jammy | disable | diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/unattached_status.feature ubuntu-advantage-tools-27.9~16.04.1/features/unattached_status.feature --- ubuntu-advantage-tools-27.8~16.04.1/features/unattached_status.feature 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/unattached_status.feature 2022-05-18 19:44:15.000000000 +0000 @@ -143,7 +143,8 @@ | bionic | yes | yes | cis | yes | yes | yes | yes | yes | | no | | focal | yes | no | | yes | yes | yes | no | yes | usg | no | | impish | no | no | cis | no | no | no | no | no | | no | - | jammy | no | no | cis | no | no | no | no | no | | yes | + # jammy livepatch is only no when running the container test on a pre-jammy ubuntu + | jammy | yes | no | cis | no | no | yes | no | no | | yes | @series.all @uses.config.machine_type.lxd.container @@ -208,7 +209,8 @@ | bionic | yes | yes | cis | yes | yes | yes | yes | yes | | no | | focal | yes | no | | yes | yes | yes | no | yes | usg | no | | impish | no | no | cis | no | no | no | no | no | | no | - | jammy | no | no | cis | no | no | no | no | no | | yes | + # jammy livepatch is only no when running the container test on a pre-jammy ubuntu + | jammy | yes | no | cis | no | no | yes | no | no | | yes | @series.all diff -Nru ubuntu-advantage-tools-27.8~16.04.1/features/_version.feature ubuntu-advantage-tools-27.9~16.04.1/features/_version.feature --- ubuntu-advantage-tools-27.8~16.04.1/features/_version.feature 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/features/_version.feature 2022-05-18 19:44:15.000000000 +0000 @@ -12,6 +12,7 @@ @uses.config.machine_type.azure.pro.fips @uses.config.machine_type.gcp.generic @uses.config.machine_type.gcp.pro + @uses.config.machine_type.gcp.pro.fips Scenario Outline: Check ua version Given a `` machine with ubuntu-advantage-tools installed When I run `dpkg-query --showformat='${Version}' --show ubuntu-advantage-tools` with sudo @@ -60,3 +61,4 @@ | bionic | | focal | | impish | + | jammy | diff -Nru ubuntu-advantage-tools-27.8~16.04.1/.github/workflows/check-secrets-available.yaml ubuntu-advantage-tools-27.9~16.04.1/.github/workflows/check-secrets-available.yaml --- ubuntu-advantage-tools-27.8~16.04.1/.github/workflows/check-secrets-available.yaml 1970-01-01 00:00:00.000000000 +0000 +++ 
ubuntu-advantage-tools-27.9~16.04.1/.github/workflows/check-secrets-available.yaml 2022-05-18 19:44:15.000000000 +0000
@@ -0,0 +1,25 @@
+---
+
+name: Check secrets
+
+on:
+  workflow_call:
+    secrets:
+      SECRET_TO_CHECK:
+        required: true
+    outputs:
+      has-secrets:
+        value: ${{ jobs.check-secret-via-env.outputs.has-secrets }}
+
+jobs:
+  check-secret-via-env:
+    name: Check secret
+    runs-on: ubuntu-latest
+    outputs:
+      has-secrets: ${{ steps.has-secrets-check.outputs.has-secrets }}
+    steps:
+      - id: has-secrets-check
+        env:
+          SECRET_TO_CHECK: '${{ secrets.SECRET_TO_CHECK }}'
+        if: ${{ env.SECRET_TO_CHECK != '' }}
+        run: echo "::set-output name=has-secrets::true"
diff -Nru ubuntu-advantage-tools-27.8~16.04.1/.github/workflows/ci-base.yaml ubuntu-advantage-tools-27.9~16.04.1/.github/workflows/ci-base.yaml
--- ubuntu-advantage-tools-27.8~16.04.1/.github/workflows/ci-base.yaml 1970-01-01 00:00:00.000000000 +0000
+++ ubuntu-advantage-tools-27.9~16.04.1/.github/workflows/ci-base.yaml 2022-05-18 19:44:15.000000000 +0000
@@ -0,0 +1,43 @@
+---
+
+name: CI (base)
+
+on:
+  push:
+  pull_request:
+
+defaults:
+  run:
+    shell: sh -ex {0}
+
+jobs:
+  lint-and-style:
+    name: Static Analysis
+    runs-on: ubuntu-latest
+    steps:
+      - name: Install dependencies
+        run: |
+          sudo DEBIAN_FRONTEND=noninteractive apt-get -qy update
+          sudo DEBIAN_FRONTEND=noninteractive apt-get -qy install tox
+      - name: Git checkout
+        uses: actions/checkout@v2
+      - name: Linting and style
+        run: tox -e flake8 -e black -e isort
+      - name: mypy
+        run: tox -e mypy
+  unit-tests:
+    name: Unit
+    runs-on: ubuntu-18.04
+    steps:
+      - name: Install dependencies
+        run: |
+          sudo DEBIAN_FRONTEND=noninteractive apt-get -qy update
+          sudo DEBIAN_FRONTEND=noninteractive apt-get -qy install python3-venv
+      - name: Git checkout
+        uses: actions/checkout@v2
+      - name: Unit tests
+        run: |
+          python3 -m venv unit-test-venv
+          . unit-test-venv/bin/activate
+          pip install tox tox-pip-version
+          tox -e py3 -e py3-bionic -e py3-xenial
diff -Nru ubuntu-advantage-tools-27.8~16.04.1/.github/workflows/ci-integration.yaml ubuntu-advantage-tools-27.9~16.04.1/.github/workflows/ci-integration.yaml
--- ubuntu-advantage-tools-27.8~16.04.1/.github/workflows/ci-integration.yaml 1970-01-01 00:00:00.000000000 +0000
+++ ubuntu-advantage-tools-27.9~16.04.1/.github/workflows/ci-integration.yaml 2022-05-18 19:44:15.000000000 +0000
@@ -0,0 +1,132 @@
+---
+
+name: CI (integration)
+
+on:
+  pull_request:
+
+# Cancel any in-progress job or run
+concurrency:
+  group: 'ci-${{ github.workflow }}-${{ github.ref }}'
+  cancel-in-progress: true
+
+defaults:
+  run:
+    shell: sh -ex {0}
+
+jobs:
+  check-secrets:
+    name: Check secrets
+    uses: ./.github/workflows/check-secrets-available.yaml
+    secrets:
+      SECRET_TO_CHECK: '${{ secrets.PYCLOUDLIB_CONFIG_CONTENTS }}'
+  package-builds:
+    name: Packaging
+    needs: check-secrets
+    if: ${{ needs.check-secrets.outputs.has-secrets == 'true' }}
+    runs-on: ubuntu-20.04
+    strategy:
+      matrix:
+        release: ['xenial', 'bionic', 'focal', 'jammy']
+    steps:
+      - name: Prepare build tools
+        env:
+          DEBFULLNAME: GitHub CI Auto Builder
+          DEBEMAIL: nobody@nowhere.invalid
+        run: |
+          sudo DEBIAN_FRONTEND=noninteractive apt-get -qy update
+          sudo DEBIAN_FRONTEND=noninteractive apt-get -qy install --no-install-recommends sbuild schroot ubuntu-dev-tools debootstrap git-buildpackage
+          sudo sbuild-adduser $USER
+          cp /usr/share/doc/sbuild/examples/example.sbuildrc /home/$USER/.sbuildrc
+      - name: Git checkout
+        uses: actions/checkout@v2
+      - name: Build package
+        run: |
+          gbp dch --ignore-branch --snapshot --distribution=${{ matrix.release }}
+          dch --local=~${{ matrix.release }} ""
+          sg sbuild -c "mk-sbuild --skip-proposed ${{ matrix.release }}"
+          sg sbuild -c "sbuild --dist='${{ matrix.release }}' --resolve-alternatives --no-clean-source --nolog --verbose --no-run-lintian --build-dir='${{ runner.temp }}'"
+          mv ../*.deb '${{ runner.temp }}' # Workaround for Debbug: #990734, drop in Jammy
+      - name: Archive debs as artifacts
+        uses: actions/upload-artifact@v3
+        with:
+          name: 'ci-debs-${{ matrix.release }}'
+          path: '${{ runner.temp }}/*.deb'
+          retention-days: 3
+  integration-tests:
+    name: Integration
+    needs: package-builds
+    runs-on: ubuntu-20.04
+    strategy:
+      # Disable fail-fast as these jobs are slow, so we want to extract
+      # as much information as possible from them.
+      fail-fast: false
+      matrix:
+        release: ['xenial', 'bionic', 'focal', 'jammy']
+        platform: ['lxd']
+        include:
+          - release: bionic
+            platform: awspro
+          - release: bionic
+            platform: gcppro
+    steps:
+      - name: Prepare test tools
+        run: |
+          sudo DEBIAN_FRONTEND=noninteractive apt-get -qy update
+          sudo DEBIAN_FRONTEND=noninteractive apt-get -qy install tox distro-info
+          sudo adduser $USER lxd
+      - name: Initialize LXD
+        if: matrix.platform == 'lxd' || matrix.platform == 'vm'
+        run: sudo lxd init --auto
+      - name: Git checkout
+        uses: actions/checkout@v2
+      - name: Retrieve debs
+        uses: actions/download-artifact@v3
+        with:
+          name: 'ci-debs-${{ matrix.release }}'
+          path: '${{ runner.temp }}'
+      - name: Canonicalize deb filenames
+        working-directory: '${{ runner.temp }}'
+        run: |
+          ln -s ubuntu-advantage-tools*.deb ubuntu-advantage-tools-${{ matrix.release }}.deb
+          ln -s ubuntu-advantage-pro*.deb ubuntu-advantage-pro-${{ matrix.release }}.deb
+      - name: Behave
+        env:
+          PYCLOUDLIB_CONFIG_CONTENTS: '${{ secrets.PYCLOUDLIB_CONFIG_CONTENTS }}'
+          GOOGLE_APPLICATION_CREDENTIALS_CONTENTS: '${{ secrets.GOOGLE_APPLICATION_CREDENTIALS_CONTENTS }}'
+          SSH_PRIVATE_KEY: '${{ secrets.SSH_PRIVATE_KEY }}'
+          SSH_PUBLIC_KEY: '${{ secrets.SSH_PUBLIC_KEY }}'
+          UACLIENT_BEHAVE_DEBS_PATH: '${{ runner.temp }}'
+          UACLIENT_BEHAVE_ARTIFACT_DIR: '${{ runner.temp }}/artifacts/behave-${{ matrix.platform }}-${{ matrix.release }}'
+          UACLIENT_BEHAVE_SNAPSHOT_STRATEGY: '1'
+          UACLIENT_BEHAVE_BUILD_PR: '1'
+          UACLIENT_BEHAVE_CONTRACT_TOKEN: '${{ secrets.UACLIENT_BEHAVE_CONTRACT_TOKEN }}'
+          UACLIENT_BEHAVE_CONTRACT_TOKEN_STAGING: '${{ secrets.UACLIENT_BEHAVE_CONTRACT_TOKEN_STAGING }}'
+          UACLIENT_BEHAVE_CONTRACT_TOKEN_STAGING_EXPIRED: '${{ secrets.UACLIENT_BEHAVE_CONTRACT_TOKEN_STAGING_EXPIRED }}'
+        run: |
+          PYCLOUDLIB_CONFIG="$(mktemp --tmpdir="${{ runner.temp }}" pycloudlib.toml.XXXXXXXXXX)"
+          GOOGLE_APPLICATION_CREDENTIALS="$(mktemp --tmpdir="${{ runner.temp }}" gcloud.json.XXXXXXXXXX)"
+          export PYCLOUDLIB_CONFIG
+          export GOOGLE_APPLICATION_CREDENTIALS
+
+          # Dump secrets using a subshell to avoid leaks due to xtrace.
+          # Use printf as dash's echo always interprets control sequences (e.g. \n).
+          sh -c 'printf "%s\n" "$PYCLOUDLIB_CONFIG_CONTENTS" > "$PYCLOUDLIB_CONFIG"'
+          sh -c 'printf "%s\n" "$GOOGLE_APPLICATION_CREDENTIALS_CONTENTS" > "$GOOGLE_APPLICATION_CREDENTIALS"'
+
+          # SSH keys (should match what is specified in pycloudlib.toml)
+          mkdir ~/.ssh
+          touch ~/.ssh/cloudinit_id_rsa
+          chmod 600 ~/.ssh/cloudinit_id_rsa
+          sh -c 'printf "%s\n" "$SSH_PRIVATE_KEY" > ~/.ssh/cloudinit_id_rsa'
+          sh -c 'printf "%s\n" "$SSH_PUBLIC_KEY" > ~/.ssh/cloudinit_id_rsa.pub'
+
+          uversion=$(ubuntu-distro-info --series='${{ matrix.release }}' --release | cut -d' ' -f1)
+          sg lxd -c "tox -e 'behave-${{ matrix.platform }}-$uversion' -- --tags=-slow"
+      - name: Archive test artifacts
+        if: always()
+        uses: actions/upload-artifact@v3
+        with:
+          name: 'ci-behave-${{ matrix.release }}'
+          path: '${{ runner.temp }}/artifacts/behave*'
+          retention-days: 7
diff -Nru ubuntu-advantage-tools-27.8~16.04.1/.github/workflows/ci-workflows.yaml ubuntu-advantage-tools-27.9~16.04.1/.github/workflows/ci-workflows.yaml
--- ubuntu-advantage-tools-27.8~16.04.1/.github/workflows/ci-workflows.yaml 1970-01-01 00:00:00.000000000 +0000
+++ ubuntu-advantage-tools-27.9~16.04.1/.github/workflows/ci-workflows.yaml 2022-05-18 19:44:15.000000000 +0000
@@ -0,0 +1,26 @@
+---
+
+name: CI (workflows)
+
+on:
+  push:
+  pull_request:
+
+defaults:
+  run:
+    shell: sh -ex {0}
+
+jobs:
+  lint-and-style:
+    name: Static Analysis
+    runs-on: ubuntu-latest
+    steps:
+      - name: Install dependencies
+        run: |
+          sudo DEBIAN_FRONTEND=noninteractive apt-get -qy update
+          sudo DEBIAN_FRONTEND=noninteractive apt-get -qy install yamllint
+      - name: Git checkout
+        uses: actions/checkout@v2
+      - name: Linting and style
+        working-directory: .github/workflows
+        run: yamllint --strict .
diff -Nru ubuntu-advantage-tools-27.8~16.04.1/.github/workflows/cloud-cleanup.yaml ubuntu-advantage-tools-27.9~16.04.1/.github/workflows/cloud-cleanup.yaml
--- ubuntu-advantage-tools-27.8~16.04.1/.github/workflows/cloud-cleanup.yaml 1970-01-01 00:00:00.000000000 +0000
+++ ubuntu-advantage-tools-27.9~16.04.1/.github/workflows/cloud-cleanup.yaml 2022-05-18 19:44:15.000000000 +0000
@@ -0,0 +1,56 @@
+---
+
+name: Cloud Cleanup
+
+on:
+  schedule:
+    - cron: '42 2 * * *'
+
+defaults:
+  run:
+    shell: sh -ex {0}
+
+jobs:
+  check-secrets:
+    uses: ./.github/workflows/check-secrets-available.yaml
+    secrets:
+      # Use PYCLOUDLIB_CONFIG_CONTENTS as a flag for "secrets present".
+ SECRET_TO_CHECK: '${{ secrets.PYCLOUDLIB_CONFIG_CONTENTS }}' + cleanup-ec2: + name: Cleanup EC2 + needs: check-secrets + if: ${{ needs.check-secrets.outputs.has-secrets == 'true' }} + runs-on: ubuntu-latest + steps: + - name: Configure AWS credentials + uses: aws-actions/configure-aws-credentials@v1 + with: + aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }} + aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }} + aws-region: us-east-2 + - name: Delete stale CI instances + run: | + yesterday=$(date --utc --iso-8601=seconds --date=yesterday) + current_aws_principal=$(aws sts get-caller-identity | jq -r '.UserId') + stale_instances=$( + aws ec2 describe-instances \ + --query "Reservations[].Instances[?LaunchTime<=\`$yesterday\`][].InstanceId" \ + --filters "Name=tag:PrincipalId,Values=$current_aws_principal" 'Name=tag:Name,Values=uaclient-ci-*' \ + --output text + ) + [ -z "$stale_instances" ] || aws ec2 terminate-instances --instance-ids $stale_instances + cleanup-gce: + name: Cleanup GCE + runs-on: ubuntu-latest + steps: + - name: Configure GCE credentials + uses: 'google-github-actions/auth@v0' + with: + credentials_json: '${{ secrets.GOOGLE_APPLICATION_CREDENTIALS_CONTENTS }}' + - name: 'Set up GCloud SDK' + uses: 'google-github-actions/setup-gcloud@v0' + - name: Delete stale CI instances + run: | + export CLOUDSDK_CORE_DISABLE_PROMPTS=1 + yesterday=$(date --utc --iso-8601=seconds --date=yesterday) + gcloud compute instances list --format="value(name,zone)" --filter="creationTimestamp<=$yesterday" --filter="name ~ .*uaclient-ci.*" | awk '{ system("gcloud compute instances delete --zone="$2 " " $1) }' diff -Nru ubuntu-advantage-tools-27.8~16.04.1/.github/workflows/.yamllint ubuntu-advantage-tools-27.9~16.04.1/.github/workflows/.yamllint --- ubuntu-advantage-tools-27.8~16.04.1/.github/workflows/.yamllint 1970-01-01 00:00:00.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/.github/workflows/.yamllint 2022-05-18 19:44:15.000000000 +0000 @@ -0,0 +1,12 @@ +--- + +# Extend the default conf by adjusting some options. 
+extends: default + +rules: + indentation: + spaces: 2 + line-length: + max: 400 + truthy: + check-keys: false diff -Nru ubuntu-advantage-tools-27.8~16.04.1/integration-requirements.txt ubuntu-advantage-tools-27.9~16.04.1/integration-requirements.txt --- ubuntu-advantage-tools-27.8~16.04.1/integration-requirements.txt 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/integration-requirements.txt 2022-05-18 19:44:15.000000000 +0000 @@ -2,7 +2,8 @@ behave jsonschema PyHamcrest -pycloudlib @ git+https://github.com/canonical/pycloudlib.git@756a2c2de044ca60eaa7cdc76653d23a1339dc0a +pycloudlib @ git+https://github.com/canonical/pycloudlib.git@db36ef2dfc0a8d916c0e2c1794d7228d49de9192 +toml==0.10 # Simplestreams is not found on PyPi so pull from repo directly diff -Nru ubuntu-advantage-tools-27.8~16.04.1/Jenkinsfile ubuntu-advantage-tools-27.9~16.04.1/Jenkinsfile --- ubuntu-advantage-tools-27.8~16.04.1/Jenkinsfile 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/Jenkinsfile 1970-01-01 00:00:00.000000000 +0000 @@ -1,284 +0,0 @@ -pipeline { - agent any - - environment { - TMPDIR = "/tmp/$BUILD_TAG/" - UACLIENT_BEHAVE_JENKINS_BUILD_TAG = "${BUILD_TAG}" - UACLIENT_BEHAVE_JENKINS_CHANGE_ID = "${CHANGE_ID}" - UACLIENT_BEHAVE_BUILD_PR=1 - UACLIENT_BEHAVE_CONTRACT_TOKEN = credentials('ua-contract-token') - UACLIENT_BEHAVE_AWS_ACCESS_KEY_ID = credentials('ua-aws-access-key-id') - UACLIENT_BEHAVE_AWS_SECRET_ACCESS_KEY = credentials( - 'ua-aws-secret-access-key' - ) - UACLIENT_BEHAVE_AZ_CLIENT_ID = credentials('ua-azure-client-id') - UACLIENT_BEHAVE_AZ_CLIENT_SECRET = credentials( - 'ua-azure-client-secret' - ) - UACLIENT_BEHAVE_AZ_TENANT_ID = credentials('ua-azure-tenant') - UACLIENT_BEHAVE_AZ_SUBSCRIPTION_ID = credentials( - 'ua-azure-subscription-id' - ) - UACLIENT_BEHAVE_CONTRACT_TOKEN_STAGING = credentials( - 'ua-contract-token-staging' - ) - UACLIENT_BEHAVE_CONTRACT_TOKEN_STAGING_EXPIRED = credentials( - 'ua-contract-token-staging-expired' - ) - JOB_SUFFIX = sh(returnStdout: true, script: "basename ${JOB_NAME}| cut -d'-' -f2").trim() - } - - stages { - stage ('Setup Dependencies') { - steps { - deleteDir() - checkout scm - sh ''' - python3 -m venv $TMPDIR - . $TMPDIR/bin/activate - pip install tox # for tox supporting --parallel--safe-build - pip install tox-pip-version # To freeze pip version on some tests - ''' - } - } - stage("flake8") { - steps { - sh ''' - set +x - tox -e flake8 - ''' - } - } - stage("style") { - steps { - sh ''' - set +x - tox -e black -e isort - ''' - } - } - stage("mypy") { - steps { - sh ''' - set +x - tox -e mypy - ''' - } - } - stage ('Unit Tests') { - steps { - sh ''' - set +x - . $TMPDIR/bin/activate - tox --parallel--safe-build -e py3 - tox --parallel--safe-build -e py3-xenial - tox --parallel--safe-build -e py3-bionic - ''' - } - } - stage ('Package builds') { - parallel { - stage ('Package build: 16.04') { - environment { - BUILD_SERIES = "xenial" - SERIES_VERSION = "16.04" - PKG_VERSION = sh(returnStdout: true, script: "dpkg-parsechangelog --show-field Version").trim() - NEW_PKG_VERSION = "${PKG_VERSION}~${SERIES_VERSION}~${JOB_SUFFIX}" - ARTIFACT_DIR = "${TMPDIR}${BUILD_SERIES}" - } - steps { - sh ''' - set -x - mkdir ${ARTIFACT_DIR} - cp debian/changelog ${WORKSPACE}/debian/changelog-${SERIES_VERSION} - sed -i "s/${PKG_VERSION}/${NEW_PKG_VERSION}/" ${WORKSPACE}/debian/changelog-${SERIES_VERSION} - dpkg-source -l${WORKSPACE}/debian/changelog-${SERIES_VERSION} -b . 
- sbuild --resolve-alternatives --nolog --verbose --dist=${BUILD_SERIES} --no-run-lintian --append-to-version=~${SERIES_VERSION} ../ubuntu-advantage-tools*${NEW_PKG_VERSION}*dsc - cp ./ubuntu-advantage-tools*${SERIES_VERSION}*.deb ${ARTIFACT_DIR}/ubuntu-advantage-tools-${BUILD_SERIES}.deb - cp ./ubuntu-advantage-pro*${SERIES_VERSION}*.deb ${ARTIFACT_DIR}/ubuntu-advantage-pro-${BUILD_SERIES}.deb - ''' - } - } - stage ('Package build: 18.04') { - environment { - BUILD_SERIES = "bionic" - SERIES_VERSION = "18.04" - PKG_VERSION = sh(returnStdout: true, script: "dpkg-parsechangelog --show-field Version").trim() - NEW_PKG_VERSION = "${PKG_VERSION}~${SERIES_VERSION}~${JOB_SUFFIX}" - ARTIFACT_DIR = "${TMPDIR}${BUILD_SERIES}" - } - steps { - sh ''' - set -x - mkdir ${ARTIFACT_DIR} - cp debian/changelog ${WORKSPACE}/debian/changelog-${SERIES_VERSION} - sed -i "s/${PKG_VERSION}/${NEW_PKG_VERSION}/" ${WORKSPACE}/debian/changelog-${SERIES_VERSION} - dpkg-source -l${WORKSPACE}/debian/changelog-${SERIES_VERSION} -b . - sbuild --resolve-alternatives --nolog --verbose --dist=${BUILD_SERIES} --no-run-lintian --append-to-version=~${SERIES_VERSION} ../ubuntu-advantage-tools*${NEW_PKG_VERSION}*dsc - cp ./ubuntu-advantage-tools*${SERIES_VERSION}*.deb ${ARTIFACT_DIR}/ubuntu-advantage-tools-${BUILD_SERIES}.deb - cp ./ubuntu-advantage-pro*${SERIES_VERSION}*.deb ${ARTIFACT_DIR}/ubuntu-advantage-pro-${BUILD_SERIES}.deb - ''' - } - } - stage ('Package build: 20.04') { - environment { - BUILD_SERIES = "focal" - SERIES_VERSION = "20.04" - PKG_VERSION = sh(returnStdout: true, script: "dpkg-parsechangelog --show-field Version").trim() - NEW_PKG_VERSION = "${PKG_VERSION}~${SERIES_VERSION}~${JOB_SUFFIX}" - ARTIFACT_DIR = "${TMPDIR}${BUILD_SERIES}" - } - steps { - sh ''' - set -x - mkdir ${ARTIFACT_DIR} - cp debian/changelog ${WORKSPACE}/debian/changelog-${SERIES_VERSION} - sed -i "s/${PKG_VERSION}/${NEW_PKG_VERSION}/" ${WORKSPACE}/debian/changelog-${SERIES_VERSION} - dpkg-source -l${WORKSPACE}/debian/changelog-${SERIES_VERSION} -b . - sbuild --resolve-alternatives --nolog --verbose --dist=${BUILD_SERIES} --no-run-lintian --append-to-version=~${SERIES_VERSION} ../ubuntu-advantage-tools*${NEW_PKG_VERSION}*dsc - cp ./ubuntu-advantage-tools*${SERIES_VERSION}*.deb ${ARTIFACT_DIR}/ubuntu-advantage-tools-${BUILD_SERIES}.deb - cp ./ubuntu-advantage-pro*${SERIES_VERSION}*.deb ${ARTIFACT_DIR}/ubuntu-advantage-pro-${BUILD_SERIES}.deb - ''' - } - } - stage ('Package build: 22.04') { - environment { - BUILD_SERIES = "jammy" - SERIES_VERSION = "22.04" - PKG_VERSION = sh(returnStdout: true, script: "dpkg-parsechangelog --show-field Version").trim() - NEW_PKG_VERSION = "${PKG_VERSION}~${SERIES_VERSION}~${JOB_SUFFIX}" - ARTIFACT_DIR = "${TMPDIR}${BUILD_SERIES}" - } - steps { - sh ''' - set -x - mkdir ${ARTIFACT_DIR} - cp debian/changelog ${WORKSPACE}/debian/changelog-${SERIES_VERSION} - sed -i "s/${PKG_VERSION}/${NEW_PKG_VERSION}/" ${WORKSPACE}/debian/changelog-${SERIES_VERSION} - dpkg-source -l${WORKSPACE}/debian/changelog-${SERIES_VERSION} -b . 
- sbuild --resolve-alternatives --nolog --verbose --dist=${BUILD_SERIES} --no-run-lintian --append-to-version=~${SERIES_VERSION} ../ubuntu-advantage-tools*${NEW_PKG_VERSION}*dsc - cp ./ubuntu-advantage-tools*${SERIES_VERSION}*.deb ${ARTIFACT_DIR}/ubuntu-advantage-tools-${BUILD_SERIES}.deb - cp ./ubuntu-advantage-pro*${SERIES_VERSION}*.deb ${ARTIFACT_DIR}/ubuntu-advantage-pro-${BUILD_SERIES}.deb - ''' - } - } - } - } - stage ('Integration Tests') { - parallel { - stage("lxc 16.04") { - environment { - UACLIENT_BEHAVE_DEBS_PATH = "${TMPDIR}xenial/" - UACLIENT_BEHAVE_ARTIFACT_DIR = "artifacts/behave-lxd-16.04" - UACLIENT_BEHAVE_EPHEMERAL_INSTANCE = 1 - UACLIENT_BEHAVE_SNAPSHOT_STRATEGY = 1 - } - steps { - sh ''' - set +x - . $TMPDIR/bin/activate - tox --parallel--safe-build -e behave-lxd-16.04 -- --tags="~slow" - ''' - } - } - stage("lxc 18.04") { - environment { - UACLIENT_BEHAVE_DEBS_PATH = "${TMPDIR}bionic/" - UACLIENT_BEHAVE_ARTIFACT_DIR = "artifacts/behave-lxd-18.04" - UACLIENT_BEHAVE_EPHEMERAL_INSTANCE = 1 - UACLIENT_BEHAVE_SNAPSHOT_STRATEGY = 1 - } - steps { - sh ''' - set +x - . $TMPDIR/bin/activate - tox --parallel--safe-build -e behave-lxd-18.04 -- --tags="~slow" - ''' - } - } - stage("lxc 20.04") { - environment { - UACLIENT_BEHAVE_DEBS_PATH = "${TMPDIR}focal/" - UACLIENT_BEHAVE_ARTIFACT_DIR = "artifacts/behave-lxd-20.04" - UACLIENT_BEHAVE_EPHEMERAL_INSTANCE = 1 - UACLIENT_BEHAVE_SNAPSHOT_STRATEGY = 1 - } - steps { - sh ''' - set +x - . $TMPDIR/bin/activate - tox --parallel--safe-build -e behave-lxd-20.04 -- --tags="~slow" - ''' - } - } - stage("lxc 22.04") { - environment { - UACLIENT_BEHAVE_DEBS_PATH = "${TMPDIR}jammy/" - UACLIENT_BEHAVE_ARTIFACT_DIR = "artifacts/behave-lxd-22.04" - UACLIENT_BEHAVE_EPHEMERAL_INSTANCE = 1 - UACLIENT_BEHAVE_SNAPSHOT_STRATEGY = 1 - } - steps { - sh ''' - set +x - . $TMPDIR/bin/activate - tox --parallel--safe-build -e behave-lxd-22.04 -- --tags="~slow" - ''' - } - } - stage("lxc vm 20.04") { - environment { - UACLIENT_BEHAVE_DEBS_PATH = "${TMPDIR}focal/" - UACLIENT_BEHAVE_ARTIFACT_DIR = "artifacts/behave-vm-20.04" - UACLIENT_BEHAVE_EPHEMERAL_INSTANCE = 1 - } - steps { - sh ''' - set +x - . $TMPDIR/bin/activate - tox --parallel--safe-build -e behave-vm-20.04 -- --tags="~slow" - ''' - } - } - stage("awspro 18.04") { - environment { - UACLIENT_BEHAVE_DEBS_PATH = "${TMPDIR}bionic/" - UACLIENT_BEHAVE_ARTIFACT_DIR = "artifacts/behave-awspro-18.04" - } - steps { - sh ''' - set +x - . $TMPDIR/bin/activate - tox --parallel--safe-build -e behave-awspro-18.04 -- --tags="~slow" - ''' - } - } - } - } - } - post { - always { - script { - try { - sh ''' - set +x - DATE=`date -d 'now+1day' +%m/%d/%Y` - git clone https://github.com/canonical/server-test-scripts.git - python3 server-test-scripts/ubuntu-advantage-client/lxd_cleanup.py --prefix ubuntu-behave-test-$CHANGE_ID --before-date $DATE || true - ''' - - junit "pytest_results.xml" - junit "reports/*.xml" - } catch (Exception e) { - echo e.toString() - currentBuild.result = 'UNSTABLE' - } - try { - archiveArtifacts "artifacts/**/**/*" - } catch (Exception e) { - echo "No integration test artifacts found. Presume success." 
- } - } - } - } -} diff -Nru ubuntu-advantage-tools-27.8~16.04.1/lib/cloud-id-shim.sh ubuntu-advantage-tools-27.9~16.04.1/lib/cloud-id-shim.sh --- ubuntu-advantage-tools-27.8~16.04.1/lib/cloud-id-shim.sh 1970-01-01 00:00:00.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/lib/cloud-id-shim.sh 2022-05-18 19:44:15.000000000 +0000 @@ -0,0 +1,24 @@ +#!/bin/sh + +RUN_CLOUD_INIT=/run/cloud-init +if [ ! -d "$RUN_CLOUD_INIT" ]; then + exit 0 +fi + +CLOUD_ID_LINK_PATH=$RUN_CLOUD_INIT/cloud-id +if [ -L "$CLOUD_ID_LINK_PATH" ]; then + exit 0 +fi + +CLOUD_ID=$(cloud-id 2>/dev/null) || CLOUD_ID="" +if [ -z "$CLOUD_ID" ]; then + exit 0 +fi + +CLOUD_ID_FILE_PATH=$RUN_CLOUD_INIT/cloud-id-$CLOUD_ID + +echo "$CLOUD_ID" > "$CLOUD_ID_FILE_PATH" +ln -s "$CLOUD_ID_FILE_PATH" "$CLOUD_ID_LINK_PATH" +echo "Created symlink $CLOUD_ID_LINK_PATH -> $CLOUD_ID_FILE_PATH." >&2 + +exit 0 diff -Nru ubuntu-advantage-tools-27.8~16.04.1/lib/daemon.py ubuntu-advantage-tools-27.9~16.04.1/lib/daemon.py --- ubuntu-advantage-tools-27.8~16.04.1/lib/daemon.py 1970-01-01 00:00:00.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/lib/daemon.py 2022-05-18 19:44:15.000000000 +0000 @@ -0,0 +1,60 @@ +import logging +import sys + +from systemd.daemon import notify # type: ignore + +from uaclient import daemon +from uaclient.config import UAConfig +from uaclient.defaults import DEFAULT_LOG_FORMAT + +LOG = logging.getLogger("ua") + + +def setup_logging(console_level, log_level, log_file, logger): + logger.setLevel(log_level) + + logger.handlers = [] + + console_handler = logging.StreamHandler(sys.stderr) + console_handler.setFormatter(logging.Formatter("%(message)s")) + console_handler.setLevel(console_level) + console_handler.set_name("ua-console") + logger.addHandler(console_handler) + + file_handler = logging.FileHandler(log_file) + file_handler.setLevel(log_level) + file_handler.setFormatter(logging.Formatter(DEFAULT_LOG_FORMAT)) + file_handler.set_name("ua-file") + logger.addHandler(file_handler) + + +def main() -> int: + + cfg = UAConfig() + setup_logging( + logging.INFO, logging.DEBUG, log_file=cfg.daemon_log_file, logger=LOG + ) + # The ua-daemon logger should log everything to its file + # Make sure the ua-daemon logger does not generate double logging + # by propagating to the root logger + LOG.propagate = False + # The root logger should only log errors to the daemon log file + setup_logging( + logging.CRITICAL, + logging.ERROR, + log_file=cfg.daemon_log_file, + logger=logging.getLogger(), + ) + + LOG.debug("daemon starting") + + notify("READY=1") + + daemon.poll_for_pro_license(cfg) + + LOG.debug("daemon ending") + return 0 + + +if __name__ == "__main__": + sys.exit(main()) diff -Nru ubuntu-advantage-tools-27.8~16.04.1/lib/license_check.py ubuntu-advantage-tools-27.9~16.04.1/lib/license_check.py --- ubuntu-advantage-tools-27.8~16.04.1/lib/license_check.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/lib/license_check.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,28 +0,0 @@ -""" -Try to auto-attach in a GCP instance. 
This should only work -if the instance has a new UA license attached to it -""" -import logging - -from uaclient.cli import setup_logging -from uaclient.config import UAConfig -from uaclient.jobs.license_check import gcp_auto_attach - -LOG = logging.getLogger("ua_lib.license_check") - -if __name__ == "__main__": - cfg = UAConfig() - # The ua-license-check logger should log everything to its file - setup_logging( - logging.CRITICAL, - logging.DEBUG, - log_file=cfg.license_check_log_file, - logger=LOG, - ) - # Make sure the ua-license-check logger does not generate double logging - LOG.propagate = False - # The root logger should log any error to the timer log file - setup_logging( - logging.CRITICAL, logging.ERROR, log_file=cfg.license_check_log_file - ) - gcp_auto_attach(cfg=cfg) diff -Nru ubuntu-advantage-tools-27.8~16.04.1/lib/reboot_cmds.py ubuntu-advantage-tools-27.9~16.04.1/lib/reboot_cmds.py --- ubuntu-advantage-tools-27.8~16.04.1/lib/reboot_cmds.py 2022-04-01 13:27:49.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/lib/reboot_cmds.py 2022-05-18 19:44:15.000000000 +0000 @@ -83,6 +83,7 @@ ) sys.exit(1) cfg.remove_notice("", messages.FIPS_SYSTEM_REBOOT_REQUIRED.msg) + cfg.remove_notice("", messages.FIPS_REBOOT_REQUIRED_MSG) def refresh_contract(cfg): diff -Nru ubuntu-advantage-tools-27.8~16.04.1/lib/upgrade_lts_contract.py ubuntu-advantage-tools-27.9~16.04.1/lib/upgrade_lts_contract.py --- ubuntu-advantage-tools-27.8~16.04.1/lib/upgrade_lts_contract.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/lib/upgrade_lts_contract.py 2022-05-18 19:44:15.000000000 +0000 @@ -53,7 +53,8 @@ def process_contract_delta_after_apt_lock() -> None: logging.debug("Check whether to upgrade-lts-contract") - if not UAConfig().is_attached: + cfg = UAConfig() + if not cfg.is_attached: logging.debug("Skipping upgrade-lts-contract. Machine is unattached") return out, _err = subp(["lsof", "/var/lib/apt/lists/lock"], rcs=[0, 1]) @@ -74,12 +75,6 @@ logging.warning(msg) sys.exit(1) - if current_release == "trusty": - msg = "Unable to execute upgrade-lts-contract.py on trusty" - print(msg) - logging.warning(msg) - sys.exit(1) - past_release = current_codename_to_past_codename.get(current_release) if past_release is None: msg = "Could not find past release for: {}".format(current_release) @@ -104,6 +99,7 @@ logging.debug(msg) process_entitlements_delta( + cfg=cfg, past_entitlements=past_entitlements, new_entitlements=new_entitlements, allow_enable=True, diff -Nru ubuntu-advantage-tools-27.8~16.04.1/Makefile ubuntu-advantage-tools-27.9~16.04.1/Makefile --- ubuntu-advantage-tools-27.8~16.04.1/Makefile 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/Makefile 2022-05-18 19:44:15.000000000 +0000 @@ -6,6 +6,7 @@ clean: rm -f *.build *.buildinfo *.changes .coverage *.deb *.dsc *.tar.gz *.tar.xz + rm -f azure-*-uaclient-ci-* ec2-uaclient-ci-* gcp-*-uaclient-ci-* lxd-container-*-uaclient-ci-* lxd-virtual-machine-*-uaclient-ci-* rm -rf *.egg-info/ .tox/ .cache/ .mypy_cache/ find . -type f -name '*.pyc' -delete find . 
-type d -name '*__pycache__' -delete @@ -27,10 +28,6 @@ @tox testdeps: -ifneq (,$(findstring trusty,$(TOXENV))) - @echo Pinning virtualenv to 20.0.31 on trusty because 32 breaks py3.4 - pip install virtualenv==20.0.31 -endif pip install tox pip install tox-pip-version pip install tox-setuptools-version diff -Nru ubuntu-advantage-tools-27.8~16.04.1/.pre-commit-config.yaml ubuntu-advantage-tools-27.9~16.04.1/.pre-commit-config.yaml --- ubuntu-advantage-tools-27.8~16.04.1/.pre-commit-config.yaml 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/.pre-commit-config.yaml 2022-05-18 19:44:15.000000000 +0000 @@ -1,6 +1,6 @@ repos: - repo: https://github.com/ambv/black - rev: 19.3b0 # Also stored in dev-requirements.txt; update both together! + rev: 22.3.0 # Also stored in dev-requirements.txt; update both together! hooks: - id: black - repo: https://github.com/pycqa/isort diff -Nru ubuntu-advantage-tools-27.8~16.04.1/README.md ubuntu-advantage-tools-27.9~16.04.1/README.md --- ubuntu-advantage-tools-27.8~16.04.1/README.md 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/README.md 2022-05-18 19:44:15.000000000 +0000 @@ -1,17 +1,28 @@ -# Ubuntu Advantage Client +

+# Ubuntu Advantage Client

-The Ubuntu Advantage client provides users with a simple mechanism to +###### Clean and Consistent CLI for your Ubuntu Advantage Systems + +![Latest Version](https://img.shields.io/github/v/tag/canonical/ubuntu-advantage-client.svg?label=Latest%20Version) +![CI](https://github.com/canonical/ubuntu-advantage-client/actions/workflows/ci-base.yaml/badge.svg?branch=main) + +The Ubuntu Advantage (UA) Client provides users with a simple mechanism to view, enable, and disable offerings from Canonical on their system. The following entitlements are supported: -- [Common Criteria EAL2 certification artifacts provisioning](https://ubuntu.com/cc-eal) -- [Canonical CIS Benchmark Audit Tool](https://ubuntu.com/cis-audit) -- [Ubuntu Extended Security Maintenance](https://ubuntu.com/esm) -- [Robot Operating System Extended Security Maintenance](https://ubuntu.com/robotics/ros-esm) -- [FIPS 140-2 Certified Modules](https://ubuntu.com/fips) -- [FIPS 140-2 Non-Certified Module Updates](https://ubuntu.com/fips) -- [Livepatch Service](https://www.ubuntu.com/livepatch) - +- [Common Criteria EAL2 Certification Tooling](https://ubuntu.com/security/cc) +- [CIS Benchmark Audit Tooling](https://ubuntu.com/security/cis) +- [Ubuntu Security Guide (USG) Tooling](https://ubuntu.com/security/certifications/docs/usg) +- [Ubuntu Extended Security Maintenance (ESM)](https://ubuntu.com/security/esm) +- [Robot Operating System (ROS) Extended Security Maintenance](https://ubuntu.com/robotics/ros-esm) +- [FIPS 140-2 Certified Modules](https://ubuntu.com/security/fips) +- [FIPS 140-2 Non-Certified Module Updates](https://ubuntu.com/security/fips) +- [Livepatch Service](https://ubuntu.com/security/livepatch) ## Obtaining the Client @@ -20,24 +31,6 @@ will also contain `ubuntu-advantage-pro` which automates machine attach on boot for custom AWS, Azure and GCP images. -### Support Matrix for the client -Ubuntu Advantage services are only available on Ubuntu Long Term Support (LTS) releases. -On interim Ubuntu releases, `ua status` will report most of the services as 'n/a' and disallow enabling those services. - -Below is a list of platforms and releases ubuntu-advantage-tools supports - -| Ubuntu Release | Build Architectures | Support Level | -| -------- | -------- | -------- | -| Trusty | amd64, arm64, armhf, i386, powerpc, ppc64el | Last release 19.6 | -| Xenial | amd64, arm64, armhf, i386, powerpc, ppc64el, s390x | Active SRU of all features | -| Bionic | amd64, arm64, armhf, i386, ppc64el, s390x | Active SRU of all features | -| Focal | amd64, arm64, armhf, ppc64el, riscv64, s390x | Active SRU of all features | -| Groovy | amd64, arm64, armhf, ppc64el, riscv64, s390x | Last release 27.1 | -| Hirsute | amd64, arm64, armhf, ppc64el, riscv64, s390x | Last release 27.5 | -| Impish | amd64, arm64, armhf, ppc64el, riscv64, s390x | Active SRU of all features | - -Note: ppc64el will not have support for APT JSON hook messaging due to insufficient golang packages - Ubuntu Pro images are available on the following cloud platforms on all Ubuntu LTS releases (Xenial, Bionic, Focal): 1. AWS: [Ubuntu PRO](https://ubuntu.com/aws/pro) and [Ubuntu PRO FIPS](https://ubuntu.com/aws/fips) 2. Azure: [Ubuntu PRO](https://ubuntu.com/azure/pro) and [Ubuntu PRO FIPS](https://ubuntu.com/azure/fips) @@ -54,558 +47,26 @@ Users can manually run the `ua` command to learn more or view the manpage. 
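For example, once the deb is installed, a few read-only commands are enough to explore the client (a minimal sketch; `ua version` and `ua status` appear throughout this document, and `--help` lists the remaining subcommands):

```shell
# Print the installed ubuntu-advantage-tools version
ua version
# Show which Ubuntu Advantage services are available and enabled on this machine
ua status
# List all subcommands and global options
ua --help
```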
-## Terminology - The following vocabulary is used to describe different aspects of the work -Ubuntu Advantage Client performs: - -| Term | Meaning | -| -------- | -------- | -| UA Client | The python command line client represented in this ubuntu-advantage-client repository. It is installed on each Ubuntu machine and is the entry-point to enable any Ubuntu Advantage commercial service on an Ubuntu machine. | -| Contract Server | The backend service exposing a REST API to which UA Client authenticates in order to obtain contract and commercial service information and manage which support services are active on a machine.| -| Entitlement/Service | An Ubuntu Advantage commercial support service such as FIPS, ESM, Livepatch, CIS-Audit to which a contract may be entitled | -| Affordance | Service-specific list of applicable architectures and Ubuntu series on which a service can run | -| Directives | Service-specific configuration values which are applied to a service when enabling that service | -| Obligations | Service-specific policies that must be instrumented for support of a service. Example: `enableByDefault: true` means that any attached machine **MUST** enable a service on attach | - - -## Architecture -Ubuntu Advantage client, hereafter "UA client", is a python3-based command line -utility. It provides a CLI to attach, detach, enable, -disable and check status of support related services. - -The package `ubuntu-advantage-tools` also provides a C++ APT hook which helps -advertise ESM service and available packages in MOTD and during various apt -commands. - -The `ubuntu-advantage-pro` package delivers auto-attach auto-enable -functionality via init scripts and systemd services for various cloud -platforms. - -By default, Ubuntu machines are deployed in an unattached state. A machine can -get manually or automatically attached to a specific contract by interacting -with the Contract Server REST API. Any change in state of services or machine -attach results in additional interactions with the Contract Server API to -validate such operations. - -### Attaching a machine -Each Ubuntu SSO account holder has access to one or more contracts. To attach -a machine to an Ubuntu Advantage contract: - -* An Ubuntu SSO account holder must obtain a contract token from -https://ubuntu.com/advantage. -* Run `sudo ua attach <contract_token>` on the machine - - Ubuntu Pro images for AWS, Azure and GCP perform an auto-attach without tokens -* UA Client reads config from /etc/ubuntu-advantage/uaclient.conf to obtain - the contract_url (default: https://contracts.canonical.com) -* UA Client POSTs to the Contract Server API @ - /api/v1/context/machines/token providing the \<contract_token\> -* The Contract Server responds with a JSON blob containing a unique machine - token, service credentials, affordances, directives and obligations to allow - enabling and disabling Ubuntu Advantage services -* UA client writes the machine token API response to the root-readonly - /var/lib/ubuntu-advantage/private/machine-token.json -* UA client auto-enables any services defined with - `obligations:{enableByDefault: true}` - -#### Attaching with --attach-config -Running `ua attach` with the `--attach-config` option may be better suited to certain scenarios. - -When using `--attach-config` the token must be passed in the file rather than on the command line. This is useful in situations where it is preferred to keep the secret token in a file.
- -Optionally, the attach config file can be used to override the services that are automatically enabled as a part of the attach process. - -An attach config file looks like this: -```yaml -token: YOUR_TOKEN_HERE # required -enable_services: # optional list of service names to auto-enable - - esm-infra - - esm-apps - - cis -``` - -And can be passed on the cli like this: -```shell -sudo ua attach --attach-config /path/to/file.yaml -``` - -### Enabling a service -Each service controlled by UA client will have a python module in -uaclient/entitlements/\*.py which handles setup and teardown of services when -enabled or disabled. - -If a contract entitles a machine to a service, `root` user can enable the -service with `ua enable `. If a service can be disabled -`ua disable ` will be permitted. - -The goal of the UA client is to remain simple and flexible and let the -contracts backend drive dynamic changes in contract offerings and constraints. -In pursuit of that goal, the UA client obtains most of it's service constraints -from a machine token that it obtains from the Contract Server API. - -The UA Client is simple in that it relies on the machine token on the attached -machine to describe whether a service is applicable for an environment and what -configuration is required to properly enable that service. - -Any interactions with the Contract server API are defined as UAContractClient -class methods in [uaclient/contract.py](uaclient/contract.py). - -### Using a proxy -The UA Client can be configured to use an http/https proxy as needed for network requests. -In addition, the UA Client will automatically set up proxies for all programs required for -enabling Ubuntu Advantage services. This includes APT, Snaps, and Livepatch. - -The proxy can be set to the config file under `ua config`. HTTP/HTTPS proxies are -set using `http_proxy` and `https_proxy`, respectively. APT proxies are defined -separately, using `apt_http_proxy` and `apt_https_proxy`. The proxy is identified -by a string formatted as: - -`://[:@]:` - -### Timer jobs -UA client sets up a systemd timer to run jobs that need to be executed recurrently. -The timer itself ticks every 5 minutes on average, and decides which jobs need -to be executed based on their _intervals_. - -Jobs are executed by the timer script if: -- The script has not yet run successfully, or -- Their interval since last successful run is already exceeded. - -There is a random delay applied to the timer, to desynchronize job execution time -on machines spun at the same time, avoiding multiple synchronized calls to the -same service. - -Current jobs being checked and executed are: - -| Job | Description | Interval | -| --- | ----------- | -------- | -| update_messaging | Update MOTD and APT messages | 6 hours | -| update_status | Update UA status | 12 hours | -| gcp_auto_attach | Try to auto-attach on a GCP instance | 5 minutes | - -- The `update_messaging` job makes sure that the MOTD and APT messages match the -available/enabled services on the system, showing information about available -packages or security updates. See [MOTD messages](#motd-messages). -- The `update_status` job makes sure the `ua status` command will have the latest -information even when executed by a non-root user, updating the -`/var/lib/ubuntu-advantage/status.json` file. -- The `gcp_auto_attach` job is only operable on Google Cloud Platform (GCP) generic -Ubuntu VMs without an active Ubuntu Advantage license. 
It polls GCP metadata every 5 -minutes to discover if a license has been attached to the VM through Google Cloud and -will perform `ua auto-attach` in that case. - -The timer intervals can be changed using the `ua config set` command. -```bash -# Make the update_status job run hourly -$ sudo ua config set update_status_timer=3600 -``` -Setting an interval to zero disables the job. -```bash -# Disable the update_status job -$ sudo ua config set update_status_timer=0 -``` - -## Directory layout -The following describes the intent of UA client related directories: - - -| File/Directory | Intent | -| -------- | -------- | -| ./tools | Helpful scripts used to publish, release or test various aspects of UA client | -| ./features/ | Behave BDD integration tests for UA Client -| ./uaclient/ | collection of python modules which will be packaged into ubuntu-advantage-tools package to deliver the UA Client CLI | -| uaclient.entitlements | Service-specific \*Entitlement class definitions which perform enable, disable, status, and entitlement operations etc. All classes derive from base.py:UAEntitlement and many derive from repo.py:RepoEntitlement | -| ./uaclient/cli.py | The entry-point for the command-line client -| ./uaclient/clouds/ | Cloud-platform detection logic used in Ubuntu Pro to determine if a given should be auto-attached to a contract | -| uaclient.contract | Module for interacting with the Contract Server API | -| ./demo | Various stale developer scripts for setting up one-off demo environments. (Not needed often) -| ./apt-hook/ | the C++ apt-hook delivering MOTD and apt command notifications about UA support services | -| ./apt-conf.d/ | apt config files delivered to /etc/apt/apt-conf.d to automatically allow unattended upgrades of ESM security-related components. If apt proxy settings are configured, an additional apt config file will be placed here to configure the apt proxy. | -| /etc/ubuntu-advantage/uaclient.conf | Configuration file for the UA client.| -| /var/lib/ubuntu-advantage/private | `root` read-only directory containing Contract API responses, machine-tokens and service credentials | -| /var/log/ubuntu-advantage.log | `root` read-only log of ubuntu-advantage operations | - - -## Collecting logs -The `ua collect-logs` command creates a tarball with all relevant data for debugging possible problems with UA. It puts together: -- The UA Client configuration file (the default is `/etc/ubuntu-advantage/uaclient.conf`) -- The UA Client log files (the default is `/var/log/ubuntu-advantage*`) -- The files in `/etc/apt/sources.list.d/*` related to UA -- Output of `systemctl status` for the UA Client related services -- Status of the timer jobs, `canonical-livepatch`, and the systemd timers -- Output of `cloud-id`, `dmesg` and `journalctl` - -Files with sensitive data are not included in the tarball. As of now, the command must be run as root. - -Running the command creates a `ua_logs.tar.gz` file in the current directory. -The output file path/name can be changed using the `-o` option. - -## Testing - -All unit and lint tests are run using `tox`. We also use `tox-pip-version` to specify an older pip version as a workaround: we have some required dependencies that can't meet the strict compatibility checks of current pip versions. - -First, install `tox` and `tox-pip-version` - you'll only have to do this once. - -```shell -make testdeps -``` - -Then you can run the unit and lint tests: - -```shell -tox -``` - -The client also includes built-in dep8 tests. 
These are run as follows: - -```shell -autopkgtest -U --shell-fail . -- lxd ubuntu:xenial -``` - -### Integration Tests - -ubuntu-advantage-client uses [behave](https://behave.readthedocs.io) -for its integration testing. - -The integration test definitions are stored in the `features/` -directory and consist of two parts: `.feature` files that define the -tests we want to run, and `.py` files which implement the underlying -logic for those tests. - -By default, integration tests will do the folowing on a given cloud platform: - * Launch an instance running latest daily image of the target Ubuntu release - * Add the Ubuntu advantage client daily build PPA: [ppa:ua-client/daily](https://code.launchpad.net/~ua-client/+archive/ubuntu/daily) - * Install the appropriate ubuntu-advantage-tools and ubuntu-advantage-pro deb - * Stop the instance and snapshot it creating an updated bootable image for - test runs - * Launch a fresh instance based on the boot-image created and exercise tests - -The testing can be overridden to run using a local copy of the ubuntu-advantage-client source code instead of the daily PPA by providing the following environment variable to the behave test runner: -```UACLIENT_BEHAVE_BUILD_PR=1``` - -> Note that, by default, we cache the source even when `UACLIENT_BEHAVE_BUILD_PR=1`. This means that if you change the python code locally and want to run the behave tests against your new version, you need to either delete the cache (`rm /tmp/pr_source.tar.gz`) or also set `UACLIENT_BEHAVE_CACHE_SOURCE=0`. - -To run the tests, you can use `tox`: - -```shell -tox -e behave-20.04 -``` - -or, if you just want to run a specific file, or a test within a file: - -```shell -tox -e behave-20.04 features/unattached_commands.feature -tox -e behave-20.04 features/unattached_commands.feature:55 -``` - -As can be seen, this will run behave tests only for release 20.04 (Focal Fossa). We are currently -supporting 4 distinct releases: - -* 20.04 (Focal Fossa) -* 18.04 (Bionic Beaver) -* 16.04 (Xenial Xerus) -* 14.04 (Trusty Tahr) - -Therefore, to change which release to run the behave tests against, just change the release version -on the behave command. - -Furthermore, when developing/debugging a new scenario: - - 1. Add a `@wip` tag decorator on the scenario - 2. To only run @wip scenarios run: `tox -e behave-20.04 -- -w` - 3. If you want to use a debugger: - 1. Add ipdb to integration-requirements.txt - 2. Add ipdb.set_trace() in the code block you wish to debug - -(If you're getting started with behave, we recommend at least reading -through [the behave -tutorial](https://behave.readthedocs.io/en/latest/tutorial.html) to get -an idea of how it works, and how tests are written.) - -#### Iterating Locally - -To make running the tests repeatedly less time-intensive, our behave -testing setup has support for reusing images between runs via two -configuration options (provided in environment variables), -`UACLIENT_BEHAVE_IMAGE_CLEAN` and `UACLIENT_BEHAVE_REUSE_IMAGE`. 
- -To avoid the test framework cleaning up the image it creates, you can -run it like this: - -```sh -UACLIENT_BEHAVE_IMAGE_CLEAN=0 tox -e behave -``` - -which will emit a line like this above the test summary: - -``` -Image cleanup disabled, not deleting: behave-image-1572443113978755 -``` - -You can then reuse that image by plugging its name into your next test -run, like so: - -```sh -UACLIENT_BEHAVE_REUSE_IMAGE=behave-image-1572443113978755 tox -e behave -``` - -If you've done this correctly, you should see something like -`reuse_image = behave-image-1572443113978755` in the "Config options" -output, and test execution should start immediately (without the usual -image build step). - -(Note that this handling is specific to our behave tests as it's -performed in `features/environment.py`, so don't expect to find -documentation about it outside of this codebase.) - -For development purposes there is `reuse_container` option. -If you would like to run behave tests in an existing container -you need to add `-D reuse_container=container_name`: - -```sh -tox -e behave -D reuse_container=container_name -``` - -#### Optimizing total run time of integration tests with snapshots -When `UACLIENT_BEHAVE_SNAPSHOT_STRATEGY=1` we create a snapshot of an instance -with ubuntu-advantage-tools installed and restore from that snapshot for all tests. -This adds an upfront cost that is amortized across several test scenarios. - -Based on some rough testing in July 2021, these are the situations -when you should set UACLIENT_BEHAVE_SNAPSHOT_STRATEGY=1 - -> At time of writing, starting a lxd.vm instance from a local snapshot takes -> longer than starting a fresh lxd.vm instance and installing ua. - -| machine_type | condition | -| ------------- | ------------------ | -| lxd.container | num_scenarios > 7 | -| lxd.vm | never | -| gcp | num_scenarios > 5 | -| azure | num_scenarios > 14 | -| aws | num_scenarios > 11 | - -#### Integration testing on EC2 -The following tox environments allow for testing focal on EC2: - -``` - # To test ubuntu-pro-images - tox -e behave-awspro-20.04 - # To test Canonical cloud images (non-ubuntu-pro) - tox -e behave-awsgeneric-20.04 -``` - -To run the test for a different release, just update the release version string. For example, -to run AWS pro xenial tests, you can run: - -``` -tox -e behave-awspro-16.04 -``` - -In order to run EC2 tests the following environment variables are required: - - UACLIENT_BEHAVE_AWS_ACCESS_KEY_ID - - UACLIENT_BEHAVE_AWS_SECRET_ACCESS_KEY - - -To specifically run non-ubuntu pro tests using canonical cloud-images an -additional token obtained from https://ubuntu.com/advantage needs to be set: - - UACLIENT_BEHAVE_CONTRACT_TOKEN= - -By default, the public AMIs for Ubuntu Pro testing used for each Ubuntu -release are defined in features/aws-ids.yaml. These ami-ids are determined by -running `./tools/refresh-aws-pro-ids`. - -Integration tests will read features/aws-ids.yaml to determine which default -AMI id to use for each supported Ubuntu release. - -To update `features/aws-ids.yaml`, run `./tools/refresh-aws-pro-ids` and put up -a pull request against this repo to updated that content from the ua-contracts -marketplace definitions. 
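As a quick usage sketch of that workflow (assuming AWS credentials for the helper script are already configured):

```shell
# Regenerate the default Ubuntu Pro AMI ids consumed by the integration tests
./tools/refresh-aws-pro-ids
# Review the refreshed ids before opening the pull request
git diff features/aws-ids.yaml
```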
- -* To manually run EC2 integration tests using packages from `ppa:ua-client/daily` provide the following environment vars: - -```sh -UACLIENT_BEHAVE_AWS_ACCESS_KEY_ID= UACLIENT_BEHAVE_AWS_SECRET_KEY= tox -e behave-awspro-20.04 -``` - -* To manually run EC2 integration tests with a specific AMI Id provide the -following environment variable to launch your specific AMI instead of building -a daily ubuntu-advantage-tools image. -```sh -UACLIENT_BEHAVE_REUSE_IMAGE=your-custom-ami tox -e behave-awspro-20.04 -``` - -#### Integration testing on Azure -The following tox environments allow for testing focal on Azure: - -``` - # To test ubuntu-pro-images - tox -e behave-azurepro-20.04 - # To test Canonical cloud images (non-ubuntu-pro) - tox -e behave-azuregeneric-20.04 -``` - -To run the test for a different release, just update the release version string. For example, -to run Azure pro xenial tests, you can run: - -``` -tox -e behave-azurepro-16.04 -``` - -In order to run Azure tests the following environment variables are required: - - UACLIENT_BEHAVE_AZ_CLIENT_ID - - UACLIENT_BEHAVE_AZ_CLIENT_SECRET - - UACLIENT_BEHAVE_AZ_SUBSCRIPTION_ID - - UACLIENT_BEHAVE_AZ_TENANT_ID - - -To specifically run non-ubuntu pro tests using canonical cloud-images an -additional token obtained from https://ubuntu.com/advantage needs to be set: - - UACLIENT_BEHAVE_CONTRACT_TOKEN= - -* To manually run Azure integration tests using packages from `ppa:ua-client/daily` provide the following environment vars: - -```sh -UACLIENT_BEHAVE_AZ_CLIENT_ID= UACLIENT_BEHAVE_AZ_CLIENT_SECRET= UACLIENT_BEHAVE_AZ_SUBSCRIPTION_ID= UACLIENT_BEHAVE_AZ_TENANT_ID= tox -e behave-azurepro-20.04 -``` - -* To manually run Azure integration tests with a specific Image Id provide the -following environment variable to launch your specific Image Id instead of building -a daily ubuntu-advantage-tools image. -```sh -UACLIENT_BEHAVE_REUSE_IMAGE=your-custom-image-id tox -e behave-azurepro-20.04 -``` - -### MOTD Messages - -Since ubuntu-advantage-tools is responsible for enabling ESM services, we advertise them on different -applications throughout the system, such as MOTD and apt commands like upgrade. - -To verify that the MOTD message is advertising the ESM packages, ensure that we have ESM source list -files in the system. If that is the case, please run the following commands to update the state of -MOTD and display the message: - -```sh -# Make sure ubuntu-advantage-tools version >= 27.0 -ua version -# Make apt aware of the ESM source files -sudo apt update -# Generates ubuntu-advantage-tools messages that should be delivered to MOTD -# This script is triggered by the systemd timer 4 times a day. To test it, we need -# to enforce that it was already executed. -sudo systemctl start ua-timer.service -# Force updating MOTD messages related to update-notifier -sudo rm /var/lib/ubuntu-advantage/jobs-status.json -sudo python3 /usr/lib/ubuntu-advantage/timer.py -# Update MOTD and display the message -run-parts /etc/update-motd.d/ -``` - -## Building - -Packages ubuntu-advantage-tools and ubuntu-advantage-pro are created from the -debian/control file in this repository. You can build the -packages the way you would normally build a Debian package: - - -```shell -dpkg-buildpackage -us -uc -``` - -**Note** It will build the packages with dependencies for the Ubuntu release on -which you are building, so it's best to build in a container or kvm for the -release you are targeting.
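One way to follow that note is to build inside an LXD container of the target release. This is a minimal sketch, assuming LXD is available; the container name and source path are illustrative, and the package's Build-Depends from `debian/control` still need to be installed in the container before the build will succeed:

```shell
# Build focal debs inside a focal container instead of on the host
lxc launch ubuntu-daily:focal ua-build-focal
# Copy the local source tree into the container (path is an example)
lxc file push --recursive ~/ubuntu-advantage-client ua-build-focal/root/
# Install basic build tooling; the package's own build dependencies must also be satisfied
lxc exec ua-build-focal -- sh -c "apt-get update && apt-get install -y dpkg-dev"
lxc exec ua-build-focal -- sh -c "cd /root/ubuntu-advantage-client && dpkg-buildpackage -us -uc"
```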
- -OR, if you want to build for a target release other than the release -you're on: - -### using sbuild -[configure sbuild](https://wiki.ubuntu.com/SimpleSbuild) and -use that for the build: - -Setup some chroots for sbuild with this script -```shell -bash ./tools/setup_sbuild.sh -``` - -```shell -debuild -S -sbuild --dist= ../ubuntu-advantage-tools_*.dsc -# emulating different architectures in sbuild-launchpad-chroot -sbuild-launchpad-chroot create --architecture="riscv64" "--name=focal-riscv64" "--series=focal -``` - -> Note: Every so often, it is recommended to update your chroots. -> ```bash -> # to update a single chroot -> sudo sbuild-launchpad-chroot update -n ua-xenial-amd64 -> # this script can be used to update all chroots -> sudo PATTERN=\* sh /usr/share/doc/sbuild/examples/sbuild-debian-developer-setup-update-all -> ``` - -### Setting up an lxc development container -```shell -lxc launch ubuntu-daily:trusty dev-t -c user.user-data="$(cat tools/ua-dev-cloud-config.yaml)" -lxc exec dev-t bash -``` - -### Setting up a kvm development environment with multipass -**Note:** There is a sample procedure documented in tools/multipass.md as well. -```shell -multipass launch daily:focal -n dev-f --cloud-init tools/ua-dev-cloud-config.yaml -multipass connect dev-f -``` - -## Code Formatting - -The `ubuntu-advantage-client` code base is formatted using -[black](https://github.com/psf/black), and imports are sorted with -[isort](https://github.com/PyCQA/isort). When making changes, you -should ensure that your code is blackened and isorted, or it will -be rejected by CI. -Formatting the whole codebase is as simple as running: - -```shell -black uaclient/ -isort uaclient/ -``` - -To make it easier to avoid committing incorrectly formatted code, this -repo includes configuration for [pre-commit](https://pre-commit.com/) -which will stop you from committing any code that isn't blackened. To -install the project's pre-commit hook, install `pre-commit` and run: - -```shell -pre-commit install -``` - -(To install `black` and `pre-commit` at the appropriate versions for -the project, you should install them via `dev-requirements.txt`.) - -## Daily Builds - -On Launchpad, there is a [daily build recipe](https://code.launchpad.net/~canonical-server/+recipe/ua-client-daily), -which will build the client and place it in the [ua-client-daily PPA](https://code.launchpad.net/~ua-client/+archive/ubuntu/daily). 
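To try those daily builds on a scratch machine or container, something like the following should work (a sketch; the PPA is the one linked above, and `add-apt-repository` may require the software-properties-common package):

```shell
sudo add-apt-repository ppa:ua-client/daily
sudo apt-get update
sudo apt-get install ubuntu-advantage-tools
ua version  # daily versions embed the build serial and git hash
```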
- -## Remastering custom golden images based on Ubuntu PRO - -Vendors who wish to provide custom images based on Ubuntu PRO images can -follow the procedure below: - -* Launch the Ubuntu PRO golden image -* Customize your golden image as you see fit -* If `ua status` shows attached, remove the UA artifacts to allow clean - auto-attach on subsequent cloned VM launches -```bash -sudo ua detach -sudo rm -rf /var/log/ubuntu-advantage.log # to remove credentials and tokens from logs -``` -* Remove `cloud-init` first boot artifacts so the cloned VM boot is seen as a first boot -```bash -sudo cloud-init clean --logs -sudo shutdown -h now -``` -* Use your cloud platform to clone or snapshot this VM as a golden image +## User Documentation + +### Tutorials + +* [Create a FIPS compliant Ubuntu Docker image](./docs/tutorials/create_a_fips_docker_image.md) + +### How To Guides + +* [How to Configure Proxies](./docs/howtoguides/configure_proxies.md) +* [How to Enable Ubuntu Advantage Services in a Dockerfile](./docs/howtoguides/enable_ua_in_dockerfile.md) +* [How to Create a custom Golden Image based on Ubuntu Pro](./docs/howtoguides/create_pro_golden_image.md) +* [How to Manually update MOTD and APT messages](./docs/howtoguides/update_motd_messages.md) + +### Reference + +* [Ubuntu Release and Architecture Support Matrix](./docs/references/support_matrix.md) + +### Explanation +* [What is the daemon for? (And how to disable it)](./docs/explanations/what_is_the_daemon.md) -## Releasing ubuntu-advantage-tools -see [RELEASES.md](RELEASES.md) +## Contributing +See [CONTRIBUTING.md](CONTRIBUTING.md) diff -Nru ubuntu-advantage-tools-27.8~16.04.1/RELEASES.md ubuntu-advantage-tools-27.9~16.04.1/RELEASES.md --- ubuntu-advantage-tools-27.8~16.04.1/RELEASES.md 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/RELEASES.md 1970-01-01 00:00:00.000000000 +0000 @@ -1,262 +0,0 @@ -# Ubuntu Advantage Client Releases - -## Supported Ubuntu Releases - -See the table under "Support Matrix for the client" in the [readme](./README.md). - -## Release versioning schemes - -Below are the versioning schemes used for publishing debs: - -| Build target | Version Format | -| --------------------------------------------------------------------------------- | ------------------------------------------ | -| [Daily PPA](https://code.launchpad.net/~canonical-server/+recipe/ua-client-daily) | `XX.YY-~g~ubuntu22.04.1` | -| Staging PPA | `XX.YY~22.04.1~rc1` | -| Stable PPA | `XX.YY~22.04.1~stableppa1` | -| Archive release | `XX.YY~22.04.1` | -| Archive bugfix release | `XX.YY.Z~22.04.1` | - -## Supported upgrade paths on same upstream version - -Regardless of source, the latest available "upstream version" (e.g. 27.4) will always be installed, because the upstream version comes first followed by a tilde in all version formats. - -This table demonstrates upgrade paths between sources for one particular upstream version. 
- -| Upgrade path | Version diff example | -| ------------------------------- | ----------------------------------------------------------------------- | -| Staging to Next Staging rev | `31.4~22.04.1~rc1` ➜ `31.4~22.04.1~rc2` | -| Staging to Stable | `31.4~22.04.1~rc2` ➜ `31.4~22.04.1~stableppa1` | -| Stable to Next Stable rev | `31.4~22.04.1~stableppa1` ➜ `31.4~22.04.1~stableppa2` | -| Stable to Archive | `31.4~22.04.1~stableppa2` ➜ `31.4~22.04.1` | -| LTS Archive to Next LTS Archive | `31.4~22.04.1` ➜ `31.4~24.04.1` | -| Archive to Daily | `31.4~24.04.1` ➜ `31.4-1500~g75fa134~ubuntu24.04.1` | -| Daily to Next Daily | `31.4-1500~g75fa134~ubuntu24.04.1` ➜ `31.4-1501~g3836375~ubuntu24.04.1` | - -## Process - - -### Background - -The release process for ubuntu-advantage-tools has three overarching steps/goals. - -1. Release to our team infrastructure. This includes Github and the `ua-client` PPAs. -2. Release to the latest ubuntu devel release. -3. Release to the supported ubuntu past releases via [SRU](https://wiki.ubuntu.com/StableReleaseUpdates) using the [ubuntu-advantage-tools specific SRU process](https://wiki.ubuntu.com/UbuntuAdvantageToolsUpdates). - -Generally speaking, these steps happen in order, but there is some overlap. Also we may backtrack if issues are found part way through the process. - -An average release should take somewhere between 10 and 14 calendar days if things go smoothly, starting at the decision to release and ending at the new version being available in all supported ubuntu releases. Note that it is not 2 weeks of full time work. Most of the time is spent waiting for review or sitting in proposed. - -### Prerequisites - -If this is your first time releasing ubuntu-advantage-tools, you'll need to do the following before getting started: - -* Add the team helper scripts to your PATH: [uss-tableflip](https://github.com/canonical/uss-tableflip). -* If you don't yet have a gpg key set up, follow the instructions - [here](https://help.launchpad.net/YourAccount/ImportingYourPGPKey) to create a key, - publish it to `hkp://keyserver.ubuntu.com`, and import it into Launchpad. -* Before you run `sbuild-it` for the first time, you'll need to set up a chroot for each Ubuntu release. - Run the following to set up chroots with dependencies pre-installed for each release: - ```bash - apt-get install sbuild-launchpad-chroot - bash ./tools/setup_sbuild.sh # This will give you usage information on how to call it with the correct parameters - ``` -* You must have launchpad already properly configured in your system in order to upload packages to the PPAs. Follow [this guide](https://help.launchpad.net/Packaging/PPA/Uploading) to get set up. - -### I. Preliminary/staging release to team infrastructure -1. Create a release PR - - a. Move the desired commits from our `main` branch onto the desired release branch - - * This step is currently not well defined. We currently are using `release-27` for all `27.X` releases and have been cherry-picking/rebasing all commits from `main` into this branch for a release. - - b Create a new entry in the `debian/changelog` file: - - * You can do that by running ` dch --newversion ` - * Remember to update the release from `UNRELEASED` to the ubuntu/devel release. Edit the version to look like: `27.2~21.10.1`, with the appropriate ua and ubuntu/devel version numbers. - * Populate `debian/changelog` with the commits you have cherry-picked - * You can do that by running `git log .. 
| log2dch` - * This will generate a list of commits that could be included in the changelog. - * You don't need to include all of the commits generated. Remember that the changelog should - be read by the user to understand the new features/modifications in the package. If you - think a commit will not add that much to the user experience, you can drop it from the - changelog - * To structure the changelog you can use the other entries as example. But we basically try to - keep this order: debian changes, new features/modifications, testing. Within each section, bullet points should be alphabetized. - - c. Create a PR on github into the release branch. Ask in the UA channel on mattermost for review. - - d. When reviewing the release PR, please use the following guidelines when reviewing the new changelog entry: - - * Is the version correctly updated ? We must ensure that the new version on the changelog is - correct and it also targets the latest Ubuntu release at the moment. - * Is the entry useful for the user ? The changelog entries should be user focused, meaning - that we should only add entries that we think users will care about (i.e. we don't need - entries when fixing a test, as this doesn't provide meaningful information to the user) - * Is this entry redundant ? Sometimes we may have changes that affect separate modules of the - code. We should have an entry only for the module that was most affected by it - * Is the changelog entry unique ? We need to verify that the changelog entry is not already - reflected in an earlier version of the changelog. If it is, we need not only to remove but double - check the process we are using to cherry-pick the commits - * Is this entry actually reflected on the code ? Sometimes, we can have changelog entries - that are not reflected in the code anymore. This can happen during development when we are - still unsure about the behavior of a feature or when we fix a bug that removes the code - that was added. We must verify each changelog entry that is added to be sure of their - presence in the product. - -2. After the release PR is merged, tag the head of the release branch with the version number, e.g. `27.1`. Push this tag to Github. - -3. Build the package for all Ubuntu releases and upload to `ppa:ua-client/staging` - - a. Clone the repository in a clean directory and switch to the release branch - * *WARNING* Build the package in a clean environment. The reason for that is because the package - will contain everything that it is present in the folder. If you are storing credentials or - other sensible development information in your folder, they will be uploaded too when we send - the package to the ppa. A clean environment is the safest way to perform this. - - b. Edit the changelog: - * List yourself as the author of this release. - * Edit the version number to look like: `27.2~20.04.1~rc1` (`~.~rc`) - * Edit the ubuntu release name. Start with the ubuntu/devel release (e.g. `impish`). - * `git commit -m "throwaway"` Do **not** push this commit! - - c. `build-package` - * This script will generate all the package artifacts in the parent directory as `../out`. - - d. `sbuild-it ../out/.dsc` - * If this succeeds move on. If this fails, debug and fix before continuing. - - e. Repeat 3.b through 3.d for all supported Ubuntu Releases - * PS: remember to also change the version number on the changelog. For example, suppose - the new version is `1.1~20.04.1~rc1`. If you want to test Bionic now, change it to - `1.1~18.04.1~rc1`. - - f. 
For each release, dput to the staging PPA: - * `dput ppa:ua-client/staging ../out/_source.changes` - * After each `dput` wait for the "Accepted" email from Launchpad before moving on. - -### II. Release to Ubuntu (devel and SRU) - -> Note: `impish` is used throughout as a reference to the current devel release. This will change. - -1. Prepare SRU Launchpad bugs. - - a. We do this even before a succesful merge into ubuntu/devel because the context added to these bugs is useful for the Server Team reviewer. - - b. Create a new bug on Launchpad for ubuntu-advantage-tools and use the format defined [here](https://wiki.ubuntu.com/UbuntuAdvantageToolsUpdates#SRU_Template) for the description. - * The title should be in the format `[SRU] ubuntu-advantage-tools (27.1 -> 27.2) Xenial, Bionic, Focal, Hirsute`, substituting version numbers and release names as necessary. - - c. For each Launchpad bug fixed by this release (which should all be referenced in our changelog), add the SRU template to the description and fill out each section. - * Leave the original description in the bug at the bottom under the header `[Original Description]`. - * For the testing steps, include steps to reproduce the bug. Then include instructions for adding `ppa:ua-client/staging`, and steps to verify the bug is no longer present. - -2. Set up the Merge Proposal (MP) for ubuntu/devel - - a. `git-ubuntu clone ubuntu-advantage-tools; cd ubuntu-advantage-tools` - - b. `git remote add upstream git@github.com:canonical/ubuntu-advantage-client.git` - - c. `git fetch upstream` - - d. `git rebase --onto pkg/ubuntu/devel ` - * e.g. `git rebase --onto pkg/ubuntu/devel 27.0.2 27.1` - * You may need to resolve conflicts, but hopefully these will be minimal. - * You'll end up in a detached state - - e. `git checkout -B upload--impish` - * This creates a new local branch name based on your detached branch - - f. Make sure the changelog version contains the release version in the name (For example, `27.1~21.10.1`) - - g. `git push upload--impish` - - h. On Launchpad, create a merge proposal for this version which targets `ubuntu/devel` - * For an example, see the [27.0.2 merge proposal](https://code.launchpad.net/~chad.smith/ubuntu/+source/ubuntu-advantage-tools/+git/ubuntu-advantage-tools/+merge/402459) - * Add 2 review slots for `canonical-server` and `canonical-server-core-reviewers` -3. Set up the MP for past Ubuntu releases based on the ubuntu/devel PR - - a. Create a PR for each target series based off your local `release-${UA_VERSION}-impish` branch: - * If you've followed the instructions precisely so far, you can just run `bash tools/create-lp-release-branches.sh`. - - b. Create merge proposals for each SRU target release @ `https://code.launchpad.net/~/ubuntu/+source/ubuntu-advantage-tools/+git/ubuntu-advantage-tools/`. Make sure each MP targets your `upload-${UA_VERSION}-impish` branch (the branch you are MP-ing into ubuntu/devel). - - c. Add both `canonical-server` and `canonical-server-core-reviewers` as review slots on each MP. - -4. Server Team Review - - a. Ask in ~Server for a review of your MPs. Include a link to the primary MP into ubuntu/devel and mention the other MPs are only changelog MPs for the SRUs into past releases. - - b. If they request changes, create a PR into the release branch on github and ask UAClient team for review. After that is merged, cherry-pick the commit into your `upload--` branch and push to launchpad. 
You'll also need to rebase the other `upload--` branches and force push them to launchpad. Then notify the Server Team member that you have addressed their requests. - * Some issues may just be filed for addressing in the future if they are not urgent or pertinent to this release. - * Unless the changes are very minor, or only testing related, you should upload a new release candidate version to `ppa:ua-client/staging` as descibed in I.3. - * After the release is finished, any commits that were merged directly into the release branch in this way should be brought back into `main` via a single PR. - - c. Once review is complete and approved, confirm that Ubuntu Server approver will be tagging the PR with the appropriate `upload/` tag so git-ubuntu will import rich commit history. - - d. At this point the Server Team member should **not** upload the version to the devel release. - * If they do, then any changes to the code after this point will require a bump in the patch version of the release. - - e. Ask Ubuntu Server approver if they also have upload rights to the proposed queue. If they do, request that they upload ubuntu-advantage-tools for all releases. If they do not, ask in ~Server channel for a Ubuntu Server team member with upload rights for an upload review of the MP for the proposed queue. - - f. Once upload review is complete and approved, confirm that Ubuntu Server approver will upload ua-tools via dput to the `-proposed` queue. - - g. Check the [-proposed release queue](https://launchpad.net/ubuntu/xenial/+queue?queue_state=1&queue_text=ubuntu-advantage-tools) for presence of ua-tools in unapproved state for each supported release. Note: libera chat #ubuntu-release IRC channel has a bot that reports queued uploads of any package in a message like "Unapproved: ubuntu-advantage-tools .. version". - -5. SRU Review - - a. Once unapproved ua-tools package is listed in the pending queue for each target release, [ping appropriate daily SRU vanguard for review of ua-tools into -proposed](https://wiki.ubuntu.com/StableReleaseUpdates#Publishing)via the libera.chat #ubuntu-release channel - - b. As soon as the SRU vanguard approves the packages, a bot in #ubuntu-release will announce that ubuntu-advantage-tools is accepted into the applicable -proposed pockets, or the [Xenial -proposed release rejection queue](https://launchpad.net/ubuntu/xenial/+queue?queue_state=4&queue_text=ubuntu-advantage-tools) will contain a reason for rejections. Double check the SRU process bug for any actionable review feedback. - - c. Once accepted into `-proposed` by an SRU vanguard [ubuntu-advantage-tools shows up in the pending_sru page](https://people.canonical.com/~ubuntu-archive/pending-sru.html), check `rmadison ubuntu-advantage-tools | grep -proposed` to see if the upload exists in -proposed yet. - - d. Confirm availability in -proposed pocket via - ```bash - cat > setup_proposed.sh <` and saving the output. - - g. After all tests have passed, tarball all of the output files and upload them to the SRU bug with a message that looks like this: - ``` - We have run the full ubuntu-advantage-tools integration test suite against the version in -proposed. The results are attached. All tests passed (or call out specific explained failures). - - You can verify the correct version was used by checking the output of the first test in each file, which prints the version number. - - I am marking the verification done for this SRU. 
- ``` - Change the tags on the bug from `verification-needed` to `verification-done` (including the verification tags for each release). - - h. For any other related Launchpad bugs that are fixed in this release. Perform the verification steps necessary for those bugs and mark them `verification-done` as needed. This will likely involve following the test steps, but instead of adding the staging PPA, enabling -proposed. - - i. Once all SRU bugs are tagged as `verification*-done`, all SRU-bugs should be listed as green in [the pending_sru page](https://people.canonical.com/~ubuntu-archive/pending-sru.html). - - j. After the pending sru page says that ubuntu-advantage-tools has been in proposed for 7 days, it is now time to ping the [current SRU vanguard](https://wiki.ubuntu.com/StableReleaseUpdates#Publishing) for acceptance of ubuntu-advantage-tools into -updates. - - k. Ping the Ubuntu Server team member who approved the version in step `II.4` to now upload to the devel release. - - l. Check `rmadison ubuntu-advantage-tools` for updated version in devel release - - m. Confirm availability in -updates pocket via `lxc launch ubuntu-daily: dev-i; lxc exec dev-i -- apt update; lxc exec dev-i -- apt-cache policy ubuntu-advantage-tools` - -### III. Github Repository Post-release Update - -1. Ensure the version tag is correct on github. The `version` git tag should point to the commit that was released as that version to ubuntu -updates. If changes were made in response to feedback during the release process, the tag may have to be moved. -2. Bring in any changes that were made to the release branch into `main` via PR (e.g. Changelog edits). - -## Cloud Images Update - -After the release process is finished, CPC must be informed. They will be responsible to update the cloud images using the package from the pockets it was released to (whether it is the `stable` PPA or the`-updates` pocket). 
diff -Nru ubuntu-advantage-tools-27.8~16.04.1/setup.py ubuntu-advantage-tools-27.9~16.04.1/setup.py --- ubuntu-advantage-tools-27.8~16.04.1/setup.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/setup.py 2022-05-18 19:44:15.000000000 +0000 @@ -15,8 +15,8 @@ def split_link_deps(reqs_filename): """Read requirements reqs_filename and split into pkgs and links - :return: list of package defs and link defs - """ + :return: list of package defs and link defs + """ pkgs = [] links = [] for line in open(reqs_filename).readlines(): diff -Nru ubuntu-advantage-tools-27.8~16.04.1/sru/release-27.9/cleanup_cloud_id_shim_ppa.sh ubuntu-advantage-tools-27.9~16.04.1/sru/release-27.9/cleanup_cloud_id_shim_ppa.sh --- ubuntu-advantage-tools-27.8~16.04.1/sru/release-27.9/cleanup_cloud_id_shim_ppa.sh 1970-01-01 00:00:00.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/sru/release-27.9/cleanup_cloud_id_shim_ppa.sh 2022-05-18 19:44:15.000000000 +0000 @@ -0,0 +1,64 @@ +ppa=$1 +series=xenial + +set -e + +GREEN="\e[32m" +RED="\e[31m" +BLUE="\e[36m" +END_COLOR="\e[0m" + +function cleanup { + lxc delete test --force +} + +function on_err { + echo -e "${RED}Test Failed${END_COLOR}" + cleanup + exit 1 +} + +trap on_err ERR + +function print_and_run_cmd { + echo -e "${BLUE}Running:${END_COLOR}" "$@" + echo -e "${BLUE}Output:${END_COLOR}" + lxc exec test -- sh -c "$@" + echo +} + +function explanatory_message { + echo -e "${BLUE}$@${END_COLOR}" +} + +explanatory_message "Starting $series container and updating ubuntu-advantage-tools" +lxc launch ubuntu-daily:$series test >/dev/null 2>&1 +sleep 10 + +lxc exec test -- add-apt-repository $ppa >/dev/null +lxc exec test -- apt-get update >/dev/null +lxc exec test -- apt-get install locate >/dev/null +lxc exec test -- apt-get dist-upgrade -y >/dev/null +print_and_run_cmd "ua version" + +explanatory_message "Note where all cloud-id-shim artifacts are before upgrade" +print_and_run_cmd "updatedb" +print_and_run_cmd "locate ubuntu-advantage-cloud-id-shim" + +explanatory_message "upgrade to bionic" +lxc exec test -- sh -c "cat > /etc/update-manager/release-upgrades.d/ua-test.cfg << EOF +[Sources] +AllowThirdParty=yes +EOF" +lxc exec test -- do-release-upgrade --frontend DistUpgradeViewNonInteractive >/dev/null + +print_and_run_cmd "ua version" + +explanatory_message "cloud-id-shim artifacts should be gone" +print_and_run_cmd "updatedb" +print_and_run_cmd "locate ubuntu-advantage-cloud-id-shim || true" +result=$(lxc exec test -- locate ubuntu-advantage-cloud-id-shim || true) +test -z "$result" + +echo -e "${GREEN}Test Passed${END_COLOR}" +cleanup diff -Nru ubuntu-advantage-tools-27.8~16.04.1/sru/release-27.9/cleanup_cloud_id_shim.sh ubuntu-advantage-tools-27.9~16.04.1/sru/release-27.9/cleanup_cloud_id_shim.sh --- ubuntu-advantage-tools-27.8~16.04.1/sru/release-27.9/cleanup_cloud_id_shim.sh 1970-01-01 00:00:00.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/sru/release-27.9/cleanup_cloud_id_shim.sh 2022-05-18 19:44:15.000000000 +0000 @@ -0,0 +1,62 @@ +x_deb=$1 +b_deb=$2 +series=bionic + +set -e + +GREEN="\e[32m" +RED="\e[31m" +BLUE="\e[36m" +END_COLOR="\e[0m" + +function cleanup { + lxc delete test --force +} + +function on_err { + echo -e "${RED}Test Failed${END_COLOR}" + cleanup + exit 1 +} + +trap on_err ERR + +function print_and_run_cmd { + echo -e "${BLUE}Running:${END_COLOR}" "$@" + echo -e "${BLUE}Output:${END_COLOR}" + lxc exec test -- sh -c "$@" + echo +} + +function explanatory_message { + echo -e "${BLUE}$@${END_COLOR}" +} + 
+explanatory_message "Starting $series container and updating ubuntu-advantage-tools" +lxc launch ubuntu-daily:$series test >/dev/null 2>&1 +sleep 10 + +lxc exec test -- apt-get update >/dev/null +lxc exec test -- apt-get install -y ubuntu-advantage-tools locate >/dev/null +explanatory_message "installing xenial version of ubuntu-advantage-tools from local copy" +lxc file push $x_deb test/tmp/uax.deb > /dev/null +print_and_run_cmd "dpkg -i /tmp/uax.deb" +print_and_run_cmd "ua version" + +explanatory_message "Note where all cloud-id-shim artifacts are before upgrade" +print_and_run_cmd "updatedb" +print_and_run_cmd "locate ubuntu-advantage-cloud-id-shim" + +explanatory_message "installing bionic version of ubuntu-advantage-tools from local copy" +lxc file push $b_deb test/tmp/uab.deb > /dev/null +print_and_run_cmd "dpkg -i /tmp/uab.deb" +print_and_run_cmd "ua version" + +explanatory_message "cloud-id-shim artifacts should be gone" +print_and_run_cmd "updatedb" +print_and_run_cmd "locate ubuntu-advantage-cloud-id-shim || true" +result=$(lxc exec test -- locate ubuntu-advantage-cloud-id-shim || true) +test -z "$result" + +echo -e "${GREEN}Test Passed${END_COLOR}" +cleanup diff -Nru ubuntu-advantage-tools-27.8~16.04.1/sru/release-27.9/cleanup_failed_old_license_check_timer.sh ubuntu-advantage-tools-27.9~16.04.1/sru/release-27.9/cleanup_failed_old_license_check_timer.sh --- ubuntu-advantage-tools-27.8~16.04.1/sru/release-27.9/cleanup_failed_old_license_check_timer.sh 1970-01-01 00:00:00.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/sru/release-27.9/cleanup_failed_old_license_check_timer.sh 2022-05-18 19:44:15.000000000 +0000 @@ -0,0 +1,57 @@ +series=$1 +deb=$2 + +set -e + +GREEN="\e[32m" +RED="\e[31m" +BLUE="\e[36m" +END_COLOR="\e[0m" + +function cleanup { + lxc delete test --force +} + +function on_err { + echo -e "${RED}Test Failed${END_COLOR}" + cleanup + exit 1 +} + +trap on_err ERR + +function print_and_run_cmd { + echo -e "${BLUE}Running:${END_COLOR}" "$@" + echo -e "${BLUE}Output:${END_COLOR}" + lxc exec test -- sh -c "$@" + echo +} + +function explanatory_message { + echo -e "${BLUE}$@${END_COLOR}" +} + +explanatory_message "Starting $series container and updating ubuntu-advantage-tools" +lxc launch ubuntu-daily:$series test >/dev/null 2>&1 +sleep 10 + +lxc exec test -- apt-get update >/dev/null +lxc exec test -- apt-get install -y ubuntu-advantage-tools locate >/dev/null +print_and_run_cmd "ua version" +explanatory_message "Start the timer to make sure its state is fixed on upgrade" +print_and_run_cmd "systemctl start ua-license-check.timer" +print_and_run_cmd "systemctl status ua-license-check.timer" + +explanatory_message "installing new version of ubuntu-advantage-tools from local copy" +lxc file push $deb test/tmp/ua.deb > /dev/null +print_and_run_cmd "dpkg -i /tmp/ua.deb" +print_and_run_cmd "ua version" + +explanatory_message "systemd should not list the timer as failed" +print_and_run_cmd "systemctl status ua-license-check.timer || true" +print_and_run_cmd "systemctl --no-pager | grep ua-license-check || true" +result=$(lxc exec test -- sh -c "systemctl --no-pager | grep ua-license-check.timer" || true) +echo "$result" | grep -qv "ua-license-check.timer\s\+not-found\s\+failed" + +echo -e "${GREEN}Test Passed${END_COLOR}" +cleanup diff -Nru ubuntu-advantage-tools-27.8~16.04.1/sru/release-27.9/gcp_auto_attach_long_poll.sh ubuntu-advantage-tools-27.9~16.04.1/sru/release-27.9/gcp_auto_attach_long_poll.sh --- 
ubuntu-advantage-tools-27.8~16.04.1/sru/release-27.9/gcp_auto_attach_long_poll.sh 1970-01-01 00:00:00.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/sru/release-27.9/gcp_auto_attach_long_poll.sh 2022-05-18 19:44:15.000000000 +0000 @@ -0,0 +1,80 @@ +#!/bin/sh +ua_deb=$1 +ZONE="us-east1-b" +INSTANCE_NAME="test-auto-attach" +INSTANCE_TYPE="n1-standard-1" +DISK_NAME="persistent-disk-0" + +set -e + +GREEN="\e[32m" +RED="\e[31m" +BLUE="\e[36m" +END_COLOR="\e[0m" + +function cleanup { + gcloud compute ssh $INSTANCE_NAME -- "sudo ua detach --assume-yes || true" + gcloud compute instances delete $INSTANCE_NAME +} + +function on_err { + echo -e "${RED}Test Failed${END_COLOR}" + cleanup + exit 1 +} + +trap on_err ERR + +function print_and_run_cmd { + echo -e "${BLUE}Running:${END_COLOR}" "$@" + echo -e "${BLUE}Output:${END_COLOR}" + gcloud compute ssh $INSTANCE_NAME -- "sh -c \"$@\"" + echo +} + +function explanatory_message { + echo -e "${BLUE}$@${END_COLOR}" +} + +explanatory_message "Starting gcloud instance" +gcloud compute instances create $INSTANCE_NAME \ + --image="ubuntu-2004-focal-v20220404" \ + --image-project="ubuntu-os-cloud" \ + --machine-type=$INSTANCE_TYPE \ + --zone=$ZONE +sleep 60 + +explanatory_message "Installing new version of ubuntu-advantage-tools from local copy" +gcloud compute scp $ua_deb $INSTANCE_NAME:/tmp/ubuntu-advantage-tools.deb +gcloud compute ssh $INSTANCE_NAME -- "sudo apt update" +gcloud compute ssh $INSTANCE_NAME -- "sudo apt install ubuntu-advantage-tools -y" +print_and_run_cmd "sudo dpkg -i /tmp/ubuntu-advantage-tools.deb" + +explanatory_message "skip initial license check" +print_and_run_cmd "sudo sed -zi \\\"s/cloud.is_pro_license_present(\n wait_for_change=False\n )/False/\\\" /usr/lib/python3/dist-packages/uaclient/daemon.py" + +explanatory_message "turn on polling in config file" +print_and_run_cmd "sudo sh -c \\\"printf \\\\\\\" poll_for_pro_license: true\\\\\\\" >> /etc/ubuntu-advantage/uaclient.conf\\\"" + +explanatory_message "change won't happen while daemon is running, so set short timeout to simulate the long poll returning" +print_and_run_cmd "sudo sed -i \\\"s/wait_for_change=true/wait_for_change=true\&timeout_sec=5/\\\" /usr/lib/python3/dist-packages/uaclient/clouds/gcp.py" + +explanatory_message "Checking the status and logs beforehand" +print_and_run_cmd "sudo ua status --wait" +print_and_run_cmd "sudo cat /var/log/ubuntu-advantage-daemon.log" +gcloud compute ssh $INSTANCE_NAME -- "sudo truncate -s 0 /var/log/ubuntu-advantage-daemon.log" + +explanatory_message "Stopping the machine, adding license, restarting..." 
+gcloud compute instances stop $INSTANCE_NAME +gcloud beta compute disks update $INSTANCE_NAME --zone=$ZONE --update-user-licenses="https://www.googleapis.com/compute/v1/projects/ubuntu-os-pro-cloud/global/licenses/ubuntu-pro-2004-lts" +gcloud compute instances start $INSTANCE_NAME +sleep 60 + +explanatory_message "Now with the license, it will succeed auto_attaching" +print_and_run_cmd "sudo ua status --wait" +print_and_run_cmd "sudo cat /var/log/ubuntu-advantage-daemon.log" +result=$(gcloud compute ssh $INSTANCE_NAME -- "sudo ua status --format json") +echo $result | jq -r ".attached" | grep "true" + +echo -e "${GREEN}Test Passed${END_COLOR}" +cleanup diff -Nru ubuntu-advantage-tools-27.8~16.04.1/sru/release-27.9/gcp_auto_attach_on_boot.sh ubuntu-advantage-tools-27.9~16.04.1/sru/release-27.9/gcp_auto_attach_on_boot.sh --- ubuntu-advantage-tools-27.8~16.04.1/sru/release-27.9/gcp_auto_attach_on_boot.sh 1970-01-01 00:00:00.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/sru/release-27.9/gcp_auto_attach_on_boot.sh 2022-05-18 19:44:15.000000000 +0000 @@ -0,0 +1,72 @@ +#!/bin/sh +ua_deb=$1 +ZONE="us-east1-b" +INSTANCE_NAME="test-auto-attach" +INSTANCE_TYPE="n1-standard-1" +DISK_NAME="persistent-disk-0" + +set -e + +GREEN="\e[32m" +RED="\e[31m" +BLUE="\e[36m" +END_COLOR="\e[0m" + +function cleanup { + gcloud compute ssh $INSTANCE_NAME -- "sudo ua detach --assume-yes || true" + gcloud compute instances delete $INSTANCE_NAME +} + +function on_err { + echo -e "${RED}Test Failed${END_COLOR}" + cleanup + exit 1 +} + +trap on_err ERR + +function print_and_run_cmd { + echo -e "${BLUE}Running:${END_COLOR}" "$@" + echo -e "${BLUE}Output:${END_COLOR}" + gcloud compute ssh $INSTANCE_NAME -- "sh -c \"$@\"" + echo +} + +function explanatory_message { + echo -e "${BLUE}$@${END_COLOR}" +} + +explanatory_message "Starting gcloud instance" +gcloud compute instances create $INSTANCE_NAME \ + --image="ubuntu-2004-focal-v20220404" \ + --image-project="ubuntu-os-cloud" \ + --machine-type=$INSTANCE_TYPE \ + --zone=$ZONE +sleep 60 + + +explanatory_message "Installing new version of ubuntu-advantage-tools from local copy" +gcloud compute scp $ua_deb $INSTANCE_NAME:/tmp/ubuntu-advantage-tools.deb +gcloud compute ssh $INSTANCE_NAME -- "sudo apt update" +gcloud compute ssh $INSTANCE_NAME -- "sudo apt install ubuntu-advantage-tools jq -y" +print_and_run_cmd "sudo dpkg -i /tmp/ubuntu-advantage-tools.deb" + +explanatory_message "Checking the status and logs beforehand" +print_and_run_cmd "sudo ua status --wait" +print_and_run_cmd "sudo cat /var/log/ubuntu-advantage-daemon.log" +print_and_run_cmd "sudo truncate -s 0 /var/log/ubuntu-advantage-daemon.log" + +explanatory_message "Stopping the machine, adding license, restarting..." 
+gcloud compute instances stop $INSTANCE_NAME +gcloud beta compute disks update $INSTANCE_NAME --zone=$ZONE --update-user-licenses="https://www.googleapis.com/compute/v1/projects/ubuntu-os-pro-cloud/global/licenses/ubuntu-pro-2004-lts" +gcloud compute instances start $INSTANCE_NAME +sleep 30 + +explanatory_message "Now with the license, it will succeed auto_attaching on boot" +print_and_run_cmd "sudo ua status --wait" +print_and_run_cmd "sudo cat /var/log/ubuntu-advantage-daemon.log" +result=$(gcloud compute ssh $INSTANCE_NAME -- "sudo ua status --format json") +echo $result | jq -r ".attached" | grep "true" + +echo -e "${GREEN}Test Passed${END_COLOR}" +cleanup diff -Nru ubuntu-advantage-tools-27.8~16.04.1/sru/release-27.9/remove_old_license_check_timer.sh ubuntu-advantage-tools-27.9~16.04.1/sru/release-27.9/remove_old_license_check_timer.sh --- ubuntu-advantage-tools-27.8~16.04.1/sru/release-27.9/remove_old_license_check_timer.sh 1970-01-01 00:00:00.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/sru/release-27.9/remove_old_license_check_timer.sh 2022-05-18 19:44:15.000000000 +0000 @@ -0,0 +1,57 @@ +series=$1 +deb=$2 + +set -e + +GREEN="\e[32m" +RED="\e[31m" +BLUE="\e[36m" +END_COLOR="\e[0m" + +function cleanup { + lxc delete test --force +} + +function on_err { + echo -e "${RED}Test Failed${END_COLOR}" + cleanup + exit 1 +} + +trap on_err ERR + +function print_and_run_cmd { + echo -e "${BLUE}Running:${END_COLOR}" "$@" + echo -e "${BLUE}Output:${END_COLOR}" + lxc exec test -- sh -c "$@" + echo +} + +function explanatory_message { + echo -e "${BLUE}$@${END_COLOR}" +} + +explanatory_message "Starting $series container and updating ubuntu-advantage-tools" +lxc launch ubuntu-daily:$series test >/dev/null 2>&1 +sleep 10 + +lxc exec test -- apt-get update >/dev/null +lxc exec test -- apt-get install -y ubuntu-advantage-tools locate >/dev/null +print_and_run_cmd "ua version" +explanatory_message "Note where all license-check artifacts are before upgrade" +print_and_run_cmd "updatedb" +print_and_run_cmd "locate ua-license-check" + +explanatory_message "installing new version of ubuntu-advantage-tools from local copy" +lxc file push $deb test/tmp/ua.deb > /dev/null +print_and_run_cmd "dpkg -i /tmp/ua.deb" +print_and_run_cmd "ua version" + +explanatory_message "license-check artifacts should be gone" +print_and_run_cmd "updatedb" +print_and_run_cmd "locate ua-license-check || true" +result=$(lxc exec test -- locate ua-license-check || true) +test -z "$result" + +echo -e "${GREEN}Test Passed${END_COLOR}" +cleanup diff -Nru ubuntu-advantage-tools-27.8~16.04.1/sru/release-27.9/remove_old_marker_file.sh ubuntu-advantage-tools-27.9~16.04.1/sru/release-27.9/remove_old_marker_file.sh --- ubuntu-advantage-tools-27.8~16.04.1/sru/release-27.9/remove_old_marker_file.sh 1970-01-01 00:00:00.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/sru/release-27.9/remove_old_marker_file.sh 2022-05-18 19:44:15.000000000 +0000 @@ -0,0 +1,58 @@ +series=$1 +deb=$2 + +set -e + +GREEN="\e[32m" +RED="\e[31m" +BLUE="\e[36m" +END_COLOR="\e[0m" + +function cleanup { + lxc delete test --force +} + +function on_err { + echo -e "${RED}Test Failed${END_COLOR}" + cleanup + exit 1 +} + +trap on_err ERR + +function print_and_run_cmd { + echo -e "${BLUE}Running:${END_COLOR}" "$@" + echo -e "${BLUE}Output:${END_COLOR}" + lxc exec test -- sh -c "$@" + echo +} + +function explanatory_message { + echo -e "${BLUE}$@${END_COLOR}" +} + + +explanatory_message "Starting $series container and updating ubuntu-advantage-tools" +lxc 
launch ubuntu-daily:$series test >/dev/null 2>&1 +sleep 10 + +lxc exec test -- apt-get update >/dev/null +lxc exec test -- apt-get install -y ubuntu-advantage-tools >/dev/null +print_and_run_cmd "ua version" +explanatory_message "manually creating the marker file" +print_and_run_cmd "touch /var/lib/ubuntu-advantage/marker-license-check" +print_and_run_cmd "ls /var/lib/ubuntu-advantage" + +explanatory_message "installing new version of ubuntu-advantage-tools from local copy" +lxc file push $deb test/tmp/ua.deb > /dev/null +print_and_run_cmd "dpkg -i /tmp/ua.deb" +print_and_run_cmd "ua version" + + +explanatory_message "make sure the marker file is not longer there" +print_and_run_cmd "ls /var/lib/ubuntu-advantage" +result=$(lxc exec test -- ls /var/lib/ubuntu-advantage) +echo "$result" | grep -v marker-license-check + +echo -e "${GREEN}Test Passed${END_COLOR}" +cleanup diff -Nru ubuntu-advantage-tools-27.8~16.04.1/systemd/ua-license-check.path ubuntu-advantage-tools-27.9~16.04.1/systemd/ua-license-check.path --- ubuntu-advantage-tools-27.8~16.04.1/systemd/ua-license-check.path 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/systemd/ua-license-check.path 1970-01-01 00:00:00.000000000 +0000 @@ -1,14 +0,0 @@ -# The marker file used here is only created when on a GCP Ubuntu LTS instance -# that is not already using Ubuntu Advantage services. -# This path triggers a timer that will periodically poll the metadata for a GCP -# instance. If the user has added an Ubuntu Pro license to the instance, it will -# activate Ubuntu Advantage services. -[Unit] -Description=Trigger to poll for Ubuntu Pro licenses (Only enabled on GCP LTS non-pro) - -[Path] -PathExists=/var/lib/ubuntu-advantage/marker-license-check -Unit=ua-license-check.timer - -[Install] -WantedBy=multi-user.target diff -Nru ubuntu-advantage-tools-27.8~16.04.1/systemd/ua-license-check.service ubuntu-advantage-tools-27.9~16.04.1/systemd/ua-license-check.service --- ubuntu-advantage-tools-27.8~16.04.1/systemd/ua-license-check.service 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/systemd/ua-license-check.service 1970-01-01 00:00:00.000000000 +0000 @@ -1,11 +0,0 @@ -# This service is only activated when on a GCP Ubuntu LTS instance that is not -# already using Ubuntu Advantage services (via ua-license-check.timer). -# This service polls the metadata for a GCP instance. If the user has added an -# Ubuntu Pro license to the instance, it will activate Ubuntu Advantage services. -[Unit] -Description=Poll for Ubuntu Pro licenses (Only enabled on GCP LTS non-pro) -After=network.target network-online.target systemd-networkd.service ua-auto-attach.service - -[Service] -Type=oneshot -ExecStart=/usr/bin/python3 /usr/lib/ubuntu-advantage/license_check.py diff -Nru ubuntu-advantage-tools-27.8~16.04.1/systemd/ua-license-check.timer ubuntu-advantage-tools-27.9~16.04.1/systemd/ua-license-check.timer --- ubuntu-advantage-tools-27.8~16.04.1/systemd/ua-license-check.timer 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/systemd/ua-license-check.timer 1970-01-01 00:00:00.000000000 +0000 @@ -1,12 +0,0 @@ -# This timer is only activated when on a GCP Ubuntu LTS instance that is not -# already using Ubuntu Advantage services (via ua-license-check.path). -# This timer triggers a service that will poll the metadata for a GCP instance. -# If the user has added an Ubuntu Pro license to the instance, it will -# activate Ubuntu Advantage services. 
-[Unit] -Description=Timer to poll for Ubuntu Pro licenses (Only enabled on GCP LTS non-pro) - -[Timer] -OnCalendar=*:0/5 -RandomizedDelaySec=5min -OnStartupSec=2min diff -Nru ubuntu-advantage-tools-27.8~16.04.1/systemd/ubuntu-advantage-cloud-id-shim.service ubuntu-advantage-tools-27.9~16.04.1/systemd/ubuntu-advantage-cloud-id-shim.service --- ubuntu-advantage-tools-27.8~16.04.1/systemd/ubuntu-advantage-cloud-id-shim.service 1970-01-01 00:00:00.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/systemd/ubuntu-advantage-cloud-id-shim.service 2022-05-18 19:44:15.000000000 +0000 @@ -0,0 +1,21 @@ +# This service exists to create a cloud-id-x file if it doesn't already exist. +# This is only activated on Xenial systems and will only run if cloud-init is +# less than version 22.1 +# Creating the cloud-id-x file allows ubuntu-advantage.service to activate on +# the correct platforms. + +[Unit] +Description=cloud-id shim +After=cloud-config.service +Before=ubuntu-advantage.service +# Only run if cloud-init is installed and ran +ConditionPathExists=/run/cloud-init/instance-data.json +# Only run if cloud-init didn't create the cloud-id file +ConditionPathExists=!/run/cloud-init/cloud-id + +[Service] +Type=oneshot +ExecStart=/bin/sh /usr/lib/ubuntu-advantage/cloud-id-shim.sh + +[Install] +WantedBy=multi-user.target diff -Nru ubuntu-advantage-tools-27.8~16.04.1/systemd/ubuntu-advantage.service ubuntu-advantage-tools-27.9~16.04.1/systemd/ubuntu-advantage.service --- ubuntu-advantage-tools-27.8~16.04.1/systemd/ubuntu-advantage.service 1970-01-01 00:00:00.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/systemd/ubuntu-advantage.service 2022-05-18 19:44:15.000000000 +0000 @@ -0,0 +1,26 @@ +# This service only runs on GCP to enable auto-attaching to Ubuntu Advantage +# services when an Ubuntu Pro license is added to a GCP machine. 
+# If you are uninterested in the (free for personal use) Ubuntu Advantage +# services, including security updates after standard EOL and kernel patching +# without rebooting, then you can safely stop and disable this service: +# sudo systemctl stop ubuntu-advantage.service +# sudo systemctl disable ubuntu-advantage.service + +[Unit] +Description=Ubuntu Advantage GCP Auto Attach Daemon +Documentation=man:ubuntu-advantage https://ubuntu.com/advantage +After=network.target network-online.target systemd-networkd.service ua-auto-attach.service cloud-config.service ubuntu-advantage-cloud-id-shim.service + +# Only run if not already attached +ConditionPathExists=!/var/lib/ubuntu-advantage/private/machine-token.json +# Only run on GCP +ConditionPathExists=/run/cloud-init/cloud-id-gce + +[Service] +Type=notify +NotifyAccess=main +ExecStart=/usr/bin/python3 /usr/lib/ubuntu-advantage/daemon.py +WorkingDirectory=/var/lib/ubuntu-advantage/ + +[Install] +WantedBy=multi-user.target diff -Nru ubuntu-advantage-tools-27.8~16.04.1/tools/constraints-mypy.txt ubuntu-advantage-tools-27.9~16.04.1/tools/constraints-mypy.txt --- ubuntu-advantage-tools-27.8~16.04.1/tools/constraints-mypy.txt 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/tools/constraints-mypy.txt 2022-05-18 19:44:15.000000000 +0000 @@ -1,4 +1,5 @@ mypy +pyparsing==3.0.7 pytest==6.1.2 importlib-metadata==3.3.0 packaging==20.9 diff -Nru ubuntu-advantage-tools-27.8~16.04.1/tools/refresh-gcp-pro-ids.py ubuntu-advantage-tools-27.9~16.04.1/tools/refresh-gcp-pro-ids.py --- ubuntu-advantage-tools-27.8~16.04.1/tools/refresh-gcp-pro-ids.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/tools/refresh-gcp-pro-ids.py 2022-05-18 19:44:15.000000000 +0000 @@ -42,12 +42,16 @@ if not image["name"].startswith("ubuntu-pro"): continue - m = re.match(r"^ubuntu-pro-\d+-(?P\w+)-v\d+", image["name"]) + m = re.match( + r"^ubuntu-pro-(fips-)?\d+-(?P\w+)-v\d+", image["name"] + ) if not m: print("Skipping unexpected image name: ", image["name"]) continue elif m.group("release") in SUPPORTED_SERIES: release = m.group("release") + if "ubuntu-pro-fips" in image["name"]: + release = release + "-fips" series_bucket[release].append(image["name"]) for series, images in series_bucket.items(): diff -Nru ubuntu-advantage-tools-27.8~16.04.1/tools/refresh-keyrings.sh ubuntu-advantage-tools-27.9~16.04.1/tools/refresh-keyrings.sh --- ubuntu-advantage-tools-27.8~16.04.1/tools/refresh-keyrings.sh 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/tools/refresh-keyrings.sh 2022-05-18 19:44:15.000000000 +0000 @@ -9,9 +9,8 @@ # # N.B. This will rename any existing keyrings with the suffix .old. -# NOTE: If replacing keyrings on services that are intended for trusty, the -# keyrings MUST BE pulled on a trusty machine to ensure compatibility with -# trusty gpg tooling. +# NOTE: The keyrings MUST be pulled on a machine running the lowest-supported +# LTS to ensure compatibility with gpg tooling on the lowest-supported LTS. 
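Taken together, the unit changes above drop the ua-license-check path/timer/service trio in favour of the single ubuntu-advantage.service daemon. As a hedged sketch (unit names and condition paths are taken from the unit files above; exact systemctl output varies by release), this is how the swap can be checked on an upgraded GCP instance and how the daemon can be opted out of:

```bash
# The old polling units should be gone after upgrading to 27.9
systemctl list-unit-files 'ua-license-check*' 'ubuntu-advantage.service'

# The daemon only runs when both unit conditions hold:
#   - the machine is not attached (no private/machine-token.json)
#   - cloud-init identified the platform as GCE
ls /var/lib/ubuntu-advantage/private/machine-token.json 2>/dev/null
ls /run/cloud-init/cloud-id-gce 2>/dev/null

# Opting out, as documented in the unit file itself:
sudo systemctl stop ubuntu-advantage.service
sudo systemctl disable ubuntu-advantage.service
```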
tmp_dir=$(mktemp -d -t ci-XXXXXXXXXX) diff -Nru ubuntu-advantage-tools-27.8~16.04.1/tools/run-integration-tests.py ubuntu-advantage-tools-27.9~16.04.1/tools/run-integration-tests.py --- ubuntu-advantage-tools-27.8~16.04.1/tools/run-integration-tests.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/tools/run-integration-tests.py 2022-05-18 19:44:15.000000000 +0000 @@ -25,12 +25,15 @@ PLATFORM_SERIES_TESTS = { "azuregeneric": ["xenial", "bionic", "focal"], "azurepro": ["xenial", "bionic", "focal"], + "azurepro-fips": ["xenial", "bionic", "focal"], "awsgeneric": ["xenial", "bionic", "focal"], "awspro": ["xenial", "bionic", "focal"], + "awspro-fips": ["xenial", "bionic", "focal"], + "docker": ["focal"], "gcpgeneric": ["xenial", "bionic", "focal", "impish", "jammy"], "gcppro": ["xenial", "bionic", "focal"], - "vm": ["xenial", "bionic", "focal"], "lxd": ["xenial", "bionic", "focal", "impish", "jammy"], + "vm": ["xenial", "bionic", "focal"], "upgrade": ["xenial", "bionic", "focal", "impish"], } @@ -69,12 +72,6 @@ envvar = TOKEN_TO_ENVVAR[t] env[envvar] = credentials["token"].get(envvar) - # Inject cloud-specific variables from credentials - for cloud in ("azure", "aws", "gcp"): - if cloud in p: - for envvar in credentials[cloud]: - env[envvar] = credentials[cloud].get(envvar) - # Until we don't get sbuild to run just once per run, env["UACLIENT_BEHAVE_SNAPSHOT_STRATEGY"] = "1" diff -Nru ubuntu-advantage-tools-27.8~16.04.1/tools/ua.bash ubuntu-advantage-tools-27.9~16.04.1/tools/ua.bash --- ubuntu-advantage-tools-27.8~16.04.1/tools/ua.bash 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/tools/ua.bash 2022-05-18 19:44:15.000000000 +0000 @@ -2,7 +2,9 @@ SERVICES=$(python3 -c " from uaclient.entitlements import valid_services -print(*valid_services(), sep=' ') +from uaclient.config import UAConfig +cfg = UAConfig() +print(*valid_services(cfg=cfg), sep=' ') ") _ua_complete() diff -Nru ubuntu-advantage-tools-27.8~16.04.1/tools/ua-test-credentials.example.yaml ubuntu-advantage-tools-27.9~16.04.1/tools/ua-test-credentials.example.yaml --- ubuntu-advantage-tools-27.8~16.04.1/tools/ua-test-credentials.example.yaml 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/tools/ua-test-credentials.example.yaml 2022-05-18 19:44:15.000000000 +0000 @@ -4,17 +4,3 @@ UACLIENT_BEHAVE_CONTRACT_TOKEN: UACLIENT_BEHAVE_CONTRACT_TOKEN_STAGING: UACLIENT_BEHAVE_CONTRACT_TOKEN_STAGING_EXPIRED: - -azure: - UACLIENT_BEHAVE_AZ_CLIENT_ID: - UACLIENT_BEHAVE_AZ_CLIENT_SECRET: - UACLIENT_BEHAVE_AZ_SUBSCRIPTION_ID: - UACLIENT_BEHAVE_AZ_TENANT_ID: - -aws: - UACLIENT_BEHAVE_AWS_ACCESS_KEY_ID: - UACLIENT_BEHAVE_AWS_SECRET_ACCESS_KEY: - -gcp: - UACLIENT_BEHAVE_GCP_CREDENTIALS_PATH: - UACLIENT_BEHAVE_GCP_PROJECT: diff -Nru ubuntu-advantage-tools-27.8~16.04.1/tox.ini ubuntu-advantage-tools-27.9~16.04.1/tox.ini --- ubuntu-advantage-tools-27.8~16.04.1/tox.ini 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/tox.ini 2022-05-18 19:44:15.000000000 +0000 @@ -29,6 +29,9 @@ isort: -rdev-requirements.txt behave: -rintegration-requirements.txt passenv = + GOOGLE_APPLICATION_CREDENTIALS + PYCLOUDLIB_CONFIG + AZURE_CONFIG_DIR UACLIENT_BEHAVE_* TRAVIS TRAVIS_* @@ -42,7 +45,9 @@ azurepro-fips: UACLIENT_BEHAVE_MACHINE_TYPE = azure.pro.fips gcpgeneric: UACLIENT_BEHAVE_MACHINE_TYPE = gcp.generic gcppro: UACLIENT_BEHAVE_MACHINE_TYPE = gcp.pro + gcppro-fips: UACLIENT_BEHAVE_MACHINE_TYPE = gcp.pro.fips vm: UACLIENT_BEHAVE_MACHINE_TYPE = lxd.vm + docker: 
UACLIENT_BEHAVE_MACHINE_TYPE = lxd.vm commands = py3: py.test --junitxml=pytest_results.xml {posargs:--cov uaclient uaclient} flake8: flake8 uaclient lib setup.py @@ -52,45 +57,63 @@ mypy-focal: mypy --python-version 3.7 uaclient/ features/ lib/ black: black --check --diff uaclient/ features/ lib/ setup.py isort: isort --check --diff uaclient/ features/ lib/ setup.py + behave-lxd-16.04: behave -v {posargs} --tags="uses.config.machine_type.lxd.container" --tags="series.xenial,series.lts,series.all" --tags="~upgrade" behave-lxd-18.04: behave -v {posargs} --tags="uses.config.machine_type.lxd.container" --tags="series.bionic,series.lts,series.all" --tags="~upgrade" behave-lxd-20.04: behave -v {posargs} --tags="uses.config.machine_type.lxd.container" --tags="series.focal,series.lts,series.all" --tags="~upgrade" behave-lxd-21.10: behave -v {posargs} --tags="uses.config.machine_type.lxd.container" --tags="series.impish,series.all" --tags="~upgrade" behave-lxd-22.04: behave -v {posargs} --tags="uses.config.machine_type.lxd.container" --tags="series.jammy,series.lts,series.all" --tags="~upgrade" + behave-vm-16.04: behave -v {posargs} --tags="uses.config.machine_type.lxd.vm" --tags="series.xenial,series.all,series.lts" --tags="~upgrade" behave-vm-18.04: behave -v {posargs} --tags="uses.config.machine_type.lxd.vm" --tags="series.bionic,series.all,series.lts" --tags="~upgrade" - behave-vm-20.04: behave -v {posargs} --tags="uses.config.machine_type.lxd.vm" --tags="series.focal,series.all,series.lts" --tags="~upgrade" + behave-vm-20.04: behave -v {posargs} --tags="uses.config.machine_type.lxd.vm" --tags="series.focal,series.all,series.lts" --tags="~upgrade" --tags="~docker" behave-vm-22.04: behave -v {posargs} --tags="uses.config.machine_type.lxd.vm" --tags="series.jammy,series.all,series.lts" --tags="~upgrade" + behave-upgrade-16.04: behave -v {posargs} --tags="upgrade" --tags="series.xenial,series.all" behave-upgrade-18.04: behave -v {posargs} --tags="upgrade" --tags="series.bionic,series.all" behave-upgrade-20.04: behave -v {posargs} --tags="upgrade" --tags="series.focal,series.all" behave-upgrade-21.10: behave -v {posargs} --tags="upgrade" --tags="series.impish,series.all" + behave-upgrade-22.04: behave -v {posargs} --tags="upgrade" --tags="series.jammy,series.all" + + behave-docker-20.04: behave -v {posargs} --tags="uses.config.machine_type.lxd.vm" --tags="series.focal" features/docker.feature + behave-awsgeneric-16.04: behave -v {posargs} --tags="uses.config.machine_type.aws.generic" --tags="series.xenial,series.lts,series.all" --tags="~upgrade" behave-awsgeneric-18.04: behave -v {posargs} --tags="uses.config.machine_type.aws.generic" --tags="series.bionic,series.lts,series.all" --tags="~upgrade" behave-awsgeneric-20.04: behave -v {posargs} --tags="uses.config.machine_type.aws.generic" --tags="series.focal,series.lts,series.all" --tags="~upgrade" + behave-awsgeneric-22.04: behave -v {posargs} --tags="uses.config.machine_type.aws.generic" --tags="series.jammy,series.lts,series.all" --tags="~upgrade" + behave-awspro-16.04: behave -v {posargs} --tags="uses.config.machine_type.aws.pro" --tags="series.xenial,series.lts,series.all" behave-awspro-18.04: behave -v {posargs} --tags="uses.config.machine_type.aws.pro" --tags="series.bionic,series.lts,series.all" behave-awspro-20.04: behave -v {posargs} --tags="uses.config.machine_type.aws.pro" --tags="series.focal,series.lts,series.all" + behave-awspro-fips-16.04: behave -v {posargs} --tags="uses.config.machine_type.aws.pro.fips" 
--tags="series.xenial,series.lts,series.all" behave-awspro-fips-18.04: behave -v {posargs} --tags="uses.config.machine_type.aws.pro.fips" --tags="series.bionic,series.lts,series.all" behave-awspro-fips-20.04: behave -v {posargs} --tags="uses.config.machine_type.aws.pro.fips" --tags="series.focal,series.lts,series.all" + behave-azuregeneric-16.04: behave -v {posargs} --tags="uses.config.machine_type.azure.generic" --tags="series.xenial,series.lts,series.all" --tags="~upgrade" behave-azuregeneric-18.04: behave -v {posargs} --tags="uses.config.machine_type.azure.generic" --tags="series.bionic,series.lts,series.all" --tags="~upgrade" behave-azuregeneric-20.04: behave -v {posargs} --tags="uses.config.machine_type.azure.generic" --tags="series.focal,series.lts,series.all" --tags="~upgrade" + behave-azuregeneric-22.04: behave -v {posargs} --tags="uses.config.machine_type.azure.generic" --tags="series.jammy,series.lts,series.all" --tags="~upgrade" + behave-azurepro-16.04: behave -v {posargs} --tags="uses.config.machine_type.azure.pro" --tags="series.xenial,series.lts,series.all" behave-azurepro-18.04: behave -v {posargs} --tags="uses.config.machine_type.azure.pro" --tags="series.bionic,series.lts,series.all" behave-azurepro-20.04: behave -v {posargs} --tags="uses.config.machine_type.azure.pro" --tags="series.focal,series.lts,series.all" + behave-azurepro-fips-16.04: behave -v {posargs} --tags="uses.config.machine_type.azure.pro.fips" --tags="series.xenial,series.lts,series.all" behave-azurepro-fips-18.04: behave -v {posargs} --tags="uses.config.machine_type.azure.pro.fips" --tags="series.bionic,series.lts,series.all" behave-azurepro-fips-20.04: behave -v {posargs} --tags="uses.config.machine_type.azure.pro.fips" --tags="series.focal,series.lts,series.all" + behave-gcpgeneric-16.04: behave -v {posargs} --tags="uses.config.machine_type.gcp.generic" --tags="series.xenial,series.lts,series.all" --tags="~upgrade" behave-gcpgeneric-18.04: behave -v {posargs} --tags="uses.config.machine_type.gcp.generic" --tags="series.bionic,series.lts,series.all" --tags="~upgrade" behave-gcpgeneric-20.04: behave -v {posargs} --tags="uses.config.machine_type.gcp.generic" --tags="series.focal,series.lts,series.all" --tags="~upgrade" behave-gcpgeneric-21.10: behave -v {posargs} --tags="uses.config.machine_type.gcp.generic" --tags="series.impish,series.all" --tags="~upgrade" behave-gcpgeneric-22.04: behave -v {posargs} --tags="uses.config.machine_type.gcp.generic" --tags="series.jammy,series.lts,series.all" --tags="~upgrade" + behave-gcppro-16.04: behave -v {posargs} --tags="uses.config.machine_type.gcp.pro" --tags="series.xenial,series.lts,series.all" --tags="~upgrade" behave-gcppro-18.04: behave -v {posargs} --tags="uses.config.machine_type.gcp.pro" --tags="series.bionic,series.lts,series.all" --tags="~upgrade" behave-gcppro-20.04: behave -v {posargs} --tags="uses.config.machine_type.gcp.pro" --tags="series.focal,series.lts,series.all" --tags="~upgrade" + behave-gcppro-fips-18.04: behave -v {posargs} --tags="uses.config.machine_type.gcp.pro.fips" --tags="series.bionic,series.lts,series.all" --tags="~upgrade" + behave-gcppro-fips-20.04: behave -v {posargs} --tags="uses.config.machine_type.gcp.pro.fips" --tags="series.focal,series.lts,series.all" --tags="~upgrade" [flake8] # E251: Older versions of flake8 et al don't permit the diff -Nru ubuntu-advantage-tools-27.8~16.04.1/types-requirements.txt ubuntu-advantage-tools-27.9~16.04.1/types-requirements.txt --- ubuntu-advantage-tools-27.8~16.04.1/types-requirements.txt 
2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/types-requirements.txt 2022-05-18 19:44:15.000000000 +0000 @@ -1,2 +1,3 @@ mypy -types-PyYAML \ No newline at end of file +types-PyYAML +types-toml diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/actions.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/actions.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/actions.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/actions.py 2022-05-18 19:44:15.000000000 +0000 @@ -11,6 +11,7 @@ exceptions, messages, ) +from uaclient import status as ua_status from uaclient.clouds import identity LOG = logging.getLogger("ua.actions") @@ -34,12 +35,14 @@ cfg, token, allow_enable=allow_enable ) except exceptions.UrlError as exc: - cfg.status() # Persist updated status in the event of partial attach + # Persist updated status in the event of partial attach + ua_status.status(cfg=cfg) update_apt_and_motd_messages(cfg) raise exc except exceptions.UserFacingError as exc: event.info(exc.msg, file_type=sys.stderr) - cfg.status() # Persist updated status in the event of partial attach + # Persist updated status in the event of partial attach + ua_status.status(cfg=cfg) update_apt_and_motd_messages(cfg) raise exc @@ -91,7 +94,7 @@ :raise EntitlementNotFoundError: If no entitlement with the given name is found, then raises this error. """ - ent_cls = entitlements.entitlement_factory(name) + ent_cls = entitlements.entitlement_factory(cfg=cfg, name=name) entitlement = ent_cls( cfg, assume_yes=assume_yes, allow_beta=allow_beta, called_name=name ) @@ -108,11 +111,11 @@ Construct the current UA status dictionary. """ if simulate_with_token: - status, ret = cfg.simulate_status( - token=simulate_with_token, show_beta=show_beta + status, ret = ua_status.simulate_status( + cfg=cfg, token=simulate_with_token, show_beta=show_beta ) else: - status = cfg.status(show_beta=show_beta) + status = ua_status.status(cfg=cfg, show_beta=show_beta) ret = 0 return status, ret diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/apt.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/apt.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/apt.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/apt.py 2022-05-18 19:44:15.000000000 +0000 @@ -1,9 +1,11 @@ +import enum import glob import logging import os import re import subprocess import tempfile +from functools import lru_cache from typing import Dict, List, Optional from uaclient import event_logger, exceptions, gpg, messages, util @@ -13,8 +15,14 @@ APT_CONFIG_AUTH_FILE = "Dir::Etc::netrc/" APT_CONFIG_AUTH_PARTS_DIR = "Dir::Etc::netrcparts/" APT_CONFIG_LISTS_DIR = "Dir::State::lists/" -APT_CONFIG_PROXY_HTTP = """Acquire::http::Proxy "{proxy_url}";\n""" -APT_CONFIG_PROXY_HTTPS = """Acquire::https::Proxy "{proxy_url}";\n""" +APT_CONFIG_GLOBAL_PROXY_HTTP = """Acquire::http::Proxy "{proxy_url}";\n""" +APT_CONFIG_GLOBAL_PROXY_HTTPS = """Acquire::https::Proxy "{proxy_url}";\n""" +APT_CONFIG_UA_PROXY_HTTP = ( + """Acquire::http::Proxy::esm.ubuntu.com "{proxy_url}";\n""" +) +APT_CONFIG_UA_PROXY_HTTPS = ( + """Acquire::https::Proxy::esm.ubuntu.com "{proxy_url}";\n""" +) APT_KEYS_DIR = "/etc/apt/trusted.gpg.d" KEYRINGS_DIR = "/usr/share/keyrings" APT_METHOD_HTTPS_FILE = "/usr/lib/apt/methods/https" @@ -30,6 +38,12 @@ event = event_logger.get_event_logger() +@enum.unique +class AptProxyScope(enum.Enum): + GLOBAL = object() + UACLIENT = object() + + def assert_valid_apt_credentials(repo_url, 
username, password): """Validate apt credentials for a PPA. @@ -83,7 +97,7 @@ def _parse_apt_update_for_invalid_apt_config( - apt_error: str + apt_error: str, ) -> Optional[messages.NamedMessage]: """Parse apt update errors for invalid apt config in user machine. @@ -166,6 +180,16 @@ return out +@lru_cache(maxsize=None) +def run_apt_cache_policy_command( + error_msg: Optional[str] = None, + env: Optional[Dict[str, str]] = {}, +) -> str: + return run_apt_command( + cmd=["apt-cache", "policy"], error_msg=error_msg, env=env + ) + + def run_apt_update_command(env: Optional[Dict[str, str]] = {}) -> str: try: out = run_apt_command(cmd=["apt-get", "update"], env=env) @@ -178,6 +202,11 @@ msg=messages.APT_UPDATE_FAILED.msg + "\n" + e.msg, msg_code=messages.APT_UPDATE_FAILED.name, ) + finally: + # Whenever we run an apt-get update command, we must invalidate + # the existing apt-cache policy cache. Otherwise, we could provide + # users with incorrect values. + run_apt_cache_policy_command.cache_clear() return out @@ -442,7 +471,9 @@ def setup_apt_proxy( - http_proxy: Optional[str] = None, https_proxy: Optional[str] = None + http_proxy: Optional[str] = None, + https_proxy: Optional[str] = None, + proxy_scope: Optional[AptProxyScope] = AptProxyScope.GLOBAL, ) -> None: """ Writes an apt conf file that configures apt to use the proxies provided as @@ -456,15 +487,36 @@ :return: None """ if http_proxy or https_proxy: - event.info(messages.SETTING_SERVICE_PROXY.format(service="APT")) + if proxy_scope: + message = "" + if proxy_scope == AptProxyScope.UACLIENT: + message = "UA-scoped" + elif proxy_scope == AptProxyScope.GLOBAL: + message = "global" + event.info( + messages.SETTING_SERVICE_PROXY_SCOPE.format(scope=message) + ) apt_proxy_config = "" if http_proxy: - apt_proxy_config += APT_CONFIG_PROXY_HTTP.format(proxy_url=http_proxy) + if proxy_scope == AptProxyScope.UACLIENT: + apt_proxy_config += APT_CONFIG_UA_PROXY_HTTP.format( + proxy_url=http_proxy + ) + elif proxy_scope == AptProxyScope.GLOBAL: + apt_proxy_config += APT_CONFIG_GLOBAL_PROXY_HTTP.format( + proxy_url=http_proxy + ) if https_proxy: - apt_proxy_config += APT_CONFIG_PROXY_HTTPS.format( - proxy_url=https_proxy - ) + if proxy_scope == AptProxyScope.UACLIENT: + apt_proxy_config += APT_CONFIG_UA_PROXY_HTTPS.format( + proxy_url=https_proxy + ) + elif proxy_scope == AptProxyScope.GLOBAL: + apt_proxy_config += APT_CONFIG_GLOBAL_PROXY_HTTPS.format( + proxy_url=https_proxy + ) + if apt_proxy_config != "": apt_proxy_config = messages.APT_PROXY_CONFIG_HEADER + apt_proxy_config diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/cli.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/cli.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/cli.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/cli.py 2022-05-18 19:44:15.000000000 +0000 @@ -15,7 +15,7 @@ import textwrap import time from functools import wraps -from typing import List, Optional, Tuple # noqa +from typing import Dict, List, Optional, Tuple # noqa import yaml @@ -23,10 +23,10 @@ actions, config, contract, + daemon, entitlements, event_logger, exceptions, - jobs, lock, messages, security, @@ -34,6 +34,7 @@ ) from uaclient import status as ua_status from uaclient import util, version +from uaclient.apt import AptProxyScope, setup_apt_proxy from uaclient.clouds import AutoAttachCloudInstance # noqa: F401 from uaclient.clouds import identity from uaclient.data_types import AttachActionsConfigFile, IncorrectTypeError @@ -41,14 +42,24 @@ CLOUD_BUILD_INFO, 
CONFIG_FIELD_ENVVAR_ALLOWLIST, DEFAULT_CONFIG_FILE, + DEFAULT_LOG_FORMAT, PRINT_WRAP_WIDTH, ) +from uaclient.entitlements.entitlement_status import ( + ApplicationStatus, + CanDisableFailure, + CanEnableFailure, + CanEnableFailureReason, +) # TODO: Better address service commands running on cli # It is not ideal for us to import an entitlement directly on the cli module. # We need to refactor this to avoid that type of coupling in the code. from uaclient.entitlements.livepatch import LIVEPATCH_CMD -from uaclient.jobs.update_messaging import update_apt_and_motd_messages +from uaclient.jobs.update_messaging import ( + refresh_motd, + update_apt_and_motd_messages, +) NAME = "ua" @@ -65,10 +76,6 @@ """ UA_AUTH_TOKEN_URL = "https://auth.contracts.canonical.com" -DEFAULT_LOG_FORMAT = ( - "%(asctime)s - %(filename)s:(%(lineno)d) [%(levelname)s]: %(message)s" -) - STATUS_FORMATS = ["tabular", "json", "yaml"] UA_COLLECT_LOGS_FILE = "ua_logs.tar.gz" @@ -79,9 +86,7 @@ "ua-auto-attach.path", "ua-auto-attach.service", "ua-reboot-cmds.service", - "ua-license-check.path", - "ua-license-check.service", - "ua-license-check.timer", + "ubuntu-advantage.service", ) event = event_logger.get_event_logger() @@ -111,9 +116,10 @@ def print_help(self, file=None, show_all=False): if self.base_desc: - non_beta_services_desc, beta_services_desc = ( - UAArgumentParser._get_service_descriptions() - ) + ( + non_beta_services_desc, + beta_services_desc, + ) = UAArgumentParser._get_service_descriptions() service_descriptions = sorted(non_beta_services_desc) if show_all: service_descriptions = sorted( @@ -126,6 +132,8 @@ @staticmethod def _get_service_descriptions() -> Tuple[List[str], List[str]]: + cfg = config.UAConfig() + service_info_tmpl = " - {name}: {description}{url}" non_beta_services_desc = [] beta_services_desc = [] @@ -133,7 +141,9 @@ resources = contract.get_available_resources(config.UAConfig()) for resource in resources: try: - ent_cls = entitlements.entitlement_factory(resource["name"]) + ent_cls = entitlements.entitlement_factory( + cfg=cfg, name=resource["name"] + ) except exceptions.EntitlementNotFoundError: continue # Because we don't know the presentation name if unattached @@ -206,20 +216,22 @@ return new_f -def assert_attached(unattached_msg_tmpl=None): +def assert_attached(msg_function=None): """Decorator asserting attached config. - - :param unattached_msg_tmpl: Optional msg template to format if raising an - UnattachedError + :param msg_function: Optional function to generate a custom message + if raising an UnattachedError """ def wrapper(f): @wraps(f) def new_f(args, cfg, **kwargs): if not cfg.is_attached: - if unattached_msg_tmpl: - names = getattr(args, "service", "None") - msg = unattached_msg_tmpl.format(name=", ".join(names)) + if msg_function: + command = getattr(args, "command", "") + service_names = getattr(args, "service", "") + msg = msg_function( + command=command, service_names=service_names, cfg=cfg + ) exception = exceptions.UnattachedError(msg) else: exception = exceptions.UnattachedError() @@ -416,10 +428,25 @@ def security_status_parser(parser): """Build or extend an arg parser for security-status subcommand.""" parser.prog = "security-status" - parser.description = ( - "Show security updates for packages in the system, including all" - " available ESM related content." + parser.formatter_class = argparse.RawDescriptionHelpFormatter + parser.description = textwrap.dedent( + """\ + Show security updates for packages in the system, including all + available ESM related content. 
+ + Besides the list of security updates, it also shows a summary of the + installed packages based on the origin. + - main/restricted/universe/multiverse: packages from the Ubuntu archive + - ESM Infra/Apps: packages from ESM + - third-party: packages installed from non-Ubuntu sources + - unknown: packages which don't have an installation source (like local + deb packages or packages for which the source was removed) + + The summary contains basic information about UA and ESM. For a complete + status on UA services, run 'ua status' + """ ) + parser.add_argument( "--format", help=("Format for the output (json or yaml)"), @@ -441,7 +468,7 @@ parser._optionals.title = "Flags" parser.add_argument( "target", - choices=["contract", "config"], + choices=["contract", "config", "messages"], nargs="?", default=None, help=( @@ -449,8 +476,10 @@ " details from the server and perform any updates necessary." " `ua refresh config` will reload" " /etc/ubuntu-advantage/uaclient.conf and perform any changes" - " necessary. `ua refresh` is the equivalent of `ua refresh" - " config && ua refresh contract`." + " necessary. `ua refresh messages` will refresh" + " the APT and MOTD messages associated with UA." + " `ua refresh` is the equivalent of `ua refresh" + " config && ua refresh contract && ua refresh motd`." ), ) return parser @@ -503,7 +532,7 @@ return parser -def help_parser(parser): +def help_parser(parser, cfg: config.UAConfig): """Build or extend an arg parser for help subcommand.""" usage = USAGE_TMPL.format(name=NAME, command="help [service]") parser.usage = usage @@ -517,7 +546,7 @@ action="store", nargs="?", help="a service to view help output for. One of: {}".format( - ", ".join(entitlements.valid_services()) + ", ".join(entitlements.valid_services(cfg=cfg)) ), ) @@ -542,7 +571,7 @@ return parser -def enable_parser(parser): +def enable_parser(parser, cfg: config.UAConfig): """Build or extend an arg parser for enable subcommand.""" usage = USAGE_TMPL.format( name=NAME, command="enable []" @@ -558,7 +587,9 @@ nargs="+", help=( "the name(s) of the Ubuntu Advantage services to enable." - " One of: {}".format(", ".join(entitlements.valid_services())) + " One of: {}".format( + ", ".join(entitlements.valid_services(cfg=cfg)) + ) ), ) parser.add_argument( @@ -579,7 +610,7 @@ return parser -def disable_parser(parser): +def disable_parser(parser, cfg: config.UAConfig): """Build or extend an arg parser for disable subcommand.""" usage = USAGE_TMPL.format( name=NAME, command="disable []" @@ -595,7 +626,9 @@ nargs="+", help=( "the name(s) of the Ubuntu Advantage services to disable" - " One of: {}".format(", ".join(entitlements.valid_services())) + " One of: {}".format( + ", ".join(entitlements.valid_services(cfg=cfg)) + ) ), ) parser.add_argument( @@ -707,9 +740,7 @@ if not ret: event.service_failed(entitlement.name) - if reason is not None and isinstance( - reason, ua_status.CanDisableFailure - ): + if reason is not None and isinstance(reason, CanDisableFailure): if reason.message is not None: event.info(reason.message.msg) event.error( @@ -721,12 +752,12 @@ event.service_processed(entitlement.name) if update_status: - cfg.status() # Update the status cache + ua_status.status(cfg=cfg) # Update the status cache return ret -def get_valid_entitlement_names(names: List[str]): +def get_valid_entitlement_names(names: List[str], cfg: config.UAConfig): """Return a list of valid entitlement names. 
:param names: List of entitlements to validate @@ -736,7 +767,7 @@ for ent_name in names: if ent_name in entitlements.valid_services( - allow_beta=True, all_names=True + cfg=cfg, allow_beta=True, all_names=True ): entitlements_found.append(ent_name) @@ -750,7 +781,7 @@ :return: 0 on success, 1 otherwise """ - parser = get_parser() + parser = get_parser(cfg=cfg) subparser = parser._get_positional_actions()[0].choices["config"] valid_choices = subparser._get_positional_actions()[0].choices.keys() if args.command not in valid_choices: @@ -780,14 +811,25 @@ ) ) print( - "{key} {value}".format(key=args.key, value=getattr(cfg, args.key)) + "{key} {value}".format( + key=args.key, value=getattr(cfg, args.key, None) + ) ) return 0 col_width = str(max([len(x) for x in config.UA_CONFIGURABLE_KEYS]) + 1) row_tmpl = "{key: <" + col_width + "} {value}" + for key in config.UA_CONFIGURABLE_KEYS: - print(row_tmpl.format(key=key, value=getattr(cfg, key))) + print(row_tmpl.format(key=key, value=getattr(cfg, key, None))) + + if (cfg.global_apt_http_proxy or cfg.global_apt_https_proxy) and ( + cfg.ua_apt_http_proxy or cfg.ua_apt_https_proxy + ): + print( + "\nError: Setting global apt proxy and ua scoped apt proxy at the" + " same time is unsupported. No apt proxy is set." + ) @assert_root @@ -796,11 +838,10 @@ @return: 0 on success, 1 otherwise """ - from uaclient.apt import setup_apt_proxy from uaclient.entitlements.livepatch import configure_livepatch_proxy from uaclient.snap import configure_snap_proxy - parser = get_parser() + parser = get_parser(cfg=cfg) config_parser = parser._get_positional_actions()[0].choices["config"] subparser = config_parser._get_positional_actions()[0].choices["set"] try: @@ -832,23 +873,63 @@ # Only set livepatch proxy if livepatch is enabled entitlement = entitlements.livepatch.LivepatchEntitlement(cfg) livepatch_status, _ = entitlement.application_status() - if livepatch_status == ua_status.ApplicationStatus.ENABLED: + if livepatch_status == ApplicationStatus.ENABLED: configure_livepatch_proxy(**kwargs) - elif set_key in ("apt_http_proxy", "apt_https_proxy"): + elif set_key in cfg.ua_scoped_proxy_options: + protocol_type = set_key.split("_")[2] + if protocol_type == "http": + validate_url = util.PROXY_VALIDATION_APT_HTTP_URL + else: + validate_url = util.PROXY_VALIDATION_APT_HTTPS_URL + util.validate_proxy(protocol_type, set_value, validate_url) + unset_current = bool( + cfg.global_apt_http_proxy or cfg.global_apt_https_proxy + ) + if unset_current: + print( + messages.WARNING_APT_PROXY_OVERWRITE.format( + current_proxy="ua scoped apt", previous_proxy="global apt" + ) + ) + configure_apt_proxy(cfg, AptProxyScope.UACLIENT, set_key, set_value) + cfg.global_apt_http_proxy = None + cfg.global_apt_https_proxy = None + + elif set_key in ( + cfg.deprecated_global_scoped_proxy_options + + cfg.global_scoped_proxy_options + ): # setup_apt_proxy is destructive for unprovided values. Source complete # current config values from uaclient.conf before applying set_value. 
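For reference, the two apt proxy scopes handled above differ only in the Acquire option they write. A minimal sketch, assuming the CLI keys `ua_apt_http_proxy` and `global_apt_http_proxy` (named after the config attributes used above) and a placeholder proxy URL; the https keys behave the same way:

```bash
# UA-scoped: only traffic to esm.ubuntu.com goes through the proxy
sudo ua config set ua_apt_http_proxy=http://proxy.internal:3128
#   resulting apt option: Acquire::http::Proxy::esm.ubuntu.com "http://proxy.internal:3128";

# Global: every apt download goes through the proxy
sudo ua config set global_apt_http_proxy=http://proxy.internal:3128
#   resulting apt option: Acquire::http::Proxy "http://proxy.internal:3128";

# Setting one scope clears the other and prints a warning, since having
# both configured at once is reported as unsupported by `ua config show`.
```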
- protocol_type = set_key.split("_")[1] + + protocol_type = "https" if "https" in set_key else "http" if protocol_type == "http": validate_url = util.PROXY_VALIDATION_APT_HTTP_URL else: validate_url = util.PROXY_VALIDATION_APT_HTTPS_URL + + if set_key in cfg.deprecated_global_scoped_proxy_options: + print( + messages.WARNING_APT_PROXY_SETUP.format( + protocol_type=protocol_type + ) + ) + set_key = "global_" + set_key + util.validate_proxy(protocol_type, set_value, validate_url) - kwargs = { - "http_proxy": cfg.apt_http_proxy, - "https_proxy": cfg.apt_https_proxy, - } - kwargs[set_key[4:]] = set_value - setup_apt_proxy(**kwargs) + + unset_current = bool(cfg.ua_apt_http_proxy or cfg.ua_apt_https_proxy) + + if unset_current: + print( + messages.WARNING_APT_PROXY_OVERWRITE.format( + current_proxy="global apt", previous_proxy="ua scoped apt" + ) + ) + configure_apt_proxy(cfg, AptProxyScope.GLOBAL, set_key, set_value) + cfg.ua_apt_http_proxy = None + cfg.ua_apt_https_proxy = None + elif set_key in ( "update_messaging_timer", "update_status_timer", @@ -877,12 +958,12 @@ @return: 0 on success, 1 otherwise """ - from uaclient.apt import setup_apt_proxy + from uaclient.apt import AptProxyScope from uaclient.entitlements.livepatch import unconfigure_livepatch_proxy from uaclient.snap import unconfigure_snap_proxy if args.key not in config.UA_CONFIGURABLE_KEYS: - parser = get_parser() + parser = get_parser(cfg=cfg) config_parser = parser._get_positional_actions()[0].choices["config"] subparser = config_parser._get_positional_actions()[0].choices["unset"] subparser.print_help() @@ -897,22 +978,60 @@ # Only unset livepatch proxy if livepatch is enabled entitlement = entitlements.livepatch.LivepatchEntitlement(cfg) livepatch_status, _ = entitlement.application_status() - if livepatch_status == ua_status.ApplicationStatus.ENABLED: + if livepatch_status == ApplicationStatus.ENABLED: unconfigure_livepatch_proxy(protocol_type=protocol_type) - elif args.key in ("apt_http_proxy", "apt_https_proxy"): - kwargs = { - "http_proxy": cfg.apt_http_proxy, - "https_proxy": cfg.apt_https_proxy, - } - kwargs[args.key[4:]] = None - setup_apt_proxy(**kwargs) + elif args.key in cfg.ua_scoped_proxy_options: + configure_apt_proxy(cfg, AptProxyScope.UACLIENT, args.key, None) + elif args.key in ( + cfg.deprecated_global_scoped_proxy_options + + cfg.global_scoped_proxy_options + ): + if args.key in cfg.deprecated_global_scoped_proxy_options: + protocol_type = "https" if "https" in args.key else "http" + event.info( + messages.WARNING_APT_PROXY_SETUP.format( + protocol_type=protocol_type + ) + ) + args.key = "global_" + args.key + configure_apt_proxy(cfg, AptProxyScope.GLOBAL, args.key, None) + setattr(cfg, args.key, None) return 0 +def _create_enable_disable_unattached_msg(command, service_names, cfg): + """Generates a custom message for enable/disable commands when unattached. 
+ + Takes into consideration if the services exist or not, and notify the user + accordingly.""" + (entitlements_found, entitlements_not_found) = get_valid_entitlement_names( + names=service_names, cfg=cfg + ) + if entitlements_found and entitlements_not_found: + msg = messages.MIXED_SERVICES_FAILURE_UNATTACHED + msg = msg.format( + valid_service=", ".join(entitlements_found), + operation=command, + invalid_service=", ".join(entitlements_not_found), + service_msg="", + ) + elif entitlements_found: + msg = messages.VALID_SERVICE_FAILURE_UNATTACHED.format( + valid_service=", ".join(entitlements_found) + ) + else: + msg = messages.INVALID_SERVICE_OP_FAILURE.format( + operation=command, + invalid_service=", ".join(entitlements_not_found), + service_msg="See https://ubuntu.com/advantage", + ) + return msg + + @verify_json_format_args @assert_root -@assert_attached(messages.ENABLE_FAILURE_UNATTACHED) +@assert_attached(_create_enable_disable_unattached_msg) @assert_lock_file("ua disable") def action_disable(args, *, cfg, **kwargs): """Perform the disable action on a list of entitlements. @@ -921,12 +1040,12 @@ """ names = getattr(args, "service", []) entitlements_found, entitlements_not_found = get_valid_entitlement_names( - names + names, cfg ) ret = True for ent_name in entitlements_found: - ent_cls = entitlements.entitlement_factory(ent_name) + ent_cls = entitlements.entitlement_factory(cfg=cfg, name=ent_name) ent = ent_cls(cfg, assume_yes=args.assume_yes) ret &= _perform_disable(ent, cfg, assume_yes=args.assume_yes) @@ -934,7 +1053,7 @@ if entitlements_not_found: valid_names = ( "Try " - + ", ".join(entitlements.valid_services(allow_beta=True)) + + ", ".join(entitlements.valid_services(cfg=cfg, allow_beta=True)) + "." ) service_msg = "\n".join( @@ -947,7 +1066,7 @@ ) raise exceptions.InvalidServiceToDisableError( operation="disable", - name=", ".join(entitlements_not_found), + invalid_service=", ".join(entitlements_not_found), service_msg=service_msg, ) @@ -956,13 +1075,15 @@ def _create_enable_entitlements_not_found_message( - entitlements_not_found, *, allow_beta: bool + entitlements_not_found, cfg: config.UAConfig, *, allow_beta: bool ) -> messages.NamedMessage: """ Constructs the MESSAGE_INVALID_SERVICE_OP_FAILURE message based on the attempted services and valid services. """ - valid_services_names = entitlements.valid_services(allow_beta=allow_beta) + valid_services_names = entitlements.valid_services( + cfg=cfg, allow_beta=allow_beta + ) valid_names = ", ".join(valid_services_names) service_msg = "\n".join( textwrap.wrap( @@ -975,14 +1096,14 @@ return messages.INVALID_SERVICE_OP_FAILURE.format( operation="enable", - name=", ".join(entitlements_not_found), + invalid_service=", ".join(entitlements_not_found), service_msg=service_msg, ) @verify_json_format_args @assert_root -@assert_attached(messages.ENABLE_FAILURE_UNATTACHED) +@assert_attached(_create_enable_disable_unattached_msg) @assert_lock_file("ua enable") def action_enable(args, *, cfg, **kwargs): """Perform the enable action on a named entitlement. 
@@ -999,7 +1120,7 @@ names = getattr(args, "service", []) entitlements_found, entitlements_not_found = get_valid_entitlement_names( - names + names, cfg ) ret = True for ent_name in entitlements_found: @@ -1007,12 +1128,12 @@ ent_ret, reason = actions.enable_entitlement_by_name( cfg, ent_name, assume_yes=args.assume_yes, allow_beta=args.beta ) - cfg.status() # Update the status cache + ua_status.status(cfg=cfg) # Update the status cache if ( not ent_ret and reason is not None - and isinstance(reason, ua_status.CanEnableFailure) + and isinstance(reason, CanEnableFailure) ): if reason.message is not None: event.info(reason.message.msg) @@ -1021,7 +1142,7 @@ error_code=reason.message.name, service=ent_name, ) - if reason.reason == ua_status.CanEnableFailureReason.IS_BETA: + if reason.reason == CanEnableFailureReason.IS_BETA: # if we failed because ent is in beta and there was no # allow_beta flag/config, pretend it doesn't exist entitlements_not_found.append(ent_name) @@ -1040,7 +1161,7 @@ if entitlements_not_found: msg = _create_enable_entitlements_not_found_message( - entitlements_not_found, allow_beta=args.beta + entitlements_not_found, cfg=cfg, allow_beta=args.beta ) event.services_failed(entitlements_not_found) raise exceptions.UserFacingError(msg=msg.msg, msg_code=msg.name) @@ -1111,7 +1232,7 @@ _perform_disable(ent, cfg, assume_yes=assume_yes, update_status=False) cfg.delete_cache() - jobs.enable_license_check_if_applicable(cfg) + daemon.start() update_apt_and_motd_messages(cfg) event.info(messages.DETACH_SUCCESS) event.process_events() @@ -1135,7 +1256,7 @@ else: event.info(messages.ATTACH_SUCCESS_NO_CONTRACT_NAME) - jobs.disable_license_check_if_applicable(cfg) + daemon.stop() status, _ret = actions.status(cfg) output = ua_status.format_tabular(status) @@ -1244,21 +1365,12 @@ try: actions.attach_with_token(cfg, token=token, allow_enable=allow_enable) except exceptions.UrlError: - msg = messages.ATTACH_FAILURE - event.info(msg.msg) - event.error(error_msg=msg.msg, error_code=msg.name) - event.process_events() - return 1 - except exceptions.UserFacingError as exc: - event.info(exc.msg) - event.error(error_msg=exc.msg, error_code=exc.msg_code) - event.process_events() - return 1 + raise exceptions.AttachError() else: ret = 0 if enable_services_override is not None and args.auto_enable: found, not_found = get_valid_entitlement_names( - enable_services_override + enable_services_override, cfg ) for name in found: ent_ret, reason = actions.enable_entitlement_by_name( @@ -1282,7 +1394,7 @@ if not_found: msg = _create_enable_entitlements_not_found_message( - not_found, allow_beta=True + not_found, cfg=cfg, allow_beta=True ) event.info(msg.msg, file_type=sys.stderr) event.error(error_msg=msg.msg, error_code=msg.name) @@ -1349,7 +1461,7 @@ cfg.cfg_path or DEFAULT_CONFIG_FILE, cfg.log_file, cfg.timer_log_file, - cfg.license_check_log_file, + cfg.daemon_log_file, cfg.data_path("jobs-status"), CLOUD_BUILD_INFO, *( @@ -1370,7 +1482,7 @@ results.add(output_dir, arcname="logs/") -def get_parser(): +def get_parser(cfg: config.UAConfig): base_desc = __doc__ parser = UAArgumentParser( prog=NAME, @@ -1430,14 +1542,14 @@ "disable", help="disable a specific Ubuntu Advantage service on this machine", ) - disable_parser(parser_disable) + disable_parser(parser_disable, cfg=cfg) parser_disable.set_defaults(action=action_disable) parser_enable = subparsers.add_parser( "enable", help="enable a specific Ubuntu Advantage service on this machine", ) - enable_parser(parser_enable) + enable_parser(parser_enable, 
cfg=cfg) parser_enable.set_defaults(action=action_enable) parser_fix = subparsers.add_parser( @@ -1458,7 +1570,7 @@ "help", help="show detailed information about Ubuntu Advantage services", ) - help_parser(parser_help) + help_parser(parser_help, cfg=cfg) parser_help.set_defaults(action=action_help) parser_refresh = subparsers.add_parser( @@ -1486,7 +1598,17 @@ show_beta = args.all if args else False token = args.simulate_with_token if args else None active_value = ua_status.UserFacingConfigStatus.ACTIVE.value - + if cfg.is_attached: + try: + if contract.is_contract_changed(cfg): + cfg.add_notice("", messages.NOTICE_REFRESH_CONTRACT_WARNING) + except exceptions.UrlError as e: + with util.disable_log_to_console(): + err_msg = messages.UPDATE_CHECK_CONTRACT_FAILURE.format( + reason=str(e) + ) + logging.warning(err_msg) + event.warning(err_msg) status, ret = actions.status( cfg, simulate_with_token=token, show_beta=show_beta ) @@ -1540,6 +1662,21 @@ print(messages.REFRESH_CONTRACT_SUCCESS) +def _action_refresh_messages(_args, cfg: config.UAConfig): + # Not performing any exception handling here since both of these + # functions should raise UserFacingError exceptions, which are + # covered by the main_error_handler decorator + try: + update_apt_and_motd_messages(cfg) + refresh_motd() + except Exception as exc: + with util.disable_log_to_console(): + logging.exception(exc) + raise exceptions.UserFacingError(messages.REFRESH_MESSAGES_FAILURE) + else: + print(messages.REFRESH_MESSAGES_SUCCESS) + + @assert_root @assert_lock_file("ua refresh") def action_refresh(args, *, cfg: config.UAConfig): @@ -1548,22 +1685,47 @@ if args.target is None or args.target == "contract": _action_refresh_contract(args, cfg) + cfg.remove_notice("", messages.NOTICE_REFRESH_CONTRACT_WARNING) + + if args.target is None or args.target == "messages": + _action_refresh_messages(args, cfg) return 0 +def configure_apt_proxy( + cfg: config.UAConfig, scope: AptProxyScope, set_key: str, set_value: str +) -> None: + """ + Handles setting part the apt proxies - global and uaclient scoped proxies + """ + if scope == AptProxyScope.GLOBAL: + http_proxy = cfg.global_apt_http_proxy + https_proxy = cfg.global_apt_https_proxy + elif scope == AptProxyScope.UACLIENT: + http_proxy = cfg.ua_apt_http_proxy + https_proxy = cfg.ua_apt_https_proxy + if "https" in set_key: + https_proxy = set_value + else: + http_proxy = set_value + setup_apt_proxy( + http_proxy=http_proxy, https_proxy=https_proxy, proxy_scope=scope + ) + + def action_help(args, *, cfg): service = args.service show_all = args.all if not service: - get_parser().print_help(show_all=show_all) + get_parser(cfg=cfg).print_help(show_all=show_all) return 0 if not cfg: cfg = config.UAConfig() - help_response = cfg.help(service) + help_response = ua_status.help(cfg, service) if args.format == "json": print(json.dumps(help_response)) @@ -1662,7 +1824,12 @@ except exceptions.UserFacingError as exc: with util.disable_log_to_console(): logging.error(exc.msg) - event.error(error_msg=exc.msg, error_code=exc.msg_code) + + event.error( + error_msg=exc.msg, + error_code=exc.msg_code, + additional_info=exc.additional_info, + ) event.info(info_msg="{}".format(exc.msg), file_type=sys.stderr) if not isinstance(exc, exceptions.LockHeldError): # Only clear the lock if it is ours. 
@@ -1689,7 +1856,8 @@ def main(sys_argv=None): if not sys_argv: sys_argv = sys.argv - parser = get_parser() + cfg = config.UAConfig() + parser = get_parser(cfg=cfg) cli_arguments = sys_argv[1:] if not cli_arguments: parser.print_usage() @@ -1697,7 +1865,6 @@ sys.exit(1) args = parser.parse_args(args=cli_arguments) set_event_mode(args) - cfg = config.UAConfig() http_proxy = cfg.http_proxy https_proxy = cfg.https_proxy @@ -1706,9 +1873,14 @@ log_level = cfg.log_level console_level = logging.DEBUG if args.debug else logging.INFO setup_logging(console_level, log_level, cfg.log_file) + logging.debug( util.redact_sensitive_logs("Executed with sys.argv: %r" % sys_argv) ) + + with util.disable_log_to_console(): + cfg.warn_about_invalid_keys() + ua_environment = [ "{}={}".format(k, v) for k, v in sorted(os.environ.items()) diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/clouds/aws.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/clouds/aws.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/clouds/aws.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/clouds/aws.py 2022-05-18 19:44:15.000000000 +0000 @@ -108,3 +108,10 @@ if "ec2" == dmi_uuid[0:3] == dmi_serial[0:3]: return True return False + + def should_poll_for_pro_license(self) -> bool: + """Unsupported""" + return False + + def is_pro_license_present(self, *, wait_for_change: bool) -> bool: + raise exceptions.InPlaceUpgradeNotSupportedError() diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/clouds/azure.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/clouds/azure.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/clouds/azure.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/clouds/azure.py 2022-05-18 19:44:15.000000000 +0000 @@ -2,7 +2,7 @@ from typing import Any, Dict from urllib.error import HTTPError -from uaclient import util +from uaclient import exceptions, util from uaclient.clouds import AutoAttachCloudInstance IMDS_BASE_URL = "http://169.254.169.254/metadata/" @@ -48,3 +48,10 @@ if AZURE_CHASSIS_ASSET_TAG == chassis_asset_tag.strip(): return True return os.path.exists(AZURE_OVF_ENV_FILE) + + def should_poll_for_pro_license(self) -> bool: + """Unsupported""" + return False + + def is_pro_license_present(self, *, wait_for_change: bool) -> bool: + raise exceptions.InPlaceUpgradeNotSupportedError() diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/clouds/gcp.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/clouds/gcp.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/clouds/gcp.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/clouds/gcp.py 2022-05-18 19:44:15.000000000 +0000 @@ -1,17 +1,26 @@ import base64 import json +import logging import os -from typing import Any, Dict, List +from typing import Any, Dict, List, Optional # noqa: F401 from urllib.error import HTTPError -from uaclient import util +from uaclient import exceptions, util from uaclient.clouds import AutoAttachCloudInstance +LOG = logging.getLogger("ua.clouds.gcp") + TOKEN_URL = ( "http://metadata/computeMetadata/v1/instance/service-accounts/" "default/identity?audience=contracts.canonical.com&" "format=full&licenses=TRUE" ) +LICENSES_URL = ( + "http://metadata.google.internal/computeMetadata/v1/instance/licenses/" + "?recursive=true" +) +WAIT_FOR_CHANGE = "&wait_for_change=true" +LAST_ETAG = "&last_etag={etag}" DMI_PRODUCT_NAME = "/sys/class/dmi/id/product_name" GCP_PRODUCT_NAME = "Google Compute Engine" @@ -25,6 +34,10 @@ class 
UAAutoAttachGCPInstance(AutoAttachCloudInstance): + def __init__(self): + # store ETAG + # https://cloud.google.com/compute/docs/metadata/querying-metadata#etags # noqa + self.etag = None # type: Optional[str] # mypy does not handle @property around inner decorators # https://github.com/python/mypy/issues/1362 @@ -66,3 +79,34 @@ .get("compute_engine", {}) .get("license_id", []) ) + + def should_poll_for_pro_license(self) -> bool: + series = util.get_platform_info()["series"] + if series not in GCP_LICENSES: + LOG.info("This series isn't supported for GCP auto-attach.") + return False + return True + + def is_pro_license_present(self, *, wait_for_change: bool) -> bool: + url = LICENSES_URL + + if wait_for_change: + url += WAIT_FOR_CHANGE + if self.etag: + url += LAST_ETAG.format(etag=self.etag) + + try: + licenses, headers = util.readurl( + url, headers={"Metadata-Flavor": "Google"} + ) + except HTTPError as e: + LOG.error(e) + if e.code == 400: + raise exceptions.CancelProLicensePolling() + else: + raise exceptions.DelayProLicensePolling() + license_ids = [license["id"] for license in licenses] + self.etag = headers.get("ETag", None) + + series = util.get_platform_info()["series"] + return GCP_LICENSES.get(series) in license_ids diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/clouds/identity.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/clouds/identity.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/clouds/identity.py 2022-04-01 13:27:49.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/clouds/identity.py 2022-05-18 19:44:15.000000000 +0000 @@ -6,9 +6,6 @@ from uaclient import clouds, exceptions, util from uaclient.config import apply_config_settings_override -# Mapping of datasource names to cloud-id responses. Trusty compat with Xenial+ -DATASOURCE_TO_CLOUD_ID = {"azurenet": "azure", "ec2": "aws", "gce": "gcp"} - CLOUD_TYPE_TO_TITLE = { "aws": "AWS", "aws-china": "AWS China", diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/clouds/__init__.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/clouds/__init__.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/clouds/__init__.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/clouds/__init__.py 2022-05-18 19:44:15.000000000 +0000 @@ -20,3 +20,18 @@ def is_viable(self) -> bool: """Return True if the machine is a viable AutoAttachCloudInstance.""" pass + + @abc.abstractmethod + def should_poll_for_pro_license(self) -> bool: + """ + Cloud-specific checks for whether the daemon should continously poll + for Ubuntu Pro licenses. 
+ """ + pass + + @abc.abstractmethod + def is_pro_license_present(self, *, wait_for_change: bool) -> bool: + """ + Check for an Ubuntu Pro license + """ + pass diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/clouds/tests/test_aws.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/clouds/tests/test_aws.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/clouds/tests/test_aws.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/clouds/tests/test_aws.py 2022-05-18 19:44:15.000000000 +0000 @@ -302,3 +302,14 @@ for expected_log in expected_logs: assert expected_log in caplog_text() + + def test_unsupported_should_poll_for_pro_license(self): + """Unsupported""" + instance = UAAutoAttachAWSInstance() + assert not instance.should_poll_for_pro_license() + + def test_unsupported_is_pro_license_present(self): + """Unsupported""" + instance = UAAutoAttachAWSInstance() + with pytest.raises(exceptions.InPlaceUpgradeNotSupportedError): + instance.is_pro_license_present(wait_for_change=False) diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/clouds/tests/test_azure.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/clouds/tests/test_azure.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/clouds/tests/test_azure.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/clouds/tests/test_azure.py 2022-05-18 19:44:15.000000000 +0000 @@ -6,6 +6,7 @@ import pytest from uaclient.clouds.azure import IMDS_BASE_URL, UAAutoAttachAzureInstance +from uaclient.exceptions import InPlaceUpgradeNotSupportedError M_PATH = "uaclient.clouds.azure." @@ -122,3 +123,14 @@ load_file.side_effect = fake_load_file instance = UAAutoAttachAzureInstance() assert viable is instance.is_viable + + def test_unsupported_should_poll_for_pro_license(self): + """Unsupported""" + instance = UAAutoAttachAzureInstance() + assert not instance.should_poll_for_pro_license() + + def test_unsupported_is_pro_license_present(self): + """Unsupported""" + instance = UAAutoAttachAzureInstance() + with pytest.raises(InPlaceUpgradeNotSupportedError): + instance.is_pro_license_present(wait_for_change=False) diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/clouds/tests/test_gcp.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/clouds/tests/test_gcp.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/clouds/tests/test_gcp.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/clouds/tests/test_gcp.py 2022-05-18 19:44:15.000000000 +0000 @@ -5,7 +5,13 @@ import mock import pytest -from uaclient.clouds.gcp import TOKEN_URL, UAAutoAttachGCPInstance +from uaclient.clouds.gcp import ( + LAST_ETAG, + LICENSES_URL, + TOKEN_URL, + WAIT_FOR_CHANGE, + UAAutoAttachGCPInstance, +) M_PATH = "uaclient.clouds.gcp." 
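
The two abstract methods added to `AutoAttachCloudInstance` above define the interface the new auto-attach daemon depends on: `should_poll_for_pro_license()` is a cheap gate, and `is_pro_license_present()` performs the actual (optionally blocking) license check. The sketch below shows the minimal shape a cloud class now has to provide, mirroring the AWS and Azure "Unsupported" implementations and their tests above; `StubCloudInstance` and its trimmed-down base are illustrative stand-ins, since the real base class has further abstract members that this diff leaves untouched.

```python
# Minimal sketch of the new polling interface, mirroring the AWS/Azure stubs.
# StubCloudInstance is a hypothetical name used for illustration only.
import abc

from uaclient import exceptions


class StubCloudInstance(abc.ABC):
    def should_poll_for_pro_license(self) -> bool:
        # Clouds that cannot gain an Ubuntu Pro license in place opt out of
        # polling entirely, like the AWS and Azure implementations above.
        return False

    def is_pro_license_present(self, *, wait_for_change: bool) -> bool:
        # Only reachable when should_poll_for_pro_license() returns True;
        # unsupported clouds raise rather than answer.
        raise exceptions.InPlaceUpgradeNotSupportedError()
```

`UAAutoAttachGCPInstance`, whose implementation follows in the next hunk, is the one cloud that returns `True` here (on supported LTS series) and then answers `is_pro_license_present()` by querying the metadata server, using `wait_for_change=True` plus the stored ETag to long-poll for license changes; the daemon added later in this diff (`uaclient/daemon.py`) drives that loop.
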
@@ -102,3 +108,177 @@ instance = UAAutoAttachGCPInstance() assert viable is instance.is_viable + + @pytest.mark.parametrize( + "existing_etag, wait_for_change, metadata_response, platform_info," + " expected_etag, expected_result, expected_readurl", + ( + ( + None, + False, + ([], {}), + {"series": "xenial"}, + None, + False, + [ + mock.call( + LICENSES_URL, headers={"Metadata-Flavor": "Google"} + ) + ], + ), + ( + None, + False, + ([{"id": "8045211386737108299"}], {}), + {"series": "xenial"}, + None, + True, + [ + mock.call( + LICENSES_URL, headers={"Metadata-Flavor": "Google"} + ) + ], + ), + ( + None, + False, + ([{"id": "8045211386737108299"}], {}), + {"series": "bionic"}, + None, + False, + [ + mock.call( + LICENSES_URL, headers={"Metadata-Flavor": "Google"} + ) + ], + ), + ( + None, + False, + ([{"id": "6022427724719891830"}], {}), + {"series": "bionic"}, + None, + True, + [ + mock.call( + LICENSES_URL, headers={"Metadata-Flavor": "Google"} + ) + ], + ), + ( + None, + False, + ([{"id": "599959289349842382"}], {}), + {"series": "focal"}, + None, + True, + [ + mock.call( + LICENSES_URL, headers={"Metadata-Flavor": "Google"} + ) + ], + ), + ( + None, + False, + ([{"id": "8045211386737108299"}], {"ETag": "test-etag"}), + {"series": "xenial"}, + "test-etag", + True, + [ + mock.call( + LICENSES_URL, headers={"Metadata-Flavor": "Google"} + ) + ], + ), + ( + None, + False, + ([{"id": "wrong"}], {"ETag": "test-etag"}), + {"series": "xenial"}, + "test-etag", + False, + [ + mock.call( + LICENSES_URL, headers={"Metadata-Flavor": "Google"} + ) + ], + ), + ( + None, + True, + ([{"id": "8045211386737108299"}], {"ETag": "test-etag"}), + {"series": "xenial"}, + "test-etag", + True, + [ + mock.call( + LICENSES_URL + WAIT_FOR_CHANGE, + headers={"Metadata-Flavor": "Google"}, + ) + ], + ), + ( + "existing-etag", + True, + ([{"id": "8045211386737108299"}], {"ETag": "test-etag"}), + {"series": "xenial"}, + "test-etag", + True, + [ + mock.call( + LICENSES_URL + + WAIT_FOR_CHANGE + + LAST_ETAG.format(etag="existing-etag"), + headers={"Metadata-Flavor": "Google"}, + ) + ], + ), + ), + ) + @mock.patch(M_PATH + "util.get_platform_info") + @mock.patch(M_PATH + "util.readurl") + def test_is_license_present( + self, + m_readurl, + m_get_platform_info, + existing_etag, + wait_for_change, + metadata_response, + platform_info, + expected_etag, + expected_result, + expected_readurl, + ): + instance = UAAutoAttachGCPInstance() + instance.etag = existing_etag + m_readurl.return_value = metadata_response + m_get_platform_info.return_value = platform_info + + result = instance.is_pro_license_present( + wait_for_change=wait_for_change + ) + + assert expected_result == result + assert expected_etag == instance.etag + + assert expected_readurl == m_readurl.call_args_list + + @pytest.mark.parametrize( + "platform_info, expected_result", + ( + ({"series": "xenial"}, True), + ({"series": "bionic"}, True), + ({"series": "focal"}, True), + ({"series": "impish"}, False), + ({"series": "jammy"}, True), + ), + ) + @mock.patch(M_PATH + "util.get_platform_info") + def test_should_poll_for_license( + self, m_get_platform_info, platform_info, expected_result + ): + m_get_platform_info.return_value = platform_info + instance = UAAutoAttachGCPInstance() + result = instance.should_poll_for_pro_license() + assert expected_result == result diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/config.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/config.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/config.py 2022-04-14 
18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/config.py 2022-05-18 19:44:15.000000000 +0000 @@ -3,25 +3,15 @@ import logging import os import re -from collections import OrderedDict, namedtuple -from datetime import datetime, timezone -from functools import wraps -from typing import Any, Dict, List, Optional, Tuple, cast +from collections import namedtuple +from datetime import datetime +from functools import lru_cache, wraps +from typing import Any, Callable, Dict, Optional, Tuple, TypeVar import yaml -from uaclient import ( - apt, - event_logger, - exceptions, - messages, - snap, - status, - util, - version, -) +from uaclient import apt, event_logger, exceptions, messages, snap, util from uaclient.defaults import ( - ATTACH_FAIL_DATE_FORMAT, BASE_CONTRACT_URL, BASE_SECURITY_URL, CONFIG_DEFAULTS, @@ -29,36 +19,6 @@ DEFAULT_CONFIG_FILE, ) -DEFAULT_STATUS = { - "_doc": "Content provided in json response is currently considered" - " Experimental and may change", - "_schema_version": "0.1", - "version": version.get_version(), - "machine_id": None, - "attached": False, - "effective": None, - "expires": None, # TODO Will this break something? - "origin": None, - "services": [], - "execution_status": status.UserFacingConfigStatus.INACTIVE.value, - "execution_details": messages.NO_ACTIVE_OPERATIONS, - "notices": [], - "contract": { - "id": "", - "name": "", - "created_at": "", - "products": [], - "tech_support_level": status.UserFacingStatus.INAPPLICABLE.value, - }, - "account": { - "name": "", - "id": "", - "created_at": "", - "external_account_ids": [], - }, - "simulated": False, -} # type: Dict[str, Any] - LOG = logging.getLogger(__name__) PRIVATE_SUBDIR = "private" @@ -74,6 +34,10 @@ "https_proxy", "apt_http_proxy", "apt_https_proxy", + "ua_apt_http_proxy", + "ua_apt_https_proxy", + "global_apt_http_proxy", + "global_apt_https_proxy", "update_messaging_timer", "update_status_timer", "metering_timer", @@ -89,7 +53,7 @@ "security_url", "settings_overrides", "timer_log_file", - "license_check_log_file", + "daemon_log_file", "ua_config", ) @@ -100,6 +64,14 @@ event = event_logger.get_event_logger() +# needed for solving mypy errors dealing with _lru_cache_wrapper +# Found at https://github.com/python/mypy/issues/5858#issuecomment-454144705 +S = TypeVar("S", bound=str) + + +def str_cache(func: Callable[..., S]) -> S: + return lru_cache()(func) # type: ignore + class UAConfig: @@ -113,7 +85,6 @@ "marker-reboot-cmds": DataPath( "marker-reboot-cmds-required", False, False ), - "marker-license-check": DataPath("marker-license-check", False, True), "services-once-enabled": DataPath( "services-once-enabled", False, True ), @@ -123,15 +94,25 @@ _entitlements = None # caching to avoid repetitive file reads _machine_token = None # caching to avoid repetitive file reading _contract_expiry_datetime = None + ua_scoped_proxy_options = ("ua_apt_http_proxy", "ua_apt_https_proxy") + global_scoped_proxy_options = ( + "global_apt_http_proxy", + "global_apt_https_proxy", + ) + deprecated_global_scoped_proxy_options = ( + "apt_http_proxy", + "apt_https_proxy", + ) def __init__(self, cfg: Dict[str, Any] = None, series: str = None) -> None: """""" if cfg: self.cfg_path = None self.cfg = cfg + self.invalid_keys = None else: self.cfg_path = get_config_path() - self.cfg = parse_config(self.cfg_path) + self.cfg, self.invalid_keys = parse_config(self.cfg_path) self.series = series @@ -174,25 +155,71 @@ self.write_cfg() @property - def apt_http_proxy(self) -> Optional[str]: - return 
self.cfg.get("ua_config", {}).get("apt_http_proxy") + def ua_apt_https_proxy(self) -> Optional[str]: + return self.cfg.get("ua_config", {}).get("ua_apt_https_proxy") - @apt_http_proxy.setter - def apt_http_proxy(self, value: str): + @ua_apt_https_proxy.setter + def ua_apt_https_proxy(self, value: str): if "ua_config" not in self.cfg: self.cfg["ua_config"] = {} - self.cfg["ua_config"]["apt_http_proxy"] = value + self.cfg["ua_config"]["ua_apt_https_proxy"] = value self.write_cfg() @property - def apt_https_proxy(self) -> Optional[str]: - return self.cfg.get("ua_config", {}).get("apt_https_proxy") + def ua_apt_http_proxy(self) -> Optional[str]: + return self.cfg.get("ua_config", {}).get("ua_apt_http_proxy") + + @ua_apt_http_proxy.setter + def ua_apt_http_proxy(self, value: str): + if "ua_config" not in self.cfg: + self.cfg["ua_config"] = {} + self.cfg["ua_config"]["ua_apt_http_proxy"] = value + self.write_cfg() - @apt_https_proxy.setter - def apt_https_proxy(self, value: str): + @property # type: ignore + @str_cache + def global_apt_http_proxy(self) -> Optional[str]: + global_val = self.cfg.get("ua_config", {}).get("global_apt_http_proxy") + if global_val: + return global_val + + old_apt_val = self.cfg.get("ua_config", {}).get("apt_http_proxy") + if old_apt_val: + event.info(messages.WARNING_DEPRECATED_APT_HTTP) + return old_apt_val + return None + + @global_apt_http_proxy.setter + def global_apt_http_proxy(self, value: str): if "ua_config" not in self.cfg: self.cfg["ua_config"] = {} - self.cfg["ua_config"]["apt_https_proxy"] = value + self.cfg["ua_config"]["global_apt_http_proxy"] = value + self.cfg["ua_config"]["apt_http_proxy"] = None + UAConfig.global_apt_http_proxy.fget.cache_clear() # type: ignore + self.write_cfg() + + @property # type: ignore + @str_cache + def global_apt_https_proxy(self) -> Optional[str]: + global_val = self.cfg.get("ua_config", {}).get( + "global_apt_https_proxy" + ) + if global_val: + return global_val + + old_apt_val = self.cfg.get("ua_config", {}).get("apt_https_proxy") + if old_apt_val: + event.info(messages.WARNING_DEPRECATED_APT_HTTPS) + return old_apt_val + return None + + @global_apt_https_proxy.setter + def global_apt_https_proxy(self, value: str): + if "ua_config" not in self.cfg: + self.cfg["ua_config"] = {} + self.cfg["ua_config"]["global_apt_https_proxy"] = value + self.cfg["ua_config"]["apt_https_proxy"] = None + UAConfig.global_apt_https_proxy.fget.cache_clear() # type: ignore self.write_cfg() @property @@ -228,6 +255,35 @@ self.cfg["ua_config"]["metering_timer"] = value self.write_cfg() + @property + def poll_for_pro_license(self) -> bool: + # TODO: when polling is supported + # 1. change default here to True + # 2. add this field to UA_CONFIGURABLE_KEYS + return self.cfg.get("ua_config", {}).get("poll_for_pro_license", False) + + @poll_for_pro_license.setter + def poll_for_pro_license(self, value: bool): + if "ua_config" not in self.cfg: + self.cfg["ua_config"] = {} + self.cfg["ua_config"]["poll_for_pro_license"] = value + self.write_cfg() + + @property + def polling_error_retry_delay(self) -> int: + # TODO: when polling is supported + # 1. 
add this field to UA_CONFIGURABLE_KEYS + return self.cfg.get("ua_config", {}).get( + "polling_error_retry_delay", 600 + ) + + @polling_error_retry_delay.setter + def polling_error_retry_delay(self, value: int): + if "ua_config" not in self.cfg: + self.cfg["ua_config"] = {} + self.cfg["ua_config"]["polling_error_retry_delay"] = value + self.write_cfg() + def check_lock_info(self) -> Tuple[int, str]: """Return lock info if config lock file is present the lock is active. @@ -319,24 +375,33 @@ ) @property - def license_check_log_file(self): + def daemon_log_file(self): return self.cfg.get( - "license_check_log_file", CONFIG_DEFAULTS["license_check_log_file"] + "daemon_log_file", CONFIG_DEFAULTS["daemon_log_file"] ) @property def entitlements(self): + """Return configured entitlements keyed by entitlement named""" + if self._entitlements: + return self._entitlements + if not self.machine_token: + return {} + self._entitlements = self.get_entitlements_from_token( + self.machine_token + ) + return self._entitlements + + @staticmethod + def get_entitlements_from_token(machine_token: Dict): """Return a dictionary of entitlements keyed by entitlement name. Return an empty dict if no entitlements are present. """ - if self._entitlements: - return self._entitlements - machine_token = self.machine_token if not machine_token: return {} - self._entitlements = {} + entitlements = {} contractInfo = machine_token.get("machineTokenInfo", {}).get( "contractInfo" ) @@ -357,9 +422,9 @@ entitlement_cfg["resourceToken"] = tokens_by_name[ entitlement_name ] - util.apply_contract_overrides(entitlement_cfg, self.series) - self._entitlements[entitlement_name] = entitlement_cfg - return self._entitlements + util.apply_contract_overrides(entitlement_cfg) + entitlements[entitlement_name] = entitlement_cfg + return entitlements @property def contract_expiry_datetime(self) -> datetime: @@ -568,456 +633,6 @@ mode = 0o644 util.write_file(filepath, content, mode=mode) - def _handle_beta_resources(self, show_beta, response) -> Dict[str, Any]: - """Remove beta services from response dict if needed""" - from uaclient.entitlements import entitlement_factory - - config_allow_beta = util.is_config_value_true( - config=self.cfg, path_to_value="features.allow_beta" - ) - show_beta |= config_allow_beta - if show_beta: - return response - - new_response = copy.deepcopy(response) - - released_resources = [] - for resource in new_response.get("services", {}): - resource_name = resource["name"] - try: - ent_cls = entitlement_factory(resource_name) - except exceptions.EntitlementNotFoundError: - """ - Here we cannot know the status of a service, - since it is not listed as a valid entitlement. - Therefore, we keep this service in the list, since - we cannot validate if it is a beta service or not. - """ - released_resources.append(resource) - continue - - enabled_status = status.UserFacingStatus.ACTIVE.value - if ( - not ent_cls.is_beta - or resource.get("status", "") == enabled_status - ): - released_resources.append(resource) - - if released_resources: - new_response["services"] = released_resources - - return new_response - - def _get_config_status(self) -> Dict[str, Any]: - """Return a dict with execution_status, execution_details and notices. - - Values for execution_status will be one of UserFacingConfigStatus - enum: - inactive, active, reboot-required - execution_details will provide more details about that state. - notices is a list of tuples with label and description items. 
- """ - userStatus = status.UserFacingConfigStatus - status_val = userStatus.INACTIVE.value - status_desc = messages.NO_ACTIVE_OPERATIONS - (lock_pid, lock_holder) = self.check_lock_info() - notices = self.read_cache("notices") or [] - if lock_pid > 0: - status_val = userStatus.ACTIVE.value - status_desc = messages.LOCK_HELD.format( - pid=lock_pid, lock_holder=lock_holder - ).msg - elif os.path.exists(self.data_path("marker-reboot-cmds")): - status_val = userStatus.REBOOTREQUIRED.value - operation = "configuration changes" - for label, description in notices: - if label == "Reboot required": - operation = description - break - status_desc = messages.ENABLE_REBOOT_REQUIRED_TMPL.format( - operation=operation - ) - return { - "execution_status": status_val, - "execution_details": status_desc, - "notices": notices, - "config_path": self.cfg_path, - "config": self.cfg, - } - - def _unattached_status(self) -> Dict[str, Any]: - """Return unattached status as a dict.""" - from uaclient.contract import get_available_resources - from uaclient.entitlements import entitlement_factory - - response = copy.deepcopy(DEFAULT_STATUS) - response["version"] = version.get_version(features=self.features) - - resources = get_available_resources(self) - for resource in resources: - if resource.get("available"): - available = status.UserFacingAvailability.AVAILABLE.value - else: - available = status.UserFacingAvailability.UNAVAILABLE.value - try: - ent_cls = entitlement_factory(resource.get("name", "")) - except exceptions.EntitlementNotFoundError: - LOG.debug( - "Ignoring availability of unknown service %s" - " from contract server", - resource.get("name", "without a 'name' key"), - ) - continue - - response["services"].append( - { - "name": resource.get("presentedAs", resource["name"]), - "description": ent_cls.description, - "available": available, - } - ) - response["services"].sort(key=lambda x: x.get("name", "")) - - return response - - def _attached_service_status( - self, ent, inapplicable_resources - ) -> Dict[str, Any]: - status_details = "" - description_override = None - contract_status = ent.contract_status() - if contract_status == status.ContractStatus.UNENTITLED: - ent_status = status.UserFacingStatus.UNAVAILABLE - else: - if ent.name in inapplicable_resources: - ent_status = status.UserFacingStatus.INAPPLICABLE - description_override = inapplicable_resources[ent.name] - else: - ent_status, details = ent.user_facing_status() - if details: - status_details = details.msg - - blocked_by = [ - { - "name": service.entitlement.name, - "reason_code": service.named_msg.name, - "reason": service.named_msg.msg, - } - for service in ent.blocking_incompatible_services() - ] - - return { - "name": ent.presentation_name, - "description": ent.description, - "entitled": contract_status.value, - "status": ent_status.value, - "status_details": status_details, - "description_override": description_override, - "available": "yes" - if ent.name not in inapplicable_resources - else "no", - "blocked_by": blocked_by, - } - - def _attached_status(self) -> Dict[str, Any]: - """Return configuration of attached status as a dictionary.""" - from uaclient.contract import get_available_resources - from uaclient.entitlements import entitlement_factory - - response = copy.deepcopy(DEFAULT_STATUS) - machineTokenInfo = self.machine_token["machineTokenInfo"] - contractInfo = machineTokenInfo["contractInfo"] - tech_support_level = status.UserFacingStatus.INAPPLICABLE.value - response.update( - { - "version": 
version.get_version(features=self.features), - "machine_id": machineTokenInfo["machineId"], - "attached": True, - "origin": contractInfo.get("origin"), - "notices": self.read_cache("notices") or [], - "contract": { - "id": contractInfo["id"], - "name": contractInfo["name"], - "created_at": contractInfo.get("createdAt", ""), - "products": contractInfo.get("products", []), - "tech_support_level": tech_support_level, - }, - "account": { - "name": self.accounts[0]["name"], - "id": self.accounts[0]["id"], - "created_at": self.accounts[0].get("createdAt", ""), - "external_account_ids": self.accounts[0].get( - "externalAccountIDs", [] - ), - }, - } - ) - if contractInfo.get("effectiveTo"): - response["expires"] = self.contract_expiry_datetime - if contractInfo.get("effectiveFrom"): - response["effective"] = contractInfo["effectiveFrom"] - - resources = self.machine_token.get("availableResources") - if not resources: - resources = get_available_resources(self) - - inapplicable_resources = { - resource["name"]: resource.get("description") - for resource in sorted(resources, key=lambda x: x.get("name", "")) - if not resource.get("available") - } - - for resource in resources: - try: - ent_cls = entitlement_factory(resource.get("name", "")) - except exceptions.EntitlementNotFoundError: - continue - ent = ent_cls(self) - response["services"].append( - self._attached_service_status(ent, inapplicable_resources) - ) - response["services"].sort(key=lambda x: x.get("name", "")) - - support = self.entitlements.get("support", {}).get("entitlement") - if support: - supportLevel = support.get("affordances", {}).get("supportLevel") - if supportLevel: - response["contract"]["tech_support_level"] = supportLevel - return response - - def _get_entitlement_information( - self, entitlements: List[Dict[str, Any]], entitlement_name: str - ) -> Dict[str, Any]: - """Extract information from the entitlements array.""" - for entitlement in entitlements: - if entitlement.get("type") == entitlement_name: - return { - "entitled": "yes" if entitlement.get("entitled") else "no", - "auto_enabled": "yes" - if entitlement.get("obligations", {}).get( - "enableByDefault" - ) - else "no", - "affordances": entitlement.get("affordances", {}), - } - return {"entitled": "no", "auto_enabled": "no", "affordances": {}} - - def simulate_status( - self, token: str, show_beta: bool = False - ) -> Tuple[Dict[str, Any], int]: - """Get a status dictionary based on a token. 
- - Returns a tuple with the status dictionary and an integer value - 0 for - success, 1 for failure - """ - from uaclient.contract import ( - get_available_resources, - get_contract_information, - ) - from uaclient.entitlements import entitlement_factory - - ret = 0 - response = copy.deepcopy(DEFAULT_STATUS) - - try: - contract_information = get_contract_information(self, token) - except exceptions.ContractAPIError as e: - if hasattr(e, "code") and e.code == 401: - raise exceptions.UserFacingError( - msg=messages.ATTACH_INVALID_TOKEN.msg, - msg_code=messages.ATTACH_INVALID_TOKEN.name, - ) - raise e - - contract_info = contract_information.get("contractInfo", {}) - account_info = contract_information.get("accountInfo", {}) - - response.update( - { - "version": version.get_version(features=self.features), - "contract": { - "id": contract_info.get("id", ""), - "name": contract_info.get("name", ""), - "created_at": contract_info.get("createdAt", ""), - "products": contract_info.get("products", []), - }, - "account": { - "name": account_info.get("name", ""), - "id": account_info.get("id"), - "created_at": account_info.get("createdAt", ""), - "external_account_ids": account_info.get( - "externalAccountIDs", [] - ), - }, - "simulated": True, - } - ) - - now = datetime.now(timezone.utc) - if contract_info.get("effectiveTo"): - response["expires"] = contract_info.get("effectiveTo") - expiration_datetime = util.parse_rfc3339_date(response["expires"]) - delta = expiration_datetime - now - if delta.total_seconds() <= 0: - message = messages.ATTACH_FORBIDDEN_EXPIRED.format( - contract_id=response["contract"]["id"], - date=expiration_datetime.strftime(ATTACH_FAIL_DATE_FORMAT), - ) - event.error(error_msg=message.msg, error_code=message.name) - event.info("This token is not valid.\n" + message.msg + "\n") - ret = 1 - if contract_info.get("effectiveFrom"): - response["effective"] = contract_info.get("effectiveFrom") - effective_datetime = util.parse_rfc3339_date(response["effective"]) - delta = now - effective_datetime - if delta.total_seconds() <= 0: - message = messages.ATTACH_FORBIDDEN_NOT_YET.format( - contract_id=response["contract"]["id"], - date=effective_datetime.strftime(ATTACH_FAIL_DATE_FORMAT), - ) - event.error(error_msg=message.msg, error_code=message.name) - event.info("This token is not valid.\n" + message.msg + "\n") - ret = 1 - - status_cache = self.read_cache("status-cache") - if status_cache: - resources = status_cache.get("services") - else: - resources = get_available_resources(self) - - entitlements = contract_info.get("resourceEntitlements", []) - - inapplicable_resources = [ - resource["name"] - for resource in sorted(resources, key=lambda x: x["name"]) - if not resource["available"] - ] - - for resource in resources: - entitlement_name = resource.get("name", "") - try: - ent_cls = entitlement_factory(entitlement_name) - except exceptions.EntitlementNotFoundError: - continue - ent = ent_cls(self) - entitlement_information = self._get_entitlement_information( - entitlements, entitlement_name - ) - response["services"].append( - { - "name": resource.get("presentedAs", ent.name), - "description": ent.description, - "entitled": entitlement_information["entitled"], - "auto_enabled": entitlement_information["auto_enabled"], - "available": "yes" - if ent.name not in inapplicable_resources - else "no", - } - ) - response["services"].sort(key=lambda x: x.get("name", "")) - - support = self._get_entitlement_information(entitlements, "support") - if support["entitled"]: - supportLevel = 
support["affordances"].get("supportLevel") - if supportLevel: - response["contract"]["tech_support_level"] = supportLevel - - response.update(self._get_config_status()) - response = self._handle_beta_resources(show_beta, response) - - return response, ret - - def status(self, show_beta: bool = False) -> Dict[str, Any]: - """Return status as a dict, using a cache for non-root users - - When unattached, get available resources from the contract service - to report detailed availability of different resources for this - machine. - - Write the status-cache when called by root. - """ - if os.getuid() != 0: - response = cast("Dict[str, Any]", self.read_cache("status-cache")) - if not response: - response = self._unattached_status() - elif not self.is_attached: - response = self._unattached_status() - else: - response = self._attached_status() - - response.update(self._get_config_status()) - - if os.getuid() == 0: - self.write_cache("status-cache", response) - - # Try to remove fix reboot notices if not applicable - if not util.should_reboot(): - self.remove_notice( - "", - messages.ENABLE_REBOOT_REQUIRED_TMPL.format( - operation="fix operation" - ), - ) - - response = self._handle_beta_resources(show_beta, response) - - return response - - def help(self, name): - """Return help information from an uaclient service as a dict - - :param name: Name of the service for which to return help data. - - :raises: UserFacingError when no help is available. - """ - from uaclient.contract import get_available_resources - from uaclient.entitlements import entitlement_factory - - resources = get_available_resources(self) - help_resource = None - - # We are using an OrderedDict here to guarantee - # that if we need to print the result of this - # dict, the order of insertion will always be respected - response_dict = OrderedDict() - response_dict["name"] = name - - for resource in resources: - if resource["name"] == name or resource.get("presentedAs") == name: - try: - help_ent_cls = entitlement_factory(resource["name"]) - except exceptions.EntitlementNotFoundError: - continue - help_resource = resource - help_ent = help_ent_cls(self) - break - - if help_resource is None: - raise exceptions.UserFacingError( - "No help available for '{}'".format(name) - ) - - if self.is_attached: - service_status = self._attached_service_status(help_ent, {}) - status_msg = service_status["status"] - - response_dict["entitled"] = service_status["entitled"] - response_dict["status"] = status_msg - - if status_msg == "enabled" and help_ent_cls.is_beta: - response_dict["beta"] = True - - else: - if help_resource["available"]: - available = status.UserFacingAvailability.AVAILABLE.value - else: - available = status.UserFacingAvailability.UNAVAILABLE.value - - response_dict["available"] = available - - response_dict["help"] = help_ent.help_info - return response_dict - def process_config(self): for prop in ( "update_messaging_timer", @@ -1036,11 +651,31 @@ ).format(prop) raise exceptions.UserFacingError(error_msg) + if (self.global_apt_http_proxy or self.global_apt_https_proxy) and ( + self.ua_apt_http_proxy or self.ua_apt_https_proxy + ): + # Should we unset the config values? 
+ raise exceptions.UserFacingError( + messages.ERROR_PROXY_CONFIGURATION + ) + + util.validate_proxy( + "http", + self.global_apt_http_proxy, + util.PROXY_VALIDATION_APT_HTTP_URL, + ) util.validate_proxy( - "http", self.apt_http_proxy, util.PROXY_VALIDATION_APT_HTTP_URL + "https", + self.global_apt_https_proxy, + util.PROXY_VALIDATION_APT_HTTPS_URL, ) util.validate_proxy( - "https", self.apt_https_proxy, util.PROXY_VALIDATION_APT_HTTPS_URL + "http", self.ua_apt_http_proxy, util.PROXY_VALIDATION_APT_HTTP_URL + ) + util.validate_proxy( + "https", + self.ua_apt_https_proxy, + util.PROXY_VALIDATION_APT_HTTPS_URL, ) util.validate_proxy( "http", self.http_proxy, util.PROXY_VALIDATION_SNAP_HTTP_URL @@ -1049,7 +684,18 @@ "https", self.https_proxy, util.PROXY_VALIDATION_SNAP_HTTPS_URL ) - apt.setup_apt_proxy(self.apt_http_proxy, self.apt_https_proxy) + if self.global_apt_http_proxy or self.global_apt_https_proxy: + apt.setup_apt_proxy( + self.global_apt_http_proxy, + self.global_apt_https_proxy, + apt.AptProxyScope.GLOBAL, + ) + elif self.ua_apt_http_proxy or self.ua_apt_https_proxy: + apt.setup_apt_proxy( + self.ua_apt_http_proxy, + self.ua_apt_https_proxy, + apt.AptProxyScope.UACLIENT, + ) services_with_proxies = [] if snap.is_installed(): @@ -1064,11 +710,12 @@ services_with_proxies.append("snap") from uaclient.entitlements import livepatch + from uaclient.entitlements.entitlement_status import ApplicationStatus livepatch_ent = livepatch.LivepatchEntitlement() livepatch_status, _ = livepatch_ent.application_status() - if livepatch_status == status.ApplicationStatus.ENABLED: + if livepatch_status == ApplicationStatus.ENABLED: livepatch.configure_livepatch_proxy( self.http_proxy, self.https_proxy ) @@ -1087,7 +734,7 @@ if len(services_with_proxies) > 0: services = ", ".join(services_with_proxies) - event.info( + print( messages.PROXY_DETECTED_BUT_NOT_CONFIGURED.format( services=services ) @@ -1108,19 +755,26 @@ "data_dir", "log_file", "timer_log_file", - "license_check_log_file", + "daemon_log_file", ): cfg_dict[attr] = getattr(self, attr) # Each UA_CONFIGURABLE_KEY needs to have a property on UAConfig # which reads the proper key value or returns a default cfg_dict["ua_config"] = { - key: getattr(self, key) for key in UA_CONFIGURABLE_KEYS + key: getattr(self, key, None) for key in UA_CONFIGURABLE_KEYS } content += yaml.dump(cfg_dict, default_flow_style=False) util.write_file(config_path, content) + def warn_about_invalid_keys(self): + if self.invalid_keys is not None: + for invalid_key in sorted(self.invalid_keys): + logging.warning( + "Ignoring invalid uaclient.conf key: %s", invalid_key + ) + def get_config_path() -> str: """Get config path to be used when loading config dict.""" @@ -1193,13 +847,12 @@ raise exceptions.UserFacingError( "Invalid url in config. 
{}: {}".format(key, cfg[key]) ) - # log about invalid keys before ignoring - for key in sorted(set(cfg.keys()).difference(VALID_UA_CONFIG_KEYS)): - logging.warning( - "Ignoring invalid uaclient.conf key: %s=%s", key, cfg.pop(key) - ) - return cfg + invalid_keys = set(cfg.keys()).difference(VALID_UA_CONFIG_KEYS) + for invalid_key in invalid_keys: + cfg.pop(invalid_key) + + return cfg, invalid_keys def apply_config_settings_override(override_key: str): @@ -1218,7 +871,7 @@ def wrapper(f): @wraps(f) def new_f(): - cfg = parse_config() + cfg, _ = parse_config() value_override = cfg.get("settings_overrides", {}).get( override_key, UNSET_SETTINGS_OVERRIDE_KEY ) diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/conftest.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/conftest.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/conftest.py 2022-04-01 13:27:49.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/conftest.py 2022-05-18 19:44:15.000000000 +0000 @@ -41,8 +41,7 @@ Specifically, bionic is the first Ubuntu release to contain a version of pytest new enough for the caplog fixture to be present. In xenial, the python3-pytest-catchlog package provides the same functionality (this is - the code that was later integrated in to pytest). For trusty, there is no - packaged alternative to this shim. + the code that was later integrated in to pytest). (It returns a function so that the requester can decide when to examine the logs; if it returned caplog.text directly, that would always be empty.) diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/contract.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/contract.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/contract.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/contract.py 2022-05-18 19:44:15.000000000 +0000 @@ -11,7 +11,7 @@ ) from uaclient.config import UAConfig from uaclient.defaults import ATTACH_FAIL_DATE_FORMAT -from uaclient.status import UserFacingStatus +from uaclient.entitlements.entitlement_status import UserFacingStatus API_V1_CONTEXT_MACHINE_TOKEN = "/v1/context/machines/token" API_V1_TMPL_CONTEXT_MACHINE_TOKEN_RESOURCE = ( @@ -177,6 +177,36 @@ machine_token["activityInfo"] = response self.cfg.write_cache("machine-token", machine_token) + def get_updated_contract_info( + self, + machine_token: str, + contract_id: str, + machine_id: Optional[str] = None, + ) -> Dict[str, Any]: + """Get the updated machine token from the contract server. + + @param machine_token: The machine token needed to talk to + this contract service endpoint. + @param contract_id: Unique contract id provided by contract service + @param machine_id: Optional unique system machine id. When absent, + contents of /etc/machine-id will be used. 
+ """ + if not machine_id: + machine_id = self._get_platform_data(machine_id).get( + "machineId", None + ) + headers = self.headers() + headers.update({"Authorization": "Bearer {}".format(machine_token)}) + url = API_V1_TMPL_CONTEXT_MACHINE_TOKEN_RESOURCE.format( + contract=contract_id, machine=machine_id + ) + response, headers = self.request_url( + url, method="GET", headers=headers + ) + if headers.get("expires"): + response["expires"] = headers["expires"] + return response + def _request_machine_token_update( self, machine_token: str, @@ -259,6 +289,7 @@ def process_entitlements_delta( + cfg: UAConfig, past_entitlements: Dict[str, Any], new_entitlements: Dict[str, Any], allow_enable: bool, @@ -267,6 +298,7 @@ """Iterate over all entitlements in new_entitlement and apply any delta found according to past_entitlements. + :param cfg: UAConfig instance :param past_entitlements: dict containing the last valid information regarding service entitlements. :param new_entitlements: dict containing the current information regarding @@ -282,8 +314,9 @@ for name, new_entitlement in sorted(new_entitlements.items()): try: deltas, service_enabled = process_entitlement_delta( - past_entitlements.get(name, {}), - new_entitlement, + cfg=cfg, + orig_access=past_entitlements.get(name, {}), + new_access=new_entitlement, allow_enable=allow_enable, series_overrides=series_overrides, ) @@ -323,6 +356,7 @@ def process_entitlement_delta( + cfg: UAConfig, orig_access: Dict[str, Any], new_access: Dict[str, Any], allow_enable: bool = False, @@ -330,6 +364,7 @@ ) -> Tuple[Dict, bool]: """Process a entitlement access dictionary deltas if they exist. + :param cfg: UAConfig instance :param orig_access: Dict with original entitlement access details before contract refresh deltas :param new_access: Dict with updated entitlement access details after @@ -361,14 +396,14 @@ ) raise exceptions.UserFacingError(msg=msg.msg, msg_code=msg.name) try: - ent_cls = entitlement_factory(name) + ent_cls = entitlement_factory(cfg=cfg, name=name) except exceptions.EntitlementNotFoundError as exc: logging.debug( 'Skipping entitlement deltas for "%s". 
No such class', name ) raise exc - entitlement = ent_cls(assume_yes=allow_enable) + entitlement = ent_cls(cfg=cfg, assume_yes=allow_enable) ret = entitlement.process_contract_deltas( orig_access, deltas, allow_enable=allow_enable ) @@ -376,7 +411,7 @@ def _create_attach_forbidden_message( - e: exceptions.ContractAPIError + e: exceptions.ContractAPIError, ) -> messages.NamedMessage: msg = messages.ATTACH_EXPIRED_TOKEN if ( @@ -391,14 +426,24 @@ if reason == "no-longer-effective": date = info["time"].strftime(ATTACH_FAIL_DATE_FORMAT) + additional_info = { + "contract_expiry_date": info["time"].strftime("%m-%d-%Y"), + "contract_id": contract_id, + } reason_msg = messages.ATTACH_FORBIDDEN_EXPIRED.format( contract_id=contract_id, date=date ) + reason_msg.additional_info = additional_info elif reason == "not-effective-yet": date = info["time"].strftime(ATTACH_FAIL_DATE_FORMAT) + additional_info = { + "contract_effective_date": info["time"].strftime("%m-%d-%Y"), + "contract_id": contract_id, + } reason_msg = messages.ATTACH_FORBIDDEN_NOT_YET.format( contract_id=contract_id, date=date ) + reason_msg.additional_info = additional_info elif reason == "never-effective": reason_msg = messages.ATTACH_FORBIDDEN_NEVER.format( contract_id=contract_id @@ -407,6 +452,7 @@ if reason_msg: msg = messages.ATTACH_FORBIDDEN.format(reason=reason_msg.msg) msg.name = reason_msg.name + msg.additional_info = reason_msg.additional_info return msg @@ -447,7 +493,9 @@ elif e.code == 403: msg = _create_attach_forbidden_message(e) raise exceptions.UserFacingError( - msg=msg.msg, msg_code=msg.name + msg=msg.msg, + msg_code=msg.name, + additional_info=msg.additional_info, ) raise e with util.disable_log_to_console(): @@ -464,7 +512,7 @@ ) process_entitlements_delta( - orig_entitlements, cfg.entitlements, allow_enable + cfg, orig_entitlements, cfg.entitlements, allow_enable ) @@ -479,3 +527,38 @@ """Query contract information for a specific token""" client = UAContractClient(cfg) return client.request_contract_information(token) + + +def is_contract_changed(cfg: UAConfig) -> bool: + orig_token = cfg.machine_token + orig_entitlements = cfg.entitlements + machine_token = orig_token.get("machineToken", "") + contract_id = ( + orig_token.get("machineTokenInfo", {}) + .get("contractInfo", {}) + .get("id", "") + ) + contract_client = UAContractClient(cfg) + resp = contract_client.get_updated_contract_info( + machine_token, contract_id + ) + resp_expiry = ( + resp.get("machineTokenInfo", {}) + .get("contractInfo", {}) + .get("effectiveTo", None) + ) + new_expiry = ( + util.parse_rfc3339_date(resp_expiry) + if resp_expiry + else cfg.contract_expiry_datetime + ) + if cfg.contract_expiry_datetime != new_expiry: + return True + curr_entitlements = cfg.get_entitlements_from_token(resp) + for name, new_entitlement in sorted(curr_entitlements.items()): + deltas = util.get_dict_deltas( + orig_entitlements.get(name, {}), new_entitlement + ) + if deltas: + return True + return False diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/daemon.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/daemon.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/daemon.py 1970-01-01 00:00:00.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/daemon.py 2022-05-18 19:44:15.000000000 +0000 @@ -0,0 +1,128 @@ +import logging +import time + +from uaclient import actions, exceptions, lock, messages, util +from uaclient.clouds import AutoAttachCloudInstance +from uaclient.clouds.gcp import UAAutoAttachGCPInstance +from uaclient.clouds.identity import 
cloud_instance_factory +from uaclient.config import UAConfig + +LOG = logging.getLogger("ua.daemon") + + +def start(): + try: + util.subp(["systemctl", "start", "ubuntu-advantage.service"]) + except exceptions.ProcessExecutionError as e: + LOG.warning(e) + + +def stop(): + try: + util.subp(["systemctl", "stop", "ubuntu-advantage.service"]) + except exceptions.ProcessExecutionError as e: + LOG.warning(e) + + +def attempt_auto_attach(cfg: UAConfig, cloud: AutoAttachCloudInstance): + try: + with lock.SpinLock( + cfg=cfg, lock_holder="ua.daemon.attempt_auto_attach" + ): + actions.auto_attach(cfg, cloud) + except exceptions.LockHeldError as e: + LOG.error(e) + cfg.add_notice( + "", + messages.NOTICE_DAEMON_AUTO_ATTACH_LOCK_HELD.format( + operation=e.lock_holder + ), + ) + LOG.debug("Failed to auto attach") + return + except Exception as e: + LOG.exception(e) + cfg.add_notice("", messages.NOTICE_DAEMON_AUTO_ATTACH_FAILED) + lock.clear_lock_file_if_present() + LOG.debug("Failed to auto attach") + return + LOG.debug("Successful auto attach") + + +def poll_for_pro_license(cfg: UAConfig): + if util.is_config_value_true( + config=cfg.cfg, path_to_value="features.disable_auto_attach" + ): + LOG.debug("Configured to not auto attach, shutting down") + return + if cfg.is_attached: + LOG.debug("Already attached, shutting down") + return + if not util.is_current_series_lts(): + LOG.debug("Not on LTS, shutting down") + return + + try: + cloud = cloud_instance_factory() + except exceptions.CloudFactoryError: + LOG.debug("Not on cloud, shutting down") + return + + if not isinstance(cloud, UAAutoAttachGCPInstance): + LOG.debug("Not on gcp, shutting down") + return + + if not cloud.should_poll_for_pro_license(): + LOG.debug("Not on supported instance, shutting down") + return + + try: + pro_license_present = cloud.is_pro_license_present( + wait_for_change=False + ) + except exceptions.CancelProLicensePolling: + LOG.debug("Cancelling polling") + return + except exceptions.DelayProLicensePolling: + # Continue to polling loop anyway and handle error there if it occurs + # again + pass + else: + if pro_license_present: + attempt_auto_attach(cfg, cloud) + return + + if not cfg.poll_for_pro_license: + LOG.debug("Configured to not poll for pro license, shutting down") + return + + while True: + try: + start = time.time() + pro_license_present = cloud.is_pro_license_present( + wait_for_change=True + ) + end = time.time() + except exceptions.CancelProLicensePolling: + LOG.debug("Cancelling polling") + return + except exceptions.DelayProLicensePolling: + time.sleep(cfg.polling_error_retry_delay) + continue + else: + if cfg.is_attached: + # This could have changed during the long poll or sleep + LOG.debug("Already attached, shutting down") + return + if pro_license_present: + attempt_auto_attach(cfg, cloud) + return + if end - start < 10: + LOG.debug( + "wait_for_change returned quickly and no pro license" + " present. 
Waiting {} seconds before polling again".format( + cfg.polling_error_retry_delay + ) + ) + time.sleep(cfg.polling_error_retry_delay) + continue diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/defaults.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/defaults.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/defaults.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/defaults.py 2022-05-18 19:44:15.000000000 +0000 @@ -24,6 +24,9 @@ CONTRACT_EXPIRY_GRACE_PERIOD_DAYS = 14 CONTRACT_EXPIRY_PENDING_DAYS = 20 ATTACH_FAIL_DATE_FORMAT = "%B %d, %Y" +DEFAULT_LOG_FORMAT = ( + "%(asctime)s - %(filename)s:(%(lineno)d) [%(levelname)s]: %(message)s" +) CONFIG_DEFAULTS = { "contract_url": BASE_CONTRACT_URL, @@ -32,14 +35,14 @@ "log_level": "INFO", "log_file": "/var/log/ubuntu-advantage.log", "timer_log_file": "/var/log/ubuntu-advantage-timer.log", - "license_check_log_file": "/var/log/ubuntu-advantage-license-check.log", + "daemon_log_file": "/var/log/ubuntu-advantage-daemon.log", } CONFIG_FIELD_ENVVAR_ALLOWLIST = [ "ua_data_dir", "ua_log_file", "ua_timer_log_file", - "ua_license_check_log_file", + "ua_daemon_log_file", "ua_log_level", "ua_security_url", ] diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/base.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/base.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/base.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/base.py 2022-05-18 19:44:15.000000000 +0000 @@ -8,18 +8,11 @@ import yaml -from uaclient import ( - config, - contract, - event_logger, - exceptions, - messages, - status, - util, -) +from uaclient import config, contract, event_logger, exceptions, messages, util from uaclient.defaults import DEFAULT_HELP_FILE -from uaclient.status import ( +from uaclient.entitlements.entitlement_status import ( ApplicabilityStatus, + ApplicationStatus, CanDisableFailure, CanDisableFailureReason, CanEnableFailure, @@ -162,7 +155,7 @@ # Any custom messages to emit to the console or callables which are # handled at pre_enable, pre_disable, pre_install or post_enable stages @property - def messaging(self,) -> MessagingOperationsDict: + def messaging(self) -> MessagingOperationsDict: return {} def __init__( @@ -274,7 +267,7 @@ """ application_status, _ = self.application_status() - if application_status == status.ApplicationStatus.DISABLED: + if application_status == ApplicationStatus.DISABLED: return ( False, CanDisableFailure( @@ -319,7 +312,7 @@ ) application_status, _ = self.application_status() - if application_status != status.ApplicationStatus.DISABLED: + if application_status != ApplicationStatus.DISABLED: return ( False, CanEnableFailure( @@ -332,7 +325,7 @@ return (False, CanEnableFailure(CanEnableFailureReason.IS_BETA)) applicability_status, details = self.applicability_status() - if applicability_status == status.ApplicabilityStatus.INAPPLICABLE: + if applicability_status == ApplicabilityStatus.INAPPLICABLE: return ( False, CanEnableFailure( @@ -368,11 +361,11 @@ for service in services: try: - ent_cls = entitlement_factory(service) + ent_cls = entitlement_factory(cfg=self.cfg, name=service) except EntitlementNotFoundError: continue ent_status, _ = ent_cls(self.cfg).application_status() - if ent_status == status.ApplicationStatus.ENABLED: + if ent_status == ApplicationStatus.ENABLED: return True return False @@ -401,9 +394,11 @@ for required_service in self.required_services: try: - ent_cls = 
entitlement_factory(required_service) + ent_cls = entitlement_factory( + cfg=self.cfg, name=required_service + ) ent_status, _ = ent_cls(self.cfg).application_status() - if ent_status != status.ApplicationStatus.ENABLED: + if ent_status != ApplicationStatus.ENABLED: return False except exceptions.EntitlementNotFoundError: pass @@ -417,7 +412,7 @@ ret = [] for service in self.incompatible_services: ent_status, _ = service.entitlement(self.cfg).application_status() - if ent_status == status.ApplicationStatus.ENABLED: + if ent_status == ApplicationStatus.ENABLED: ret.append(service) return ret @@ -433,7 +428,7 @@ return len(self.blocking_incompatible_services()) > 0 def handle_incompatible_services( - self + self, ) -> Tuple[bool, Optional[messages.NamedMessage]]: """ Prompt user when incompatible services are found during enable. @@ -486,7 +481,7 @@ return True, None def _enable_required_services( - self + self, ) -> Tuple[bool, Optional[messages.NamedMessage]]: """ Prompt user when required services are found during enable. @@ -499,7 +494,9 @@ for required_service in self.required_services: try: - ent_cls = entitlement_factory(required_service) + ent_cls = entitlement_factory( + cfg=self.cfg, name=required_service + ) except exceptions.EntitlementNotFoundError: msg = messages.REQUIRED_SERVICE_NOT_FOUND.format( service=required_service @@ -509,8 +506,7 @@ ent = ent_cls(self.cfg, allow_beta=True) is_service_disabled = ( - ent.application_status()[0] - == status.ApplicationStatus.DISABLED + ent.application_status()[0] == ApplicationStatus.DISABLED ) if is_service_disabled: @@ -544,7 +540,7 @@ return True, None def applicability_status( - self + self, ) -> Tuple[ApplicabilityStatus, Optional[messages.NamedMessage]]: """Check all contract affordances to vet current platform @@ -568,8 +564,11 @@ return ApplicabilityStatus.INAPPLICABLE, error_message affordances = entitlement_cfg["entitlement"].get("affordances", {}) platform = util.get_platform_info() - affordance_arches = affordances.get("architectures", []) - if affordance_arches and platform["arch"] not in affordance_arches: + affordance_arches = affordances.get("architectures", None) + if ( + affordance_arches is not None + and platform["arch"] not in affordance_arches + ): return ( ApplicabilityStatus.INAPPLICABLE, messages.INAPPLICABLE_ARCH.format( @@ -578,8 +577,11 @@ supported_arches=", ".join(affordance_arches), ), ) - affordance_series = affordances.get("series", []) - if affordance_series and platform["series"] not in affordance_series: + affordance_series = affordances.get("series", None) + if ( + affordance_series is not None + and platform["series"] not in affordance_series + ): return ( ApplicabilityStatus.INAPPLICABLE, messages.INAPPLICABLE_SERIES.format( @@ -587,10 +589,10 @@ ), ) kernel = platform["kernel"] - affordance_kernels = affordances.get("kernelFlavors", []) + affordance_kernels = affordances.get("kernelFlavors", None) affordance_min_kernel = affordances.get("minKernelVersion") match = re.match(RE_KERNEL_UNAME, kernel) - if affordance_kernels: + if affordance_kernels is not None: if not match or match.group("flavor") not in affordance_kernels: return ( ApplicabilityStatus.INAPPLICABLE, @@ -661,7 +663,9 @@ for dependent_service in self.dependent_services: try: - ent_cls = entitlement_factory(dependent_service) + ent_cls = entitlement_factory( + cfg=self.cfg, name=dependent_service + ) except exceptions.EntitlementNotFoundError: msg = messages.DEPENDENT_SERVICE_NOT_FOUND.format( service=dependent_service @@ -672,7 +676,7 
@@ ent = ent_cls(cfg=self.cfg, assume_yes=True) is_service_enabled = ( - ent.application_status()[0] == status.ApplicationStatus.ENABLED + ent.application_status()[0] == ApplicationStatus.ENABLED ) if is_service_enabled: @@ -797,12 +801,12 @@ return False return True - def _check_application_status_on_cache(self) -> status.ApplicationStatus: + def _check_application_status_on_cache(self) -> ApplicationStatus: """Check on the state of application on the status cache.""" status_cache = self.cfg.read_cache("status-cache") if status_cache is None: - return status.ApplicationStatus.DISABLED + return ApplicationStatus.DISABLED services_status_list = status_cache.get("services", []) @@ -811,11 +815,11 @@ service_status = service.get("status") if service_status == "enabled": - return status.ApplicationStatus.ENABLED + return ApplicationStatus.ENABLED else: - return status.ApplicationStatus.DISABLED + return ApplicationStatus.DISABLED - return status.ApplicationStatus.DISABLED + return ApplicationStatus.DISABLED def process_contract_deltas( self, @@ -859,7 +863,7 @@ else: application_status, _ = self.application_status() - if application_status != status.ApplicationStatus.DISABLED: + if application_status != ApplicationStatus.DISABLED: if self.can_disable(): self.disable() logging.info( @@ -907,7 +911,7 @@ return False def user_facing_status( - self + self, ) -> Tuple[UserFacingStatus, Optional[messages.NamedMessage]]: """Return (user-facing status, details) for entitlement""" applicability, details = self.applicability_status() @@ -927,15 +931,15 @@ application_status, explanation = self.application_status() user_facing_status = { - status.ApplicationStatus.ENABLED: UserFacingStatus.ACTIVE, - status.ApplicationStatus.DISABLED: UserFacingStatus.INACTIVE, + ApplicationStatus.ENABLED: UserFacingStatus.ACTIVE, + ApplicationStatus.DISABLED: UserFacingStatus.INACTIVE, }[application_status] return user_facing_status, explanation @abc.abstractmethod def application_status( - self - ) -> Tuple[status.ApplicationStatus, Optional[messages.NamedMessage]]: + self, + ) -> Tuple[ApplicationStatus, Optional[messages.NamedMessage]]: """ The current status of application of this entitlement diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/cis.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/cis.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/cis.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/cis.py 2022-05-18 19:44:15.000000000 +0000 @@ -16,7 +16,7 @@ apt_noninteractive = True @property - def messaging(self,) -> MessagingOperationsDict: + def messaging(self) -> MessagingOperationsDict: if self._called_name == "usg": return { "post_enable": [ diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/entitlement_status.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/entitlement_status.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/entitlement_status.py 1970-01-01 00:00:00.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/entitlement_status.py 2022-05-18 19:44:15.000000000 +0000 @@ -0,0 +1,127 @@ +import enum +from typing import Optional + +from uaclient import messages + + +@enum.unique +class ApplicationStatus(enum.Enum): + """ + An enum to represent the current application status of an entitlement + """ + + ENABLED = object() + DISABLED = object() + + +@enum.unique +class ContractStatus(enum.Enum): + """ + An enum to represent whether a user is 
entitled to an entitlement + + (The value of each member is the string that will be used in status + output.) + """ + + ENTITLED = "yes" + UNENTITLED = "no" + + +@enum.unique +class ApplicabilityStatus(enum.Enum): + """ + An enum to represent whether an entitlement could apply to this machine + """ + + APPLICABLE = object() + INAPPLICABLE = object() + + +@enum.unique +class UserFacingAvailability(enum.Enum): + """ + An enum representing whether a service could be available for a machine. + + 'Availability' means whether a service is available to machines with this + architecture, series and kernel. Whether a contract is entitled to use + the specific service is determined by the contract level. + + This enum should only be used in display code, it should not be used in + business logic. + """ + + AVAILABLE = "yes" + UNAVAILABLE = "no" + + +@enum.unique +class UserFacingConfigStatus(enum.Enum): + """ + An enum representing the user-visible config status of UA system. + + This enum will be used in display code and will be written to status.json + """ + + INACTIVE = "inactive" # No UA config commands/daemons + ACTIVE = "active" # UA command is running + REBOOTREQUIRED = "reboot-required" # System Reboot required + + +@enum.unique +class UserFacingStatus(enum.Enum): + """ + An enum representing the states we will display in status output. + + This enum should only be used in display code, it should not be used in + business logic. + """ + + ACTIVE = "enabled" + INACTIVE = "disabled" + INAPPLICABLE = "n/a" + UNAVAILABLE = "—" + + +@enum.unique +class CanEnableFailureReason(enum.Enum): + """ + An enum representing the reasons an entitlement can't be enabled. + """ + + NOT_ENTITLED = object() + ALREADY_ENABLED = object() + INAPPLICABLE = object() + IS_BETA = object() + INCOMPATIBLE_SERVICE = object() + INACTIVE_REQUIRED_SERVICES = object() + + +class CanEnableFailure: + def __init__( + self, + reason: CanEnableFailureReason, + message: Optional[messages.NamedMessage] = None, + ) -> None: + self.reason = reason + self.message = message + + +@enum.unique +class CanDisableFailureReason(enum.Enum): + """ + An enum representing the reasons an entitlement can't be disabled. 
+ """ + + ALREADY_DISABLED = object() + ACTIVE_DEPENDENT_SERVICES = object() + NOT_FOUND_DEPENDENT_SERVICE = object() + + +class CanDisableFailure: + def __init__( + self, + reason: CanDisableFailureReason, + message: Optional[messages.NamedMessage] = None, + ) -> None: + self.reason = reason + self.message = message diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/esm.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/esm.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/esm.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/esm.py 2022-05-18 19:44:15.000000000 +0000 @@ -2,8 +2,8 @@ from uaclient import util from uaclient.entitlements import repo +from uaclient.entitlements.entitlement_status import CanDisableFailure from uaclient.jobs.update_messaging import update_apt_and_motd_messages -from uaclient.status import CanDisableFailure class ESMBaseEntitlement(repo.RepoEntitlement): @@ -35,10 +35,8 @@ @property def repo_pin_priority(self) -> Optional[str]: - """All LTS with the exception of Trusty should pin esm-apps.""" + """All LTS should pin esm-apps.""" series = util.get_platform_info()["series"] - if series == "trusty": - return None if self.valid_service: if util.is_lts(series): @@ -47,10 +45,8 @@ @property def disable_apt_auth_only(self) -> bool: - """All LTSexcept Trusty remove APT auth files upon disable""" + """All LTS remove APT auth files upon disable""" series = util.get_platform_info()["series"] - if series == "trusty": - return False if self.valid_service: return util.is_lts(series) diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/fips.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/fips.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/fips.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/fips.py 2022-05-18 19:44:15.000000000 +0000 @@ -4,10 +4,11 @@ from itertools import groupby from typing import List, Optional, Tuple # noqa: F401 -from uaclient import apt, event_logger, exceptions, messages, status, util +from uaclient import apt, event_logger, exceptions, messages, util from uaclient.clouds.identity import NoCloudTypeReason, get_cloud_type from uaclient.entitlements import repo from uaclient.entitlements.base import IncompatibleService +from uaclient.entitlements.entitlement_status import ApplicationStatus from uaclient.types import ( # noqa: F401 MessagingOperations, MessagingOperationsDict, @@ -205,7 +206,7 @@ ) -> bool: """Return False when FIPS is allowed on this cloud and series. - On Xenial Azure and GCP there will be no cloud-optimized kernel so + On Xenial GCP there will be no cloud-optimized kernel so block default ubuntu-fips enable. This can be overridden in config with features.allow_xenial_fips_on_cloud. @@ -216,9 +217,6 @@ :return: False when this cloud, series or config override allows FIPS. 
""" - if cloud_id not in ("azure", "gce"): - return True - if cloud_id == "gce": if util.is_config_value_true( config=self.cfg.cfg, @@ -232,16 +230,6 @@ return bool("ubuntu-gcp-fips" in super().packages) - # Azure FIPS cloud support - if series == "xenial": - if util.is_config_value_true( - config=self.cfg.cfg, - path_to_value="features.allow_xenial_fips_on_cloud", - ): - return True - else: - return False - return True @property @@ -312,8 +300,8 @@ return self._replace_metapackage_on_cloud_instance(packages) def application_status( - self - ) -> Tuple[status.ApplicationStatus, Optional[messages.NamedMessage]]: + self, + ) -> Tuple[ApplicationStatus, Optional[messages.NamedMessage]]: super_status, super_msg = super().application_status() if util.is_container() and not util.should_reboot(): @@ -326,18 +314,21 @@ self.cfg.remove_notice( "", messages.FIPS_SYSTEM_REBOOT_REQUIRED.msg ) + self.cfg.remove_notice("", messages.FIPS_REBOOT_REQUIRED_MSG) if util.load_file(self.FIPS_PROC_FILE).strip() == "1": self.cfg.remove_notice( - "", status.NOTICE_FIPS_MANUAL_DISABLE_URL + "", messages.NOTICE_FIPS_MANUAL_DISABLE_URL ) return super_status, super_msg else: self.cfg.remove_notice( "", messages.FIPS_DISABLE_REBOOT_REQUIRED ) - self.cfg.add_notice("", status.NOTICE_FIPS_MANUAL_DISABLE_URL) + self.cfg.add_notice( + "", messages.NOTICE_FIPS_MANUAL_DISABLE_URL + ) return ( - status.ApplicationStatus.DISABLED, + ApplicationStatus.DISABLED, messages.FIPS_PROC_FILE_ERROR.format( file_name=self.FIPS_PROC_FILE ), @@ -345,10 +336,10 @@ else: self.cfg.remove_notice("", messages.FIPS_DISABLE_REBOOT_REQUIRED) - if super_status != status.ApplicationStatus.ENABLED: + if super_status != ApplicationStatus.ENABLED: return super_status, super_msg return ( - status.ApplicationStatus.ENABLED, + ApplicationStatus.ENABLED, messages.FIPS_REBOOT_REQUIRED, ) @@ -380,8 +371,9 @@ def _perform_enable(self, silent: bool = False) -> bool: if super()._perform_enable(silent=silent): self.cfg.remove_notice( - "", status.NOTICE_WRONG_FIPS_METAPACKAGE_ON_CLOUD + "", messages.NOTICE_WRONG_FIPS_METAPACKAGE_ON_CLOUD ) + self.cfg.remove_notice("", messages.FIPS_REBOOT_REQUIRED_MSG) return True return False @@ -437,7 +429,7 @@ static_affordances = super().static_affordances fips_update = FIPSUpdatesEntitlement(self.cfg) - enabled_status = status.ApplicationStatus.ENABLED + enabled_status = ApplicationStatus.ENABLED is_fips_update_enabled = bool( fips_update.application_status()[0] == enabled_status ) @@ -467,15 +459,17 @@ ) @property - def messaging(self,) -> MessagingOperationsDict: + def messaging(self) -> MessagingOperationsDict: post_enable = None # type: Optional[MessagingOperations] if util.is_container(): - pre_enable_prompt = status.PROMPT_FIPS_CONTAINER_PRE_ENABLE.format( - title=self.title + pre_enable_prompt = ( + messages.PROMPT_FIPS_CONTAINER_PRE_ENABLE.format( + title=self.title + ) ) post_enable = [messages.FIPS_RUN_APT_UPGRADE] else: - pre_enable_prompt = status.PROMPT_FIPS_PRE_ENABLE + pre_enable_prompt = messages.PROMPT_FIPS_PRE_ENABLE return { "pre_enable": [ @@ -489,7 +483,7 @@ ( util.prompt_for_confirmation, { - "msg": status.PROMPT_FIPS_PRE_DISABLE, + "msg": messages.PROMPT_FIPS_PRE_DISABLE, "assume_yes": self.assume_yes, }, ) @@ -532,15 +526,17 @@ ) @property - def messaging(self,) -> MessagingOperationsDict: + def messaging(self) -> MessagingOperationsDict: post_enable = None # type: Optional[MessagingOperations] if util.is_container(): - pre_enable_prompt = status.PROMPT_FIPS_CONTAINER_PRE_ENABLE.format( - 
title=self.title + pre_enable_prompt = ( + messages.PROMPT_FIPS_CONTAINER_PRE_ENABLE.format( + title=self.title + ) ) post_enable = [messages.FIPS_RUN_APT_UPGRADE] else: - pre_enable_prompt = status.PROMPT_FIPS_UPDATES_PRE_ENABLE + pre_enable_prompt = messages.PROMPT_FIPS_UPDATES_PRE_ENABLE return { "pre_enable": [ @@ -554,7 +550,7 @@ ( util.prompt_for_confirmation, { - "msg": status.PROMPT_FIPS_PRE_DISABLE, + "msg": messages.PROMPT_FIPS_PRE_DISABLE, "assume_yes": self.assume_yes, }, ) diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/__init__.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/__init__.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/__init__.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/__init__.py 2022-05-18 19:44:15.000000000 +0000 @@ -26,11 +26,12 @@ ] # type: List[Type[UAEntitlement]] -def entitlement_factory(name: str): +def entitlement_factory(cfg: UAConfig, name: str): """Returns a UAEntitlement class based on the provided name. The return type is Optional[Type[UAEntitlement]]. It cannot be explicit because of the Python version on Xenial (3.5.2). + :param cfg: UAConfig instance :param name: The name of the entitlement to return :param not_found_okay: If True and no entitlement with the given name is found, then returns None. @@ -38,21 +39,21 @@ entitlement with the given name is found, then raises this error. """ for entitlement in ENTITLEMENT_CLASSES: - if name in entitlement().valid_names: + if name in entitlement(cfg=cfg).valid_names: return entitlement raise EntitlementNotFoundError() def valid_services( - allow_beta: bool = False, all_names: bool = False + cfg: UAConfig, allow_beta: bool = False, all_names: bool = False ) -> List[str]: """Return a list of valid (non-beta) services. 
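As the `__init__.py` hunk here shows, `entitlement_factory` and `valid_services` no longer construct a `UAConfig` internally; callers pass one in. A sketch of the caller-side change, assuming the 27.9 modules from this diff are importable and that a `UAConfig` can be built from the system configuration:

```python
from uaclient.config import UAConfig
from uaclient.entitlements import entitlement_factory, valid_services

cfg = UAConfig()  # the caller now owns the config object

# 27.8:  ent_cls = entitlement_factory("esm-infra")
# 27.9:
ent_cls = entitlement_factory(cfg=cfg, name="esm-infra")
entitlement = ent_cls(cfg)

# valid_services gained the same explicit cfg parameter.
print(valid_services(cfg=cfg, allow_beta=False))
```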
- @param allow_beta: if we should allow beta services to be marked as valid - @param all_names: if we should return all the names for a service instead + :param cfg: UAConfig instance + :param allow_beta: if we should allow beta services to be marked as valid + :param all_names: if we should return all the names for a service instead of just the presentation_name """ - cfg = UAConfig() allow_beta_cfg = is_config_value_true(cfg.cfg, "features.allow_beta") allow_beta |= allow_beta_cfg @@ -67,10 +68,13 @@ if all_names: names = [] for entitlement in entitlements: - names.extend(entitlement().valid_names) + names.extend(entitlement(cfg=cfg).valid_names) return sorted(names) return sorted( - [entitlement().presentation_name for entitlement in entitlements] + [ + entitlement(cfg=cfg).presentation_name + for entitlement in entitlements + ] ) diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/livepatch.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/livepatch.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/livepatch.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/livepatch.py 2022-05-18 19:44:15.000000000 +0000 @@ -2,17 +2,9 @@ import re from typing import Any, Dict, List, Optional, Tuple -from uaclient import ( - apt, - event_logger, - exceptions, - messages, - snap, - status, - util, -) +from uaclient import apt, event_logger, exceptions, messages, snap, util from uaclient.entitlements.base import IncompatibleService, UAEntitlement -from uaclient.status import ApplicationStatus +from uaclient.entitlements.entitlement_status import ApplicationStatus from uaclient.types import StaticAffordance LIVEPATCH_RETRIES = [0.5, 1.0] @@ -239,7 +231,7 @@ ) livepatch_token = self.cfg.machine_token["machineToken"] application_status, _details = self.application_status() - if application_status != status.ApplicationStatus.DISABLED: + if application_status != ApplicationStatus.DISABLED: logging.info( "Disabling %s prior to re-attach with new token", self.title, @@ -277,7 +269,7 @@ return True def application_status( - self + self, ) -> Tuple[ApplicationStatus, Optional[messages.NamedMessage]]: status = (ApplicationStatus.ENABLED, None) @@ -328,7 +320,7 @@ return enable_success application_status, _ = self.application_status() - if application_status == status.ApplicationStatus.DISABLED: + if application_status == ApplicationStatus.DISABLED: return False # only operate on changed directives when ACTIVE delta_directives = delta_entitlement.get("directives", {}) supported_deltas = set(["caCerts", "remoteServer"]) diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/realtime.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/realtime.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/realtime.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/realtime.py 2022-05-18 19:44:15.000000000 +0000 @@ -60,7 +60,9 @@ ) @property - def messaging(self,) -> MessagingOperationsDict: + def messaging( + self, + ) -> MessagingOperationsDict: return { "pre_enable": [ ( diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/repo.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/repo.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/repo.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/repo.py 2022-05-18 19:44:15.000000000 +0000 @@ -5,17 +5,9 @@ import re from 
typing import Any, Dict, List, Optional, Tuple, Union # noqa: F401 -from uaclient import ( - apt, - contract, - event_logger, - exceptions, - messages, - status, - util, -) +from uaclient import apt, contract, event_logger, exceptions, messages, util from uaclient.entitlements import base -from uaclient.status import ApplicationStatus +from uaclient.entitlements.entitlement_status import ApplicationStatus APT_DISABLED_PIN = "-32768" @@ -98,7 +90,7 @@ self.remove_apt_config(silent=silent) def application_status( - self + self, ) -> Tuple[ApplicationStatus, Optional[messages.NamedMessage]]: entitlement_cfg = self.cfg.entitlements.get(self.name, {}) directives = entitlement_cfg.get("entitlement", {}).get( @@ -111,8 +103,8 @@ messages.NO_APT_URL_FOR_SERVICE.format(title=self.title), ) protocol, repo_path = repo_url.split("://") - policy = apt.run_apt_command( - ["apt-cache", "policy"], messages.APT_POLICY_FAILED.msg + policy = apt.run_apt_cache_policy_command( + error_msg=messages.APT_POLICY_FAILED.msg ) match = re.search( r"(?P(-)?\d+) {}/ubuntu".format(repo_url), policy @@ -185,7 +177,7 @@ else: application_status, _ = self.application_status() - if application_status == status.ApplicationStatus.DISABLED: + if application_status == ApplicationStatus.DISABLED: return False if not self._check_apt_url_is_applied(delta_apt_url): @@ -272,15 +264,37 @@ :raise UserFacingError: on failure to setup any aspect of this apt configuration """ - http_proxy = util.validate_proxy( - "http", self.cfg.apt_http_proxy, util.PROXY_VALIDATION_APT_HTTP_URL - ) - https_proxy = util.validate_proxy( - "https", - self.cfg.apt_https_proxy, - util.PROXY_VALIDATION_APT_HTTPS_URL, + http_proxy = None # type: Optional[str] + https_proxy = None # type: Optional[str] + scope = None # type: Optional[apt.AptProxyScope] + if self.cfg.global_apt_http_proxy or self.cfg.global_apt_https_proxy: + http_proxy = util.validate_proxy( + "http", + self.cfg.global_apt_http_proxy, + util.PROXY_VALIDATION_APT_HTTP_URL, + ) + https_proxy = util.validate_proxy( + "https", + self.cfg.global_apt_https_proxy, + util.PROXY_VALIDATION_APT_HTTPS_URL, + ) + scope = apt.AptProxyScope.GLOBAL + elif self.cfg.ua_apt_http_proxy or self.cfg.ua_apt_https_proxy: + http_proxy = util.validate_proxy( + "http", + self.cfg.ua_apt_http_proxy, + util.PROXY_VALIDATION_APT_HTTP_URL, + ) + https_proxy = util.validate_proxy( + "https", + self.cfg.ua_apt_https_proxy, + util.PROXY_VALIDATION_APT_HTTPS_URL, + ) + scope = apt.AptProxyScope.UACLIENT + + apt.setup_apt_proxy( + http_proxy=http_proxy, https_proxy=https_proxy, proxy_scope=scope ) - apt.setup_apt_proxy(http_proxy=http_proxy, https_proxy=https_proxy) repo_filename = self.repo_list_file_tmpl.format(name=self.name) resource_cfg = self.cfg.entitlements.get(self.name) directives = resource_cfg["entitlement"].get("directives", {}) diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/tests/conftest.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/tests/conftest.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/tests/conftest.py 2022-04-01 13:27:49.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/tests/conftest.py 2022-05-18 19:44:15.000000000 +0000 @@ -52,7 +52,7 @@ additional_packages: List[str] = None ) -> Dict[str, Any]: if affordances is None: - affordances = {"series": []} # Will match all series + affordances = {} if suites is None: suites = ["xenial"] if obligations is None: diff -Nru 
ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/tests/test_base.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/tests/test_base.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/tests/test_base.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/tests/test_base.py 2022-05-18 19:44:15.000000000 +0000 @@ -5,8 +5,17 @@ import mock import pytest -from uaclient import config, messages, status, util +from uaclient import config, messages, util from uaclient.entitlements import EntitlementNotFoundError, base +from uaclient.entitlements.entitlement_status import ( + ApplicabilityStatus, + ApplicationStatus, + CanDisableFailure, + CanDisableFailureReason, + CanEnableFailure, + CanEnableFailureReason, + UserFacingStatus, +) from uaclient.status import ContractStatus @@ -38,7 +47,7 @@ def _perform_disable(self, **kwargs): self._application_status = ( - status.ApplicationStatus.DISABLED, + ApplicationStatus.DISABLED, "disable() called", ) return self._disable @@ -64,8 +73,8 @@ def factory( *, entitled: bool, - applicability_status: Tuple[status.ApplicabilityStatus, str] = None, - application_status: Tuple[status.ApplicationStatus, str] = None, + applicability_status: Tuple[ApplicabilityStatus, str] = None, + application_status: Tuple[ApplicationStatus, str] = None, feature_overrides: Optional[Dict[str, str]] = None, allow_beta: bool = False, enable: bool = False, @@ -133,7 +142,7 @@ """When status is INACTIVE, can_disable returns False.""" entitlement = concrete_entitlement_factory( entitled=True, - application_status=(status.ApplicationStatus.DISABLED, ""), + application_status=(ApplicationStatus.DISABLED, ""), ) ret, fail = entitlement.can_disable() @@ -152,24 +161,21 @@ """When status is INACTIVE, can_disable returns False.""" entitlement = concrete_entitlement_factory( entitled=True, - application_status=(status.ApplicationStatus.ENABLED, ""), + application_status=(ApplicationStatus.ENABLED, ""), dependent_services=("test",), ) m_ent_cls = mock.Mock() m_ent_obj = m_ent_cls.return_value m_ent_obj.application_status.return_value = ( - status.ApplicationStatus.ENABLED, + ApplicationStatus.ENABLED, None, ) m_ent_factory.return_value = m_ent_cls ret, fail = entitlement.can_disable() assert not ret - assert ( - fail.reason - == status.CanDisableFailureReason.ACTIVE_DEPENDENT_SERVICES - ) + assert fail.reason == CanDisableFailureReason.ACTIVE_DEPENDENT_SERVICES assert fail.message is None @mock.patch("uaclient.entitlements.entitlement_factory") @@ -179,14 +185,14 @@ """When status is INACTIVE, can_disable returns False.""" entitlement = concrete_entitlement_factory( entitled=True, - application_status=(status.ApplicationStatus.ENABLED, ""), + application_status=(ApplicationStatus.ENABLED, ""), dependent_services=("test",), ) m_ent_cls = mock.Mock() m_ent_obj = m_ent_cls.return_value m_ent_obj.application_status.return_value = ( - status.ApplicationStatus.ENABLED, + ApplicationStatus.ENABLED, None, ) m_ent_factory.return_value = m_ent_cls @@ -201,7 +207,7 @@ """When entitlement is ENABLED, can_disable returns True.""" entitlement = concrete_entitlement_factory( entitled=True, - application_status=(status.ApplicationStatus.ENABLED, ""), + application_status=(ApplicationStatus.ENABLED, ""), ) assert entitlement.can_disable() @@ -217,7 +223,7 @@ can_enable, reason = entitlement.can_enable() assert not can_enable - assert reason.reason == status.CanEnableFailureReason.NOT_ENTITLED + assert reason.reason == 
CanEnableFailureReason.NOT_ENTITLED assert ( reason.message.msg == messages.UNENTITLED.format( @@ -252,14 +258,14 @@ self, concrete_entitlement_factory ): """When entitlement is ENABLED, can_enable returns False.""" - application_status = status.ApplicationStatus.ENABLED + application_status = ApplicationStatus.ENABLED entitlement = concrete_entitlement_factory( entitled=True, application_status=(application_status, "") ) can_enable, reason = entitlement.can_enable() assert not can_enable - assert reason.reason == status.CanEnableFailureReason.ALREADY_ENABLED + assert reason.reason == CanEnableFailureReason.ALREADY_ENABLED assert ( reason.message.msg == messages.ALREADY_ENABLED.format( @@ -274,15 +280,15 @@ entitlement = concrete_entitlement_factory( entitled=True, applicability_status=( - status.ApplicabilityStatus.INAPPLICABLE, + ApplicabilityStatus.INAPPLICABLE, "msg", ), - application_status=(status.ApplicationStatus.DISABLED, ""), + application_status=(ApplicationStatus.DISABLED, ""), ) can_enable, reason = entitlement.can_enable() assert not can_enable - assert reason.reason == status.CanEnableFailureReason.INAPPLICABLE + assert reason.reason == CanEnableFailureReason.INAPPLICABLE assert reason.message == "msg" def test_can_enable_true_on_entitlement_inactive( @@ -291,8 +297,8 @@ """When an entitlement is applicable and disabled, we can_enable""" entitlement = concrete_entitlement_factory( entitled=True, - applicability_status=(status.ApplicabilityStatus.APPLICABLE, ""), - application_status=(status.ApplicationStatus.DISABLED, ""), + applicability_status=(ApplicabilityStatus.APPLICABLE, ""), + application_status=(ApplicationStatus.DISABLED, ""), ) can_enable, reason = entitlement.can_enable() @@ -309,8 +315,8 @@ feature_overrides = {"allow_beta": allow_beta_cfg} entitlement = concrete_entitlement_factory( entitled=True, - applicability_status=(status.ApplicabilityStatus.APPLICABLE, ""), - application_status=(status.ApplicationStatus.DISABLED, ""), + applicability_status=(ApplicabilityStatus.APPLICABLE, ""), + application_status=(ApplicationStatus.DISABLED, ""), feature_overrides=feature_overrides, allow_beta=allow_beta, ) @@ -322,7 +328,7 @@ assert reason is None else: assert not can_enable - assert reason.reason == status.CanEnableFailureReason.IS_BETA + assert reason.reason == CanEnableFailureReason.IS_BETA assert reason.message is None def test_contract_status_entitled(self, concrete_entitlement_factory): @@ -345,8 +351,8 @@ """When orig_acccess dict is empty perform no work.""" entitlement = concrete_entitlement_factory( entitled=True, - applicability_status=(status.ApplicabilityStatus.APPLICABLE, ""), - application_status=(status.ApplicationStatus.DISABLED, ""), + applicability_status=(ApplicabilityStatus.APPLICABLE, ""), + application_status=(ApplicationStatus.DISABLED, ""), ) with mock.patch.object(entitlement, "can_disable") as m_can_disable: entitlement.process_contract_deltas(orig_access, delta) @@ -357,14 +363,14 @@ ): base_ent = concrete_entitlement_factory( entitled=True, - applicability_status=(status.ApplicabilityStatus.APPLICABLE, ""), - application_status=(status.ApplicationStatus.DISABLED, ""), + applicability_status=(ApplicabilityStatus.APPLICABLE, ""), + application_status=(ApplicationStatus.DISABLED, ""), ) m_entitlement_cls = mock.MagicMock() m_entitlement_obj = m_entitlement_cls.return_value m_entitlement_obj.application_status.return_value = [ - status.ApplicationStatus.ENABLED, + ApplicationStatus.ENABLED, "", ] base_ent._incompatible_services = ( @@ -376,9 
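The `setup_apt_config` hunk in `repo.py` earlier in this diff replaces the single pair of APT proxy settings with scoped ones: the global `global_apt_http_proxy`/`global_apt_https_proxy` values win, otherwise the client-specific `ua_apt_http_proxy`/`ua_apt_https_proxy` values are used, and the chosen `apt.AptProxyScope` is handed to `apt.setup_apt_proxy`. A condensed sketch of that selection order (the standalone function is illustrative; attribute, constant and enum names are taken from the hunk):

```python
from typing import Optional, Tuple

from uaclient import apt, util


def select_apt_proxy(
    cfg,
) -> Tuple[Optional[str], Optional[str], Optional[apt.AptProxyScope]]:
    """Prefer global proxies, then uaclient-scoped ones, else none."""
    if cfg.global_apt_http_proxy or cfg.global_apt_https_proxy:
        http = util.validate_proxy(
            "http", cfg.global_apt_http_proxy, util.PROXY_VALIDATION_APT_HTTP_URL
        )
        https = util.validate_proxy(
            "https", cfg.global_apt_https_proxy, util.PROXY_VALIDATION_APT_HTTPS_URL
        )
        return http, https, apt.AptProxyScope.GLOBAL
    if cfg.ua_apt_http_proxy or cfg.ua_apt_https_proxy:
        http = util.validate_proxy(
            "http", cfg.ua_apt_http_proxy, util.PROXY_VALIDATION_APT_HTTP_URL
        )
        https = util.validate_proxy(
            "https", cfg.ua_apt_https_proxy, util.PROXY_VALIDATION_APT_HTTPS_URL
        )
        return http, https, apt.AptProxyScope.UACLIENT
    return None, None, None


# Used roughly as the hunk does:
#   http, https, scope = select_apt_proxy(self.cfg)
#   apt.setup_apt_proxy(http_proxy=http, https_proxy=https, proxy_scope=scope)
```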
+382,7 @@ ret, reason = base_ent.can_enable() assert ret is False - assert ( - reason.reason == status.CanEnableFailureReason.INCOMPATIBLE_SERVICE - ) + assert reason.reason == CanEnableFailureReason.INCOMPATIBLE_SERVICE assert reason.message is None def test_can_enable_when_required_service_found( @@ -386,15 +390,15 @@ ): base_ent = concrete_entitlement_factory( entitled=True, - applicability_status=(status.ApplicabilityStatus.APPLICABLE, ""), - application_status=(status.ApplicationStatus.DISABLED, ""), + applicability_status=(ApplicabilityStatus.APPLICABLE, ""), + application_status=(ApplicationStatus.DISABLED, ""), ) base_ent._required_services = ["test"] m_entitlement_cls = mock.MagicMock() m_entitlement_obj = m_entitlement_cls.return_value m_entitlement_obj.application_status.return_value = [ - status.ApplicationStatus.DISABLED, + ApplicationStatus.DISABLED, "", ] type(m_entitlement_obj).title = mock.PropertyMock(return_value="test") @@ -407,8 +411,7 @@ assert ret is False assert ( - reason.reason - == status.CanEnableFailureReason.INACTIVE_REQUIRED_SERVICES + reason.reason == CanEnableFailureReason.INACTIVE_REQUIRED_SERVICES ) assert reason.message is None @@ -431,14 +434,14 @@ base_ent = concrete_entitlement_factory( entitled=True, enable=True, - applicability_status=(status.ApplicabilityStatus.APPLICABLE, ""), - application_status=(status.ApplicationStatus.DISABLED, ""), + applicability_status=(ApplicabilityStatus.APPLICABLE, ""), + application_status=(ApplicationStatus.DISABLED, ""), ) m_entitlement_cls = mock.MagicMock() m_entitlement_obj = m_entitlement_cls.return_value m_entitlement_obj.application_status.return_value = [ - status.ApplicationStatus.ENABLED, + ApplicationStatus.ENABLED, "", ] base_ent._incompatible_services = ( @@ -454,7 +457,7 @@ expected_prompt_call = 0 expected_ret = False - expected_reason = status.CanEnableFailureReason.INCOMPATIBLE_SERVICE + expected_reason = CanEnableFailureReason.INCOMPATIBLE_SERVICE if assume_yes and not block_disable_on_enable: expected_ret = True expected_reason = None @@ -478,15 +481,15 @@ base_ent = concrete_entitlement_factory( entitled=True, enable=True, - applicability_status=(status.ApplicabilityStatus.APPLICABLE, ""), - application_status=(status.ApplicationStatus.DISABLED, ""), + applicability_status=(ApplicabilityStatus.APPLICABLE, ""), + application_status=(ApplicationStatus.DISABLED, ""), ) base_ent._required_services = ("test",) m_entitlement_cls = mock.MagicMock() m_entitlement_obj = m_entitlement_cls.return_value m_entitlement_obj.application_status.return_value = [ - status.ApplicationStatus.DISABLED, + ApplicationStatus.DISABLED, "", ] m_entitlement_obj.enable.return_value = (True, "") @@ -501,9 +504,7 @@ expected_prompt_call = 1 expected_ret = False - expected_reason = ( - status.CanEnableFailureReason.INACTIVE_REQUIRED_SERVICES - ) + expected_reason = CanEnableFailureReason.INACTIVE_REQUIRED_SERVICES if assume_yes: expected_ret = True expected_reason = None @@ -521,42 +522,38 @@ "can_enable_fail,handle_incompat_calls,enable_req_calls", [ ( - status.CanEnableFailure( - status.CanEnableFailureReason.NOT_ENTITLED, message="msg" + CanEnableFailure( + CanEnableFailureReason.NOT_ENTITLED, message="msg" ), 0, 0, ), ( - status.CanEnableFailure( - status.CanEnableFailureReason.ALREADY_ENABLED, + CanEnableFailure( + CanEnableFailureReason.ALREADY_ENABLED, message="msg", ), 0, 0, ), ( - status.CanEnableFailure(status.CanEnableFailureReason.IS_BETA), + CanEnableFailure(CanEnableFailureReason.IS_BETA), 0, 0, ), ( - 
status.CanEnableFailure( - status.CanEnableFailureReason.INAPPLICABLE, "msg" - ), + CanEnableFailure(CanEnableFailureReason.INAPPLICABLE, "msg"), 0, 0, ), ( - status.CanEnableFailure( - status.CanEnableFailureReason.INCOMPATIBLE_SERVICE - ), + CanEnableFailure(CanEnableFailureReason.INCOMPATIBLE_SERVICE), 1, 0, ), ( - status.CanEnableFailure( - status.CanEnableFailureReason.INACTIVE_REQUIRED_SERVICES + CanEnableFailure( + CanEnableFailureReason.INACTIVE_REQUIRED_SERVICES ), 0, 1, @@ -612,8 +609,8 @@ ): m_handle_msg.return_value = True - fail_reason = status.CanEnableFailure( - status.CanEnableFailureReason.INACTIVE_REQUIRED_SERVICES + fail_reason = CanEnableFailure( + CanEnableFailureReason.INACTIVE_REQUIRED_SERVICES ) if enable_fail_message: @@ -621,15 +618,15 @@ else: msg = None - enable_fail_reason = status.CanEnableFailure( - status.CanEnableFailureReason.NOT_ENTITLED, message=msg + enable_fail_reason = CanEnableFailure( + CanEnableFailureReason.NOT_ENTITLED, message=msg ) m_ent_cls = mock.Mock() m_ent_obj = m_ent_cls.return_value m_ent_obj.enable.return_value = (False, enable_fail_reason) m_ent_obj.application_status.return_value = ( - status.ApplicationStatus.DISABLED, + ApplicationStatus.DISABLED, None, ) type(m_ent_obj).title = mock.PropertyMock(return_value="Test") @@ -639,7 +636,7 @@ entitlement = concrete_entitlement_factory( entitled=True, - application_status=(status.ApplicationStatus.DISABLED, ""), + application_status=(ApplicationStatus.DISABLED, ""), ) entitlement._required_services = "test" @@ -667,15 +664,15 @@ {"entitlement": {"entitled": False}}, { "entitlement": { - "entitled": False, # overridden True by series trusty - "series": {"trusty": {"entitled": True}}, + "entitled": False, # overridden by series 'example' + "series": {"example": {"entitled": True}}, } }, ), ), ) @mock.patch( - "uaclient.util.get_platform_info", return_value={"series": "trusty"} + "uaclient.util.get_platform_info", return_value={"series": "example"} ) def test_process_contract_deltas_does_nothing_when_delta_remains_entitled( self, m_platform_info, concrete_entitlement_factory, orig_access, delta @@ -683,12 +680,12 @@ """If deltas do not represent transition to unentitled, do nothing.""" entitlement = concrete_entitlement_factory( entitled=True, - applicability_status=(status.ApplicabilityStatus.APPLICABLE, ""), - application_status=(status.ApplicationStatus.DISABLED, ""), + applicability_status=(ApplicabilityStatus.APPLICABLE, ""), + application_status=(ApplicationStatus.DISABLED, ""), ) entitlement.process_contract_deltas(orig_access, delta) assert ( - status.ApplicationStatus.DISABLED, + ApplicationStatus.DISABLED, mock.ANY, ) == entitlement.application_status() @@ -713,7 +710,7 @@ """Only clear cache when deltas transition inactive to unentitled.""" entitlement = concrete_entitlement_factory( entitled=True, - application_status=(status.ApplicationStatus.DISABLED, ""), + application_status=(ApplicationStatus.DISABLED, ""), ) entitlement.process_contract_deltas(orig_access, delta) # If an entitlement is disabled, we don't need to tell the user @@ -745,11 +742,11 @@ """Disable when deltas transition from active to unentitled.""" entitlement = concrete_entitlement_factory( entitled=True, - application_status=(status.ApplicationStatus.ENABLED, ""), + application_status=(ApplicationStatus.ENABLED, ""), ) entitlement.process_contract_deltas(orig_access, delta) assert ( - status.ApplicationStatus.DISABLED, + ApplicationStatus.DISABLED, mock.ANY, ) == entitlement.application_status() @@ -779,8 
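The `test_base.py` hunks above and below construct `CanEnableFailure`/`CanDisableFailure` directly from the new `entitlement_status` module, but the contract they exercise is unchanged: `can_enable()`/`enable()` (and the disable counterparts) return a `(bool, failure-or-None)` pair and callers branch on `failure.reason`. A small illustrative consumer (the function is made up; the attribute names follow the diff):

```python
from uaclient.entitlements.entitlement_status import (
    CanEnableFailure,
    CanEnableFailureReason,
)


def explain_enable_failure(fail: CanEnableFailure) -> str:
    # fail.message is an Optional[messages.NamedMessage]; some reasons,
    # e.g. IS_BETA in the tests above, carry no message at all.
    if fail.reason == CanEnableFailureReason.INACTIVE_REQUIRED_SERVICES:
        return "a required service is not enabled yet"
    if fail.message is not None:
        return fail.message.msg
    return fail.reason.name


# ret, fail = entitlement.can_enable()
# if not ret:
#     print(explain_enable_failure(fail))
```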
+776,8 @@ """Disable when deltas transition from active to unentitled.""" entitlement = concrete_entitlement_factory( entitled=True, - applicability_status=(status.ApplicabilityStatus.APPLICABLE, ""), - application_status=(status.ApplicationStatus.DISABLED, ""), + applicability_status=(ApplicabilityStatus.APPLICABLE, ""), + application_status=(ApplicationStatus.DISABLED, ""), ) entitlement.is_beta = True assert not entitlement.allow_beta @@ -800,14 +797,14 @@ base_ent = concrete_entitlement_factory( entitled=True, disable=True, - application_status=(status.ApplicationStatus.ENABLED, ""), + application_status=(ApplicationStatus.ENABLED, ""), dependent_services=("test",), ) m_entitlement_cls = mock.MagicMock() m_entitlement_obj = m_entitlement_cls.return_value m_entitlement_obj.application_status.return_value = [ - status.ApplicationStatus.ENABLED, + ApplicationStatus.ENABLED, "", ] m_entitlement_obj.disable.return_value = (True, None) @@ -842,23 +839,23 @@ ): m_handle_msg.return_value = True - fail_reason = status.CanDisableFailure( - status.CanDisableFailureReason.ACTIVE_DEPENDENT_SERVICES + fail_reason = CanDisableFailure( + CanDisableFailureReason.ACTIVE_DEPENDENT_SERVICES ) if disable_fail_message: msg = messages.NamedMessage("test-code", disable_fail_message) else: msg = None - disable_fail_reason = status.CanDisableFailure( - status.CanDisableFailureReason.ALREADY_DISABLED, message=msg + disable_fail_reason = CanDisableFailure( + CanDisableFailureReason.ALREADY_DISABLED, message=msg ) m_ent_cls = mock.Mock() m_ent_obj = m_ent_cls.return_value m_ent_obj.disable.return_value = (False, disable_fail_reason) m_ent_obj.application_status.return_value = ( - status.ApplicationStatus.ENABLED, + ApplicationStatus.ENABLED, None, ) type(m_ent_obj).title = mock.PropertyMock(return_value="Test") @@ -868,7 +865,7 @@ entitlement = concrete_entitlement_factory( entitled=True, - application_status=(status.ApplicationStatus.DISABLED, ""), + application_status=(ApplicationStatus.DISABLED, ""), dependent_services=("test"), ) @@ -891,15 +888,15 @@ ): m_handle_msg.return_value = True - fail_reason = status.CanDisableFailure( - status.CanDisableFailureReason.ACTIVE_DEPENDENT_SERVICES + fail_reason = CanDisableFailure( + CanDisableFailureReason.ACTIVE_DEPENDENT_SERVICES ) m_ent_factory.side_effect = EntitlementNotFoundError() entitlement = concrete_entitlement_factory( entitled=True, - application_status=(status.ApplicationStatus.DISABLED, ""), + application_status=(ApplicationStatus.DISABLED, ""), dependent_services=("test"), ) @@ -955,13 +952,13 @@ entitlement = concrete_entitlement_factory( entitled=True, applicability_status=( - status.ApplicabilityStatus.INAPPLICABLE, + ApplicabilityStatus.INAPPLICABLE, msg, ), ) user_facing_status, details = entitlement.user_facing_status() - assert status.UserFacingStatus.INAPPLICABLE == user_facing_status + assert UserFacingStatus.INAPPLICABLE == user_facing_status assert msg == details def test_unavailable_when_applicable_but_not_entitled( @@ -970,11 +967,11 @@ entitlement = concrete_entitlement_factory( entitled=False, - applicability_status=(status.ApplicabilityStatus.APPLICABLE, ""), + applicability_status=(ApplicabilityStatus.APPLICABLE, ""), ) user_facing_status, details = entitlement.user_facing_status() - assert status.UserFacingStatus.UNAVAILABLE == user_facing_status + assert UserFacingStatus.UNAVAILABLE == user_facing_status expected_details = "{} is not entitled".format(entitlement.title) assert expected_details == details.msg @@ -984,22 +981,22 @@ 
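The `repo.py` hunk earlier swaps the raw `apt.run_apt_command(["apt-cache", "policy"], ...)` call for a dedicated `apt.run_apt_cache_policy_command(error_msg=...)` helper, and the CC, CIS and FIPS test hunks that follow mock that helper instead of inspecting `util.subp` calls. A sketch of how the repository status check consumes its output, assuming only the keyword signature visible in the hunks (the real `application_status` goes on to inspect the matched pin value; the helper below is illustrative):

```python
import re

from uaclient import apt, messages


def repo_appears_in_policy(repo_url: str) -> bool:
    # run_apt_cache_policy_command returns the `apt-cache policy` stdout
    # (the tests simply set its return_value to a string).
    policy = apt.run_apt_cache_policy_command(
        error_msg=messages.APT_POLICY_FAILED.msg
    )
    return bool(re.search(r"(-)?\d+ {}/ubuntu".format(repo_url), policy))
```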
entitlement = concrete_entitlement_factory( entitled=False, - applicability_status=(status.ApplicabilityStatus.APPLICABLE, ""), + applicability_status=(ApplicabilityStatus.APPLICABLE, ""), ) entitlement.cfg._entitlements = {} user_facing_status, details = entitlement.user_facing_status() - assert status.UserFacingStatus.UNAVAILABLE == user_facing_status + assert UserFacingStatus.UNAVAILABLE == user_facing_status expected_details = "{} is not entitled".format(entitlement.title) assert expected_details == details.msg @pytest.mark.parametrize( "application_status,expected_uf_status", ( - (status.ApplicationStatus.ENABLED, status.UserFacingStatus.ACTIVE), + (ApplicationStatus.ENABLED, UserFacingStatus.ACTIVE), ( - status.ApplicationStatus.DISABLED, - status.UserFacingStatus.INACTIVE, + ApplicationStatus.DISABLED, + UserFacingStatus.INACTIVE, ), ), ) @@ -1012,7 +1009,7 @@ msg = "application status details" entitlement = concrete_entitlement_factory( entitled=True, - applicability_status=(status.ApplicabilityStatus.APPLICABLE, ""), + applicability_status=(ApplicabilityStatus.APPLICABLE, ""), application_status=(application_status, msg), ) diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/tests/test_cc.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/tests/test_cc.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/tests/test_cc.py 2022-04-01 13:27:49.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/tests/test_cc.py 2022-05-18 19:44:15.000000000 +0000 @@ -8,7 +8,7 @@ import mock import pytest -from uaclient import apt, config, status +from uaclient import apt, config, messages, status from uaclient.entitlements.cc import CC_README, CommonCriteriaEntitlement from uaclient.entitlements.tests.conftest import machine_token @@ -54,10 +54,10 @@ ), ( "s390x", - "trusty", - "14.04 LTS (Trusty Tahr)", - "CC EAL2 is not available for Ubuntu 14.04 LTS" - " (Trusty Tahr).", + "bionic", + "18.04 LTS (Bionic Beaver)", + "CC EAL2 is not available for Ubuntu 18.04 LTS" + " (Bionic Beaver).", ), ), ) @@ -109,12 +109,14 @@ @mock.patch("uaclient.apt.setup_apt_proxy") @mock.patch("uaclient.util.should_reboot") @mock.patch("uaclient.util.subp") + @mock.patch("uaclient.apt.run_apt_cache_policy_command") @mock.patch("uaclient.util.get_platform_info") @mock.patch("uaclient.util.apply_contract_overrides") def test_enable_configures_apt_sources_and_auth_files( self, _m_contract_overrides, m_platform_info, + m_apt_cache_policy, m_subp, m_should_reboot, m_setup_apt_proxy, @@ -125,6 +127,7 @@ ): """When entitled, configure apt repo auth token, pinning and url.""" m_subp.return_value = ("fakeout", "") + m_apt_cache_policy.return_value = "fakeout" m_should_reboot.return_value = False original_exists = os.path.exists @@ -166,12 +169,9 @@ ) ] - subp_apt_cmds = [ + apt_cache_policy_cmds = [ mock.call( - ["apt-cache", "policy"], - capture=True, - retry_sleeps=apt.APT_RETRIES, - env={}, + error_msg=messages.APT_POLICY_FAILED.msg, ) ] @@ -181,6 +181,7 @@ if ca_certificates: prerequisite_pkgs.append("ca-certificates") + subp_apt_cmds = [] if prerequisite_pkgs: expected_stdout = "Installing prerequisites: {}\n".format( ", ".join(prerequisite_pkgs) @@ -226,6 +227,8 @@ assert [] == m_add_pin.call_args_list assert 1 == m_setup_apt_proxy.call_count assert 1 == m_should_reboot.call_count + assert 1 == m_apt_cache_policy.call_count + assert apt_cache_policy_cmds == m_apt_cache_policy.call_args_list assert subp_apt_cmds == m_subp.call_args_list expected_stdout += 
"\n".join( [ diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/tests/test_cis.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/tests/test_cis.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/tests/test_cis.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/tests/test_cis.py 2022-05-18 19:44:15.000000000 +0000 @@ -3,8 +3,9 @@ import mock import pytest -from uaclient import apt, status +from uaclient import apt, messages from uaclient.entitlements.cis import CIS_DOCS_URL, CISEntitlement +from uaclient.entitlements.entitlement_status import ApplicationStatus M_REPOPATH = "uaclient.entitlements.repo." @@ -28,13 +29,14 @@ with mock.patch.object( entitlement, "application_status", - return_value=(status.ApplicationStatus.DISABLED, ""), + return_value=(ApplicationStatus.DISABLED, ""), ): assert entitlement.can_enable() assert ("", "") == capsys.readouterr() class TestCISEntitlementEnable: + @mock.patch("uaclient.apt.run_apt_cache_policy_command") @mock.patch("uaclient.apt.setup_apt_proxy") @mock.patch("uaclient.util.should_reboot") @mock.patch("uaclient.util.subp") @@ -45,6 +47,7 @@ m_subp, m_should_reboot, m_setup_apt_proxy, + m_apt_policy, capsys, entitlement, ): @@ -58,6 +61,7 @@ m_platform_info.side_effect = fake_platform m_subp.return_value = ("fakeout", "") + m_apt_policy.return_value = "fakeout" m_should_reboot.return_value = False with mock.patch( @@ -77,13 +81,13 @@ ) ] - subp_apt_cmds = [ + m_apt_policy_cmds = [ mock.call( - ["apt-cache", "policy"], - capture=True, - retry_sleeps=apt.APT_RETRIES, - env={}, + error_msg=messages.APT_POLICY_FAILED.msg, ), + ] + + subp_apt_cmds = [ mock.call( ["apt-get", "update"], capture=True, @@ -111,6 +115,8 @@ assert [] == m_add_pin.call_args_list assert 1 == m_setup_apt_proxy.call_count assert subp_apt_cmds == m_subp.call_args_list + assert 1 == m_apt_policy.call_count + assert m_apt_policy_cmds == m_apt_policy.call_args_list assert 1 == m_should_reboot.call_count expected_stdout = ( "Updating package lists\n" diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/tests/test_entitlements.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/tests/test_entitlements.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/tests/test_entitlements.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/tests/test_entitlements.py 2022-05-18 19:44:15.000000000 +0000 @@ -11,7 +11,12 @@ @pytest.mark.parametrize("is_beta", ((True), (False))) @mock.patch("uaclient.entitlements.is_config_value_true") def test_valid_services( - self, m_is_config_value, show_all_names, allow_beta, is_beta + self, + m_is_config_value, + show_all_names, + allow_beta, + is_beta, + FakeConfig, ): m_is_config_value.return_value = allow_beta @@ -39,12 +44,12 @@ expected_services.append("othername") assert expected_services == entitlements.valid_services( - all_names=show_all_names + cfg=FakeConfig(), all_names=show_all_names ) class TestEntitlementFactory: - def test_entitlement_factory(self): + def test_entitlement_factory(self, FakeConfig): m_cls_1 = mock.MagicMock() m_cls_1.return_value.valid_names = ["ent1", "othername"] @@ -52,9 +57,14 @@ m_cls_2.return_value.valid_names = ["ent2"] ents = {m_cls_1, m_cls_2} + cfg = FakeConfig() with mock.patch.object(entitlements, "ENTITLEMENT_CLASSES", ents): - assert m_cls_1 == entitlements.entitlement_factory("othername") - assert m_cls_2 == 
entitlements.entitlement_factory("ent2") + assert m_cls_1 == entitlements.entitlement_factory( + cfg=cfg, name="othername" + ) + assert m_cls_2 == entitlements.entitlement_factory( + cfg=cfg, name="ent2" + ) with pytest.raises(exceptions.EntitlementNotFoundError): - entitlements.entitlement_factory("nonexistent") + entitlements.entitlement_factory(cfg=cfg, name="nonexistent") diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/tests/test_esm.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/tests/test_esm.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/tests/test_esm.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/tests/test_esm.py 2022-05-18 19:44:15.000000000 +0000 @@ -14,14 +14,13 @@ @pytest.fixture(params=[ESMAppsEntitlement, ESMInfraEntitlement]) def entitlement(request, entitlement_factory): - return entitlement_factory(request.param, suites=["trusty"]) + return entitlement_factory(request.param, suites=["xenial"]) class TestESMRepoPinPriority: @pytest.mark.parametrize( "series, is_active_esm, repo_pin_priority", ( - ("trusty", True, "never"), ("xenial", True, "never"), ("bionic", False, None), ("focal", False, None), @@ -52,32 +51,28 @@ assert [mock.call(series)] == m_is_active_esm.call_args_list @pytest.mark.parametrize( - "series, is_lts, is_beta, cfg_allow_beta, repo_pin_priority", + "series, is_beta, repo_pin_priority", ( - # When esm non-beta pin it on non-trusty - ("trusty", True, False, None, None), - ("xenial", True, False, None, "never"), - ("bionic", True, False, None, "never"), - ("focal", True, False, None, "never"), + # When esm non-beta pin it + ("xenial", False, "never"), + ("bionic", False, "never"), + ("focal", False, "never"), # when ESM beta don't pin - ("trusty", True, True, None, None), - ("xenial", True, True, None, None), - ("bionic", True, True, None, None), - ("focal", True, True, None, None), + ("xenial", True, None), + ("bionic", True, None), + ("focal", True, None), ), ) @mock.patch("uaclient.util.is_lts") @mock.patch("uaclient.entitlements.esm.util.get_platform_info") @mock.patch("uaclient.entitlements.UAConfig") - def test_esm_apps_repo_pin_priority_never_on_on_lts( + def test_esm_apps_repo_pin_priority_never_on_lts( self, m_cfg, m_get_platform_info, m_is_lts, series, - is_lts, is_beta, - cfg_allow_beta, repo_pin_priority, FakeConfig, ): @@ -89,11 +84,9 @@ when the release is an Ubuntu LTS release. We won't want/need to advertize ESM Apps packages on non-LTS releases or if ESM Apps is beta. 
""" - m_is_lts.return_value = is_lts + m_is_lts.return_value = True m_get_platform_info.return_value = {"series": series} cfg = FakeConfig.for_attached_machine() - if cfg_allow_beta: - cfg.override_features({"allow_beta": cfg_allow_beta}) m_cfg.return_value = cfg inst = ESMAppsEntitlement(cfg) @@ -101,9 +94,9 @@ assert repo_pin_priority == inst.repo_pin_priority is_lts_calls = [] - if series != "trusty": - if cfg_allow_beta or not is_beta: - is_lts_calls = [mock.call(series)] + if not is_beta: + is_lts_calls = [mock.call(series)] + assert is_lts_calls == m_is_lts.call_args_list @@ -111,7 +104,6 @@ @pytest.mark.parametrize( "series, is_active_esm, disable_apt_auth_only", ( - ("trusty", True, True), ("xenial", True, True), ("bionic", False, False), ("focal", False, False), @@ -136,8 +128,6 @@ @pytest.mark.parametrize( "series, is_lts, is_beta, cfg_allow_beta, disable_apt_auth_only", ( - ("trusty", True, True, None, False), - ("trusty", True, False, True, False), # trusty always false ("xenial", True, True, None, False), # is_beta disables ("xenial", True, False, False, True), # not beta service succeeds ("xenial", True, True, True, True), # cfg allow_true overrides @@ -171,10 +161,11 @@ inst = ESMAppsEntitlement(cfg) with mock.patch.object(ESMAppsEntitlement, "is_beta", is_beta): assert disable_apt_auth_only is inst.disable_apt_auth_only + is_lts_calls = [] - if series != "trusty": - if cfg_allow_beta or not is_beta: - is_lts_calls = [mock.call(series)] + if cfg_allow_beta or not is_beta: + is_lts_calls = [mock.call(series)] + assert is_lts_calls == m_is_lts.call_args_list @@ -197,12 +188,12 @@ entitlement = entitlement_factory( esm_cls, cfg_extension={ - "ua_config": { + "ua_config": { # intentionally using apt_* "apt_http_proxy": "apt_http_proxy_value", "apt_https_proxy": "apt_https_proxy_value", } }, - suites=["trusty"], + suites=["xenial"], ) patched_packages = ["a", "b"] original_exists = os.path.exists @@ -232,7 +223,7 @@ mock.patch.object(entitlement, "can_enable") ) stack.enter_context( - mock.patch(M_GETPLATFORM, return_value={"series": "trusty"}) + mock.patch(M_GETPLATFORM, return_value={"series": "xenial"}) ) stack.enter_context( mock.patch( @@ -260,7 +251,7 @@ ), "http://{}".format(entitlement.name.upper()), "{}-token".format(entitlement.name), - ["trusty"], + ["xenial"], entitlement.repo_key_file, ) ] @@ -286,6 +277,7 @@ mock.call( http_proxy="apt_http_proxy_value", https_proxy="apt_https_proxy_value", + proxy_scope=apt.AptProxyScope.GLOBAL, ) ] == m_setup_apt_proxy.call_args_list assert add_apt_calls == m_add_apt.call_args_list @@ -350,7 +342,7 @@ mock.patch.object(entitlement, "remove_apt_config") ) stack.enter_context( - mock.patch(M_GETPLATFORM, return_value={"series": "trusty"}) + mock.patch(M_GETPLATFORM, return_value={"series": "xenial"}) ) stack.enter_context( mock.patch( @@ -373,7 +365,7 @@ ), "http://{}".format(entitlement.name.upper()), "{}-token".format(entitlement.name), - ["trusty"], + ["xenial"], entitlement.repo_key_file, ) ] @@ -393,7 +385,7 @@ assert 0 == m_add_pinning.call_count assert subp_calls == m_subp.call_args_list if entitlement.name == "esm-infra": - # Enable esm-infra trusty removes apt preferences pin 'never' file + # Enable esm-infra xenial removes apt preferences pin 'never' file unlink_calls = [ mock.call( "/etc/apt/preferences.d/ubuntu-{}".format(entitlement.name) @@ -430,7 +422,7 @@ assert 0 == m_remove_apt.call_count @mock.patch( - "uaclient.util.get_platform_info", return_value={"series": "trusty"} + "uaclient.util.get_platform_info", 
return_value={"series": "xenial"} ) def test_disable_on_can_disable_true_removes_apt_config( self, _m_platform_info, m_update_apt_and_motd_msgs, entitlement, tmpdir diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/tests/test_fips.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/tests/test_fips.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/tests/test_fips.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/tests/test_fips.py 2022-05-18 19:44:15.000000000 +0000 @@ -10,8 +10,13 @@ import mock import pytest -from uaclient import apt, defaults, exceptions, messages, status, util +from uaclient import apt, defaults, exceptions, messages, util from uaclient.clouds.identity import NoCloudTypeReason +from uaclient.entitlements.entitlement_status import ( + ApplicabilityStatus, + ApplicationStatus, + CanEnableFailureReason, +) from uaclient.entitlements.fips import ( CONDITIONAL_PACKAGES_EVERYWHERE, CONDITIONAL_PACKAGES_OPENSSH_HMAC, @@ -21,7 +26,6 @@ FIPSEntitlement, FIPSUpdatesEntitlement, ) -from uaclient.status import CanEnableFailureReason M_PATH = "uaclient.entitlements.fips." M_LIVEPATCH_PATH = "uaclient.entitlements.livepatch.LivepatchEntitlement." @@ -126,7 +130,7 @@ util.prompt_for_confirmation, { "assume_yes": assume_yes, - "msg": status.PROMPT_FIPS_PRE_ENABLE, + "msg": messages.PROMPT_FIPS_PRE_ENABLE, }, ) ], @@ -136,7 +140,7 @@ util.prompt_for_confirmation, { "assume_yes": assume_yes, - "msg": status.PROMPT_FIPS_PRE_DISABLE, + "msg": messages.PROMPT_FIPS_PRE_DISABLE, }, ) ], @@ -146,7 +150,7 @@ ( util.prompt_for_confirmation, { - "msg": status.PROMPT_FIPS_UPDATES_PRE_ENABLE, + "msg": messages.PROMPT_FIPS_UPDATES_PRE_ENABLE, "assume_yes": assume_yes, }, ) @@ -157,7 +161,7 @@ util.prompt_for_confirmation, { "assume_yes": assume_yes, - "msg": status.PROMPT_FIPS_PRE_DISABLE, + "msg": messages.PROMPT_FIPS_PRE_DISABLE, }, ) ], @@ -183,7 +187,7 @@ util.prompt_for_confirmation, { "assume_yes": False, - "msg": status.PROMPT_FIPS_CONTAINER_PRE_ENABLE.format( # noqa: E501 + "msg": messages.PROMPT_FIPS_CONTAINER_PRE_ENABLE.format( # noqa: E501 title="FIPS" ), }, @@ -195,7 +199,7 @@ util.prompt_for_confirmation, { "assume_yes": False, - "msg": status.PROMPT_FIPS_PRE_DISABLE, + "msg": messages.PROMPT_FIPS_PRE_DISABLE, }, ) ], @@ -205,7 +209,7 @@ ( util.prompt_for_confirmation, { - "msg": status.PROMPT_FIPS_CONTAINER_PRE_ENABLE.format( # noqa: E501 + "msg": messages.PROMPT_FIPS_CONTAINER_PRE_ENABLE.format( # noqa: E501 title="FIPS Updates" ), "assume_yes": False, @@ -218,7 +222,7 @@ util.prompt_for_confirmation, { "assume_yes": False, - "msg": status.PROMPT_FIPS_PRE_DISABLE, + "msg": messages.PROMPT_FIPS_PRE_DISABLE, }, ) ], @@ -240,12 +244,12 @@ with mock.patch.object( entitlement, "applicability_status", - return_value=(status.ApplicabilityStatus.APPLICABLE, ""), + return_value=(ApplicabilityStatus.APPLICABLE, ""), ): with mock.patch.object( entitlement, "application_status", - return_value=(status.ApplicationStatus.DISABLED, ""), + return_value=(ApplicationStatus.DISABLED, ""), ): with mock.patch.object( entitlement, @@ -425,7 +429,12 @@ [ ( True, - [mock.call("", status.NOTICE_WRONG_FIPS_METAPACKAGE_ON_CLOUD)], + [ + mock.call( + "", messages.NOTICE_WRONG_FIPS_METAPACKAGE_ON_CLOUD + ), + mock.call("", messages.FIPS_REBOOT_REQUIRED_MSG), + ], ), (False, []), ], @@ -443,7 +452,7 @@ m_repo_enable.return_value = repo_enable_return_value assert repo_enable_return_value is entitlement._perform_enable() 
assert ( - expected_remove_notice_calls == m_remove_notice.call_args_list[:1] + expected_remove_notice_calls == m_remove_notice.call_args_list[:2] ) @mock.patch("uaclient.apt.setup_apt_proxy") @@ -591,10 +600,10 @@ with mock.patch.object( fips_ent, "applicability_status", - return_value=(status.ApplicabilityStatus.APPLICABLE, ""), + return_value=(ApplicabilityStatus.APPLICABLE, ""), ): m_livepatch.return_value = ( - status.ApplicationStatus.ENABLED, + ApplicationStatus.ENABLED, "", ) ret, fail = fips_ent.enable() @@ -606,7 +615,7 @@ @mock.patch("uaclient.util.handle_message_operations") @mock.patch( M_LIVEPATCH_PATH + "application_status", - return_value=((status.ApplicationStatus.DISABLED, "")), + return_value=((ApplicationStatus.DISABLED, "")), ) @mock.patch("uaclient.util.is_container", return_value=False) def test_enable_fails_when_fips_update_service_is_enabled( @@ -628,7 +637,7 @@ ) as m_allow_fips_on_cloud: m_allow_fips_on_cloud.return_value = True m_fips_update.return_value = ( - status.ApplicationStatus.ENABLED, + ApplicationStatus.ENABLED, "", ) result, reason = fips_entitlement.enable() @@ -641,7 +650,7 @@ @mock.patch("uaclient.util.handle_message_operations") @mock.patch( M_LIVEPATCH_PATH + "application_status", - return_value=((status.ApplicationStatus.DISABLED, "")), + return_value=((ApplicationStatus.DISABLED, "")), ) @mock.patch("uaclient.util.is_container", return_value=False) def test_enable_fails_when_fips_updates_service_once_enabled( @@ -680,18 +689,18 @@ entitlement, ): m_handle_message_op.return_value = True - m_cloud_type.return_value = ("azure", None) + m_cloud_type.return_value = ("gce", None) m_platform_info.return_value = {"series": "xenial"} base_path = "uaclient.entitlements.livepatch.LivepatchEntitlement" with mock.patch( "{}.application_status".format(base_path) ) as m_livepatch: - m_livepatch.return_value = (status.ApplicationStatus.DISABLED, "") + m_livepatch.return_value = (ApplicationStatus.DISABLED, "") result, reason = entitlement.enable() assert not result expected_msg = """\ - Ubuntu Xenial does not provide an Azure optimized FIPS kernel""" + Ubuntu Xenial does not provide a GCP optimized FIPS kernel""" assert expected_msg.strip() in reason.message.msg.strip() @mock.patch("uaclient.util.get_platform_info") @@ -722,7 +731,7 @@ "{}.application_status".format(base_path) ) as m_fips_status: m_fips_status.return_value = ( - status.ApplicationStatus.DISABLED, + ApplicationStatus.DISABLED, "", ) result, reason = entitlement.enable() @@ -731,7 +740,9 @@ Ubuntu Test does not provide a GCP optimized FIPS kernel""" assert expected_msg.strip() in reason.message.msg.strip() - @pytest.mark.parametrize("allow_xenial_fips_on_cloud", ((True), (False))) + @pytest.mark.parametrize( + "allow_default_fips_metapackage_on_gcp", ((True), (False)) + ) @pytest.mark.parametrize("cloud_id", (("aws"), ("gce"), ("azure"), (None))) @pytest.mark.parametrize("series", (("xenial"), ("bionic"))) @mock.patch("uaclient.util.is_config_value_true") @@ -740,12 +751,12 @@ m_is_config_value_true, series, cloud_id, - allow_xenial_fips_on_cloud, + allow_default_fips_metapackage_on_gcp, entitlement, ): def mock_config_value(config, path_to_value): - if "allow_xenial_fips_on_cloud" in path_to_value: - return allow_xenial_fips_on_cloud + if "allow_default_fips_metapackage_on_gcp" in path_to_value: + return allow_default_fips_metapackage_on_gcp return False @@ -754,17 +765,19 @@ cloud_id=cloud_id, series=series ) - if cloud_id == "aws" or cloud_id is None: - assert actual_value - elif cloud_id == 
"gce" and series != "bionic": - assert not actual_value - elif cloud_id == "gce": + if cloud_id in ("azure", "aws") or cloud_id is None: assert actual_value - elif all([allow_xenial_fips_on_cloud, series == "xenial"]): + elif all([cloud_id == "gce", allow_default_fips_metapackage_on_gcp]): assert actual_value - elif series == "xenial": + elif all( + [ + cloud_id == "gce", + not allow_default_fips_metapackage_on_gcp, + series == "xenial", + ] + ): assert not actual_value - else: + elif cloud_id == "gce": assert actual_value @pytest.mark.parametrize( @@ -912,11 +925,7 @@ class TestFIPSEntitlementApplicationStatus: @pytest.mark.parametrize( "super_application_status", - [ - s - for s in status.ApplicationStatus - if s is not status.ApplicationStatus.ENABLED - ], + [s for s in ApplicationStatus if s is not ApplicationStatus.ENABLED], ) def test_non_enabled_passed_through( self, entitlement, super_application_status @@ -954,6 +963,9 @@ "", messages.FIPS_SYSTEM_REBOOT_REQUIRED.msg ) + if path_exists: + entitlement.cfg.add_notice("", messages.FIPS_REBOOT_REQUIRED_MSG) + if proc_content == "0": entitlement.cfg.add_notice( "", messages.FIPS_DISABLE_REBOOT_REQUIRED @@ -961,17 +973,18 @@ with mock.patch( M_PATH + "repo.RepoEntitlement.application_status", - return_value=(status.ApplicationStatus.ENABLED, msg), + return_value=(ApplicationStatus.ENABLED, msg), ): with mock.patch("uaclient.util.load_file") as m_load_file: m_load_file.side_effect = fake_load_file with mock.patch("os.path.exists") as m_path_exists: m_path_exists.side_effect = fake_exists - actual_status, actual_msg = ( - entitlement.application_status() - ) + ( + actual_status, + actual_msg, + ) = entitlement.application_status() - expected_status = status.ApplicationStatus.ENABLED + expected_status = ApplicationStatus.ENABLED if path_exists and proc_content == "1": expected_msg = msg assert entitlement.cfg.read_cache("notices") is None @@ -979,9 +992,9 @@ expected_msg = messages.FIPS_PROC_FILE_ERROR.format( file_name=entitlement.FIPS_PROC_FILE ) - expected_status = status.ApplicationStatus.DISABLED + expected_status = ApplicationStatus.DISABLED assert [ - ["", status.NOTICE_FIPS_MANUAL_DISABLE_URL] + ["", messages.NOTICE_FIPS_MANUAL_DISABLE_URL] ] == entitlement.cfg.read_cache("notices") else: expected_msg = messages.FIPS_REBOOT_REQUIRED @@ -996,18 +1009,20 @@ def test_fips_does_not_show_enabled_when_fips_updates_is( self, entitlement ): - with mock.patch(M_PATH + "util.subp") as m_subp: - m_subp.return_value = ( + with mock.patch( + "uaclient.apt.run_apt_cache_policy_command" + ) as m_apt_policy: + m_apt_policy.return_value = ( "1001 http://FIPS-UPDATES/ubuntu" - " xenial-updates/main amd64 Packages\n", - "", + " xenial-updates/main amd64 Packages\n" + "" ) application_status, _ = entitlement.application_status() - expected_status = status.ApplicationStatus.DISABLED + expected_status = ApplicationStatus.DISABLED if isinstance(entitlement, FIPSUpdatesEntitlement): - expected_status = status.ApplicationStatus.ENABLED + expected_status = ApplicationStatus.ENABLED assert expected_status == application_status @@ -1176,21 +1191,19 @@ assert before == after @pytest.mark.parametrize( - "cfg_disable_fips_metapckage_override", ((True), (False)) - ) - @pytest.mark.parametrize( - "series", (("trusty"), ("xenial"), ("bionic"), ("focal")) + "cfg_disable_fips_metapckage_override", (True, False) ) + @pytest.mark.parametrize("series", ("xenial", "bionic", "focal")) @pytest.mark.parametrize( "cloud_id", ( - ("azure-china"), - ("aws-gov"), - 
("aws-china"), - ("azure"), - ("aws"), - ("gce"), - (None), + "azure-china", + "aws-gov", + "aws-china", + "azure", + "aws", + "gce", + None, ), ) @mock.patch("uaclient.util.is_config_value_true") @@ -1292,18 +1305,18 @@ with mock.patch.object( entitlement, "applicability_status", - return_value=(status.ApplicabilityStatus.APPLICABLE, ""), + return_value=(ApplicabilityStatus.APPLICABLE, ""), ): with mock.patch.object( entitlement, "application_status", - return_value=(status.ApplicationStatus.DISABLED, ""), + return_value=(ApplicationStatus.DISABLED, ""), ): with mock.patch( M_PATH + "FIPSEntitlement.application_status" ) as m_fips_status: m_fips_status.return_value = ( - status.ApplicationStatus.ENABLED, + ApplicationStatus.ENABLED, None, ) actual_ret, reason = entitlement.can_enable() diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/tests/test_livepatch.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/tests/test_livepatch.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/tests/test_livepatch.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/tests/test_livepatch.py 2022-05-18 19:44:15.000000000 +0000 @@ -10,7 +10,14 @@ import mock import pytest -from uaclient import apt, exceptions, messages, status +from uaclient import apt, exceptions, messages +from uaclient.entitlements.entitlement_status import ( + ApplicabilityStatus, + ApplicationStatus, + CanEnableFailureReason, + ContractStatus, + UserFacingStatus, +) from uaclient.entitlements.livepatch import ( LIVEPATCH_CMD, LivepatchEntitlement, @@ -21,7 +28,6 @@ ) from uaclient.entitlements.tests.conftest import machine_token from uaclient.snap import SNAP_CMD -from uaclient.status import ApplicationStatus, ContractStatus PLATFORM_INFO_SUPPORTED = MappingProxyType( { @@ -34,7 +40,7 @@ M_PATH = "uaclient.entitlements.livepatch." # mock path M_LIVEPATCH_STATUS = M_PATH + "LivepatchEntitlement.application_status" -DISABLED_APP_STATUS = (status.ApplicationStatus.DISABLED, "") +DISABLED_APP_STATUS = (ApplicationStatus.DISABLED, "") M_BASE_PATH = "uaclient.entitlements.base.UAEntitlement." @@ -249,7 +255,7 @@ with mock.patch("uaclient.util.get_platform_info") as m_platform_info: m_platform_info.return_value = PLATFORM_INFO_SUPPORTED uf_status, details = entitlement.user_facing_status() - assert uf_status == status.UserFacingStatus.INAPPLICABLE + assert uf_status == UserFacingStatus.INAPPLICABLE expected_details = ( "Livepatch is not available for Ubuntu 16.04 LTS" " (Xenial Xerus)." 
@@ -268,7 +274,7 @@ with mock.patch("uaclient.util.get_platform_info") as m_platform_info: m_platform_info.return_value = PLATFORM_INFO_SUPPORTED uf_status, details = entitlement.user_facing_status() - assert uf_status == status.UserFacingStatus.UNAVAILABLE + assert uf_status == UserFacingStatus.UNAVAILABLE assert "Livepatch is not entitled" == details.msg @@ -363,7 +369,7 @@ def test_can_enable_false_on_unsupported_kernel_min_version( self, _m_is_container, _m_livepatch_status, _m_fips_status, entitlement ): - """"False when on a kernel less or equal to minKernelVersion.""" + """False when on a kernel less or equal to minKernelVersion.""" unsupported_min_kernel = copy.deepcopy(dict(PLATFORM_INFO_SUPPORTED)) unsupported_min_kernel["kernel"] = "4.2.9-00-generic" with mock.patch("uaclient.util.get_platform_info") as m_platform: @@ -371,7 +377,7 @@ entitlement = LivepatchEntitlement(entitlement.cfg) result, reason = entitlement.can_enable() assert False is result - assert status.CanEnableFailureReason.INAPPLICABLE == reason.reason + assert CanEnableFailureReason.INAPPLICABLE == reason.reason msg = ( "Livepatch is not available for kernel 4.2.9-00-generic.\n" "Minimum kernel version required: 4.4." @@ -381,7 +387,7 @@ def test_can_enable_false_on_unsupported_kernel_flavor( self, _m_is_container, _m_livepatch_status, _m_fips_status, entitlement ): - """"When on an unsupported kernel, can_enable returns False.""" + """When on an unsupported kernel, can_enable returns False.""" unsupported_kernel = copy.deepcopy(dict(PLATFORM_INFO_SUPPORTED)) unsupported_kernel["kernel"] = "4.4.0-140-notgeneric" with mock.patch("uaclient.util.get_platform_info") as m_platform: @@ -389,7 +395,7 @@ entitlement = LivepatchEntitlement(entitlement.cfg) result, reason = entitlement.can_enable() assert False is result - assert status.CanEnableFailureReason.INAPPLICABLE == reason.reason + assert CanEnableFailureReason.INAPPLICABLE == reason.reason msg = ( "Livepatch is not available for kernel 4.4.0-140-notgeneric.\n" "Supported flavors are: generic, lowlatency." 
@@ -415,7 +421,7 @@ meets_min_version, entitlement, ): - """"When on an unsupported kernel version, can_enable returns False.""" + """When on an unsupported kernel version, can_enable returns False.""" unsupported_kernel = copy.deepcopy(dict(PLATFORM_INFO_SUPPORTED)) unsupported_kernel["kernel"] = kernel_version with mock.patch("uaclient.util.get_platform_info") as m_platform: @@ -426,9 +432,7 @@ else: result, reason = entitlement.can_enable() assert False is result - assert ( - status.CanEnableFailureReason.INAPPLICABLE == reason.reason - ) + assert CanEnableFailureReason.INAPPLICABLE == reason.reason msg = ( "Livepatch is not available for kernel {}.\n" "Minimum kernel version required: 4.4.".format( @@ -440,14 +444,14 @@ def test_can_enable_false_on_unsupported_architecture( self, _m_is_container, _m_livepatch_status, _m_fips_status, entitlement ): - """"When on an unsupported architecture, can_enable returns False.""" + """When on an unsupported architecture, can_enable returns False.""" unsupported_kernel = copy.deepcopy(dict(PLATFORM_INFO_SUPPORTED)) unsupported_kernel["arch"] = "ppc64le" with mock.patch("uaclient.util.get_platform_info") as m_platform: m_platform.return_value = unsupported_kernel result, reason = entitlement.can_enable() assert False is result - assert status.CanEnableFailureReason.INAPPLICABLE == reason.reason + assert CanEnableFailureReason.INAPPLICABLE == reason.reason msg = ( "Livepatch is not available for platform ppc64le.\n" "Supported platforms are: x86_64." @@ -466,7 +470,7 @@ entitlement = LivepatchEntitlement(entitlement.cfg) result, reason = entitlement.can_enable() assert False is result - assert status.CanEnableFailureReason.INAPPLICABLE == reason.reason + assert CanEnableFailureReason.INAPPLICABLE == reason.reason msg = "Cannot install Livepatch on a container." 
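The hunks above repeatedly swap references such as `status.CanEnableFailureReason` for enums imported directly from `uaclient.entitlements.entitlement_status`. A minimal sketch of the new import style, using only names that appear in this diff; the trailing assertion is illustrative:

```python
# New-style imports introduced by this version: the status enums moved out of
# uaclient.status into uaclient.entitlements.entitlement_status.
from uaclient.entitlements.entitlement_status import (
    ApplicabilityStatus,
    ApplicationStatus,
    CanEnableFailureReason,
    UserFacingStatus,
)

# Same members as before, only the module changed (illustrative check).
assert CanEnableFailureReason.INAPPLICABLE is not None
```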
assert msg == reason.message.msg @@ -492,11 +496,11 @@ ): """When livepatch is INACTIVE return False and do no setup.""" m_applicability_status.return_value = ( - status.ApplicabilityStatus.APPLICABLE, + ApplicabilityStatus.APPLICABLE, "", ) m_application_status.return_value = ( - status.ApplicationStatus.DISABLED, + ApplicationStatus.DISABLED, "", ) deltas = {"entitlement": {"directives": {"caCerts": "new"}}} @@ -523,7 +527,7 @@ process_token, ): """Run setup when livepatch ACTIVE and deltas are supported keys.""" - application_status = status.ApplicationStatus.ENABLED + application_status = ApplicationStatus.ENABLED m_application_status.return_value = (application_status, "") deltas = {"entitlement": {"directives": directives}} assert entitlement.process_contract_deltas({}, deltas, False) @@ -557,7 +561,7 @@ process_token, ): """Run livepatch calls setup when resourceToken changes.""" - application_status = status.ApplicationStatus.ENABLED + application_status = ApplicationStatus.ENABLED m_application_status.return_value = (application_status, "") entitlement.process_contract_deltas({}, deltas, False) if any([process_directives, process_token]): @@ -644,7 +648,7 @@ apt_update_success, ): """Install snapd and canonical-livepatch snap when not on system.""" - application_status = status.ApplicationStatus.ENABLED + application_status = ApplicationStatus.ENABLED m_app_status.return_value = application_status, "enabled" def fake_run_apt_update(): @@ -703,7 +707,7 @@ entitlement, ): """Install canonical-livepatch snap when not present on the system.""" - application_status = status.ApplicationStatus.ENABLED + application_status = ApplicationStatus.ENABLED m_app_status.return_value = application_status, "enabled" assert entitlement.enable() assert ( @@ -744,7 +748,7 @@ entitlement, ): """Install canonical-livepatch snap when not present on the system.""" - m_app_status.return_value = status.ApplicationStatus.ENABLED, "enabled" + m_app_status.return_value = ApplicationStatus.ENABLED, "enabled" with mock.patch( M_PATH + "apt.get_installed_packages", return_value=[] ): @@ -785,7 +789,7 @@ entitlement, ): """Do not attempt to install livepatch snap when it is present.""" - application_status = status.ApplicationStatus.ENABLED + application_status = ApplicationStatus.ENABLED m_app_status.return_value = application_status, "enabled" assert entitlement.enable() subp_calls = [ @@ -837,7 +841,7 @@ ): """Do not attempt to disable livepatch snap when it is inactive.""" - m_app_status.return_value = status.ApplicationStatus.DISABLED, "nope" + m_app_status.return_value = ApplicationStatus.DISABLED, "nope" assert entitlement.enable() subp_no_livepatch_disable = [ mock.call( @@ -885,7 +889,7 @@ cls_name ) ) as m_fips: - m_fips.return_value = (status.ApplicationStatus.ENABLED, "") + m_fips.return_value = (ApplicationStatus.ENABLED, "") result, reason = entitlement.enable() assert not result diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/tests/test_repo.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/tests/test_repo.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/entitlements/tests/test_repo.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/entitlements/tests/test_repo.py 2022-05-18 19:44:15.000000000 +0000 @@ -4,7 +4,12 @@ import mock import pytest -from uaclient import apt, exceptions, messages, status, util +from uaclient import apt, exceptions, messages, util +from uaclient.entitlements.entitlement_status import ( + 
ApplicabilityStatus, + ApplicationStatus, + UserFacingStatus, +) from uaclient.entitlements.repo import RepoEntitlement from uaclient.entitlements.tests.conftest import machine_token @@ -60,18 +65,18 @@ ): """When applicability_status is INAPPLICABLE, return INAPPLICABLE.""" platform_unsupported = copy.deepcopy(dict(PLATFORM_INFO_SUPPORTED)) - platform_unsupported["series"] = "trusty" - platform_unsupported["version"] = "14.04 LTS (Trusty Tahr)" + platform_unsupported["series"] = "example" + platform_unsupported["version"] = "01.01 LTS (Example Version)" m_platform_info.return_value = platform_unsupported applicability, details = entitlement.applicability_status() - assert status.ApplicabilityStatus.INAPPLICABLE == applicability + assert ApplicabilityStatus.INAPPLICABLE == applicability expected_details = ( - "Repo Test Class is not available for Ubuntu 14.04" - " LTS (Trusty Tahr)." + "Repo Test Class is not available for Ubuntu 01.01" + " LTS (Example Version)." ) assert expected_details == details.msg uf_status, _ = entitlement.user_facing_status() - assert status.UserFacingStatus.INAPPLICABLE == uf_status + assert UserFacingStatus.INAPPLICABLE == uf_status @mock.patch(M_PATH + "util.get_platform_info") def test_unavailable_on_unentitled(self, m_platform_info, entitlement): @@ -84,9 +89,9 @@ entitlement.cfg.write_cache("machine-token", no_entitlements) m_platform_info.return_value = dict(PLATFORM_INFO_SUPPORTED) applicability, _details = entitlement.applicability_status() - assert status.ApplicabilityStatus.APPLICABLE == applicability + assert ApplicabilityStatus.APPLICABLE == applicability uf_status, uf_details = entitlement.user_facing_status() - assert status.UserFacingStatus.UNAVAILABLE == uf_status + assert UserFacingStatus.UNAVAILABLE == uf_status assert "Repo Test Class is not entitled" == uf_details.msg @@ -118,7 +123,7 @@ entitled, ): """Disable the service on contract transitions to unentitled.""" - application_status = status.ApplicationStatus.ENABLED + application_status = ApplicationStatus.ENABLED m_application_status.return_value = (application_status, "") assert entitlement.process_contract_deltas( {"entitlement": {"entitled": True}}, @@ -138,11 +143,11 @@ ): """Noop when service is inactive and not enableByDefault.""" m_application_status.return_value = ( - status.ApplicationStatus.DISABLED, + ApplicationStatus.DISABLED, "", ) m_applicability_status.return_value = ( - status.ApplicabilityStatus.APPLICABLE, + ApplicabilityStatus.APPLICABLE, "", ) assert not entitlement.process_contract_deltas( @@ -166,11 +171,11 @@ ): """Update apt when inactive, enableByDefault and allow_enable.""" m_application_status.return_value = ( - status.ApplicationStatus.DISABLED, + ApplicationStatus.DISABLED, "", ) m_applicability_status.return_value = ( - status.ApplicabilityStatus.APPLICABLE, + ApplicabilityStatus.APPLICABLE, "", ) assert entitlement.process_contract_deltas( @@ -196,11 +201,11 @@ ): """Log a message when inactive, enableByDefault and allow_enable.""" m_application_status.return_value = ( - status.ApplicationStatus.DISABLED, + ApplicationStatus.DISABLED, "", ) m_applicability_status.return_value = ( - status.ApplicabilityStatus.APPLICABLE, + ApplicabilityStatus.APPLICABLE, "", ) assert entitlement.process_contract_deltas( @@ -237,7 +242,7 @@ ): """Update_apt_config and packages if active and not enableByDefault.""" m_check_apt_url_applied.return_value = False - application_status = status.ApplicationStatus.ENABLED + application_status = ApplicationStatus.ENABLED 
m_application_status.return_value = (application_status, "") deltas = { "entitlement": {"obligations": {"enableByDefault": False}}, @@ -266,9 +271,7 @@ "uaclient.entitlements.base.UAEntitlement.process_contract_deltas" ) @mock.patch("uaclient.config.UAConfig.read_cache") - @mock.patch( - M_PATH + "util.get_platform_info", return_value={"series": "trusty"} - ) + @mock.patch(M_PATH + "util.get_platform_info") @mock.patch(M_PATH + "apt.remove_auth_apt_repo") @mock.patch.object(RepoTestEntitlement, "setup_apt_config") @mock.patch.object(RepoTestEntitlement, "remove_apt_config") @@ -328,9 +331,7 @@ "uaclient.entitlements.base.UAEntitlement.process_contract_deltas" ) @mock.patch("uaclient.config.UAConfig.read_cache") - @mock.patch( - M_PATH + "util.get_platform_info", return_value={"series": "trusty"} - ) + @mock.patch(M_PATH + "util.get_platform_info") @mock.patch(M_PATH + "apt.remove_auth_apt_repo") @mock.patch.object(RepoTestEntitlement, "setup_apt_config") @mock.patch.object(RepoTestEntitlement, "remove_apt_config") @@ -392,10 +393,14 @@ ), ) @mock.patch.object( - RepoTestEntitlement, "_perform_enable", return_value=False + RepoTestEntitlement, + "_perform_enable", + return_value=False, ) @mock.patch.object( - RepoTestEntitlement, "can_enable", return_value=(True, None) + RepoTestEntitlement, + "can_enable", + return_value=(True, None), ) def test_enable_can_exit_on_pre_enable_messaging_hooks( self, @@ -860,7 +865,7 @@ """Calls apt.setup_apt_proxy()""" entitlement.setup_apt_config() assert [ - mock.call(http_proxy=None, https_proxy=None) + mock.call(http_proxy=None, https_proxy=None, proxy_scope=None) ] == m_setup_apt_proxy.call_args_list @mock.patch("uaclient.apt.setup_apt_proxy") @@ -1019,7 +1024,7 @@ application_status, explanation = entitlement.application_status() - assert status.ApplicationStatus.DISABLED == application_status + assert ApplicationStatus.DISABLED == application_status assert ( "Repo Test Class does not have an aptURL directive" == explanation.msg @@ -1033,9 +1038,9 @@ (500, "https://esm.ubuntu.com/ubuntu", True), ), ) - @mock.patch(M_PATH + "apt.run_apt_command") + @mock.patch(M_PATH + "apt.run_apt_cache_policy_command") def test_enabled_status_by_apt_policy( - self, m_run_apt_command, pin, policy_url, enabled, entitlement_factory + self, m_run_apt_policy, pin, policy_url, enabled, entitlement_factory ): """Report ENABLED when apt-policy lists specific aptURL and 500 pin.""" entitlement = entitlement_factory( @@ -1050,15 +1055,15 @@ " release v=18.04,o=UbuntuESMApps,...,n=bionic,l=UbuntuESMApps", " origin esm.ubuntu.com", ] - m_run_apt_command.return_value = "\n".join(policy_lines) + m_run_apt_policy.return_value = "\n".join(policy_lines) application_status, explanation = entitlement.application_status() if enabled: - expected_status = status.ApplicationStatus.ENABLED + expected_status = ApplicationStatus.ENABLED expected_explanation = "Repo Test Class is active" else: - expected_status = status.ApplicationStatus.DISABLED + expected_status = ApplicationStatus.DISABLED expected_explanation = "Repo Test Class is not configured" assert expected_status == application_status assert expected_explanation == explanation.msg diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/event_logger.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/event_logger.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/event_logger.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/event_logger.py 2022-05-18 19:44:15.000000000 +0000 @@ -8,12 +8,14 @@ import 
enum import json +import os import sys -from typing import Dict, List, Optional, Set # noqa: F401 +from typing import Any, Dict, List, Optional, Set, Union # noqa: F401 -from uaclient.status import format_machine_readable_output +from uaclient.defaults import CONFIG_FIELD_ENVVAR_ALLOWLIST JSON_SCHEMA_VERSION = "0.1" +EventFieldErrorType = Optional[Union[str, Dict[str, str]]] _event_logger = None @@ -40,10 +42,33 @@ YAML = object() +def format_machine_readable_output(status: Dict[str, Any]) -> Dict[str, Any]: + status["environment_vars"] = [ + {"name": name, "value": value} + for name, value in sorted(os.environ.items()) + if name.lower() in CONFIG_FIELD_ENVVAR_ALLOWLIST + or name.startswith("UA_FEATURES") + or name == "UA_CONFIG_FILE" + ] + + if not status.get("simulated"): + available_services = [ + service + for service in status.get("services", []) + if service.get("available", "yes") == "yes" + ] + status["services"] = available_services + + # We don't need the origin info in the json output + status.pop("origin", "") + + return status + + class EventLogger: def __init__(self): - self._error_events = [] # type: List[Dict[str, Optional[str]]] - self._warning_events = [] # type: List[Dict[str, Optional[str]]] + self._error_events = [] # type: List[Dict[str, EventFieldErrorType]] + self._warning_events = [] # type: List[Dict[str, EventFieldErrorType]] self._processed_services = set() # type: Set[str] self._failed_services = set() # type: Set[str] self._needs_reboot = False @@ -102,21 +127,25 @@ self, msg: str, service: Optional[str], - event_dict: List[Dict[str, Optional[str]]], + event_dict: List[Dict[str, EventFieldErrorType]], code: Optional[str] = None, event_type: Optional[str] = None, + additional_info: Optional[Dict[str, str]] = None, ): if event_type is None: event_type = "service" if service else "system" - event_dict.append( - { - "type": event_type, - "service": service, - "message": msg, - "message_code": code, - } - ) + event_entry = { + "type": event_type, + "service": service, + "message": msg, + "message_code": code, + } # type: Dict[str, EventFieldErrorType] + + if additional_info: + event_entry["additional_info"] = additional_info + + event_dict.append(event_entry) def error( self, @@ -124,6 +153,7 @@ error_code: Optional[str] = None, service: Optional[str] = None, error_type: Optional[str] = None, + additional_info: Optional[Dict[str, str]] = None, ): """ Store an error in the event logger. 
@@ -138,6 +168,7 @@ event_dict=self._error_events, code=error_code, event_type=error_type, + additional_info=additional_info, ) def warning(self, warning_msg: str, service: Optional[str] = None): diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/exceptions.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/exceptions.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/exceptions.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/exceptions.py 2022-05-18 19:44:15.000000000 +0000 @@ -17,9 +17,15 @@ exit_code = 1 - def __init__(self, msg: str, msg_code: Optional[str] = None) -> None: + def __init__( + self, + msg: str, + msg_code: Optional[str] = None, + additional_info: Optional[Dict[str, str]] = None, + ) -> None: self.msg = msg self.msg_code = msg_code + self.additional_info = additional_info class APTInstallError(UserFacingError): @@ -98,9 +104,13 @@ class InvalidServiceToDisableError(UserFacingError): - def __init__(self, operation: str, name: str, service_msg: str) -> None: + def __init__( + self, operation: str, invalid_service: str, service_msg: str + ) -> None: msg = messages.INVALID_SERVICE_OP_FAILURE.format( - operation=operation, name=name, service_msg=service_msg + operation=operation, + invalid_service=invalid_service, + service_msg=service_msg, ) super().__init__(msg=msg.msg, msg_code=msg.name) @@ -162,6 +172,16 @@ super().__init__(msg=msg.msg, msg_code=msg.name) +class AttachError(UserFacingError): + """An exception to be raised when we detect a generic attach error.""" + + exit_code = 1 + + def __init__(self): + msg = messages.ATTACH_FAILURE + super().__init__(msg=msg.msg, msg_code=msg.name) + + class AttachInvalidConfigFileError(UserFacingError): def __init__(self, config_name: str, error: str) -> None: msg = messages.ATTACH_CONFIG_READ_ERROR.format( @@ -191,6 +211,7 @@ """ def __init__(self, lock_request: str, lock_holder: str, pid: int): + self.lock_holder = lock_holder msg = messages.LOCK_HELD_ERROR.format( lock_request=lock_request, lock_holder=lock_holder, pid=pid ) @@ -359,3 +380,19 @@ if details: return prefix + ": [" + self.url + "] " + ", ".join(details) return prefix + ": [" + self.url + "]" + + +class InPlaceUpgradeNotSupportedError(Exception): + pass + + +class IsProLicensePresentError(Exception): + pass + + +class CancelProLicensePolling(IsProLicensePresentError): + pass + + +class DelayProLicensePolling(IsProLicensePresentError): + pass diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/jobs/__init__.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/jobs/__init__.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/jobs/__init__.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/jobs/__init__.py 2022-05-18 19:44:15.000000000 +0000 @@ -1,14 +0,0 @@ -from uaclient import config, util -from uaclient.clouds.identity import get_cloud_type - - -def enable_license_check_if_applicable(cfg: config.UAConfig): - series = util.get_platform_info()["series"] - if "gce" in get_cloud_type() and util.is_lts(series): - cfg.write_cache("marker-license-check", "") - - -def disable_license_check_if_applicable(cfg: config.UAConfig): - if cfg.cache_key_exists("marker-license-check"): - cfg.delete_cache_key("marker-license-check") - util.subp(["systemctl", "stop", "ua-license-check.timer"]) diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/jobs/license_check.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/jobs/license_check.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/jobs/license_check.py 
2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/jobs/license_check.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,62 +0,0 @@ -""" -Try to auto-attach in a GCP instance. This should only work -if the instance has a new UA license attached to it -""" -import logging - -from uaclient import config, exceptions, jobs -from uaclient.cli import action_auto_attach -from uaclient.clouds.gcp import GCP_LICENSES, UAAutoAttachGCPInstance -from uaclient.clouds.identity import get_cloud_type -from uaclient.util import get_platform_info - -LOG = logging.getLogger("ua_lib.license_check.jobs.license_check") - - -def gcp_auto_attach(cfg: config.UAConfig) -> bool: - # We will not do anything in a non-GCP cloud - cloud_id, _ = get_cloud_type() - if not cloud_id or cloud_id != "gce": - # If we are not running on GCP cloud, we shouldn't run this - # job anymore - LOG.info("Disabling gcp_auto_attach job. Not running on GCP instance") - jobs.disable_license_check_if_applicable(cfg) - return False - - # If the instance is already attached we will not do anything. - # This implies that the user may have a new license attached to the - # instance, but we will not perfom the change through this job. - if cfg.is_attached: - LOG.info("Disabling gcp_auto_attach job. Already attached") - jobs.disable_license_check_if_applicable(cfg) - return False - - series = get_platform_info()["series"] - if series not in GCP_LICENSES: - LOG.info("Disabling gcp_auto_attach job. Not on LTS") - jobs.disable_license_check_if_applicable(cfg) - return False - - # Only try to auto_attach if the license is found in the metadata. - # If there is a problem finding the metadata, do not error out. - try: - licenses = UAAutoAttachGCPInstance().get_licenses_from_identity() - except Exception: - return False - - if GCP_LICENSES[series] in licenses: - try: - # This function already uses the assert lock decorator, - # which means that we don't need to make create another - # lock only for the job - action_auto_attach(args=None, cfg=cfg) - return True - except exceptions.NonAutoAttachImageError: - # If we get a NonAutoAttachImageError we know - # that the machine is not ready yet to perform an - # auto-attach operation (i.e. the license may not - # have been appended yet). If that happens, we will not - # error out. 
- pass - - return False diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/jobs/tests/test_gcp_auto_attach.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/jobs/tests/test_gcp_auto_attach.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/jobs/tests/test_gcp_auto_attach.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/jobs/tests/test_gcp_auto_attach.py 1970-01-01 00:00:00.000000000 +0000 @@ -1,144 +0,0 @@ -import logging - -import mock -import pytest - -from uaclient.clouds.gcp import UAAutoAttachGCPInstance -from uaclient.exceptions import NonAutoAttachImageError -from uaclient.jobs.license_check import gcp_auto_attach - - -@mock.patch( - "uaclient.jobs.license_check.GCP_LICENSES", - {"ubuntu-lts": "test-license-id"}, -) -class TestGCPAutoAttachJob: - @mock.patch("uaclient.jobs.license_check.get_cloud_type") - @mock.patch("uaclient.jobs.license_check.action_auto_attach") - def test_gcp_auto_attach_already_attached( - self, m_auto_attach, m_cloud_type, FakeConfig - ): - m_cloud_type.return_value = ("gce", None) - cfg = FakeConfig.for_attached_machine() - assert gcp_auto_attach(cfg) is False - assert m_auto_attach.call_count == 0 - - @pytest.mark.parametrize("caplog_text", [logging.DEBUG], indirect=True) - @pytest.mark.parametrize( - "cloud_type", (("gce"), ("azure"), ("aws"), (None)) - ) - @mock.patch("uaclient.jobs.license_check.get_cloud_type") - @mock.patch.object(UAAutoAttachGCPInstance, "get_licenses_from_identity") - @mock.patch("uaclient.jobs.license_check.get_platform_info") - @mock.patch("uaclient.jobs.license_check.action_auto_attach") - def test_gcp_auto_attach( - self, - m_auto_attach, - m_platform_info, - m_get_licenses, - m_cloud_type, - cloud_type, - caplog_text, - FakeConfig, - ): - m_cloud_type.return_value = (cloud_type, None) - m_get_licenses.return_value = ["test-license-id"] - m_platform_info.return_value = {"series": "ubuntu-lts"} - cfg = FakeConfig() - - m_auto_attach.return_value = 0 - return_value = gcp_auto_attach(cfg) - - if cloud_type != "gce": - assert m_auto_attach.call_count == 0 - assert ( - "Disabling gcp_auto_attach job. 
Not running on GCP instance" - ) in caplog_text() - assert return_value is False - - else: - assert m_auto_attach.call_count == 1 - assert return_value is True - - @mock.patch("uaclient.jobs.license_check.get_cloud_type") - @mock.patch.object(UAAutoAttachGCPInstance, "get_licenses_from_identity") - @mock.patch("uaclient.jobs.license_check.get_platform_info") - @mock.patch("uaclient.jobs.license_check.action_auto_attach") - def test_gcp_job_dont_fail_if_non_auto_attach_image_error_is_raised( - self, - m_auto_attach, - m_platform_info, - m_get_licenses, - m_cloud_type, - FakeConfig, - ): - m_cloud_type.return_value = ("gce", None) - m_get_licenses.return_value = ["test-license-id"] - m_platform_info.return_value = {"series": "ubuntu-lts"} - m_auto_attach.side_effect = NonAutoAttachImageError("error") - cfg = FakeConfig() - - assert gcp_auto_attach(cfg) is False - assert m_auto_attach.call_count == 1 - - @mock.patch("uaclient.jobs.license_check.get_cloud_type") - @mock.patch.object(UAAutoAttachGCPInstance, "get_licenses_from_identity") - @mock.patch("uaclient.jobs.license_check.get_platform_info") - @mock.patch("uaclient.jobs.license_check.action_auto_attach") - def test_gcp_job_dont_fail_if_licenses_fail( - self, - m_auto_attach, - m_platform_info, - m_get_licenses, - m_cloud_type, - FakeConfig, - ): - m_cloud_type.return_value = ("gce", None) - m_get_licenses.side_effect = TypeError("error") - m_platform_info.return_value = {"series": "ubuntu-lts"} - cfg = FakeConfig() - - assert gcp_auto_attach(cfg) is False - assert m_auto_attach.call_count == 0 - assert m_get_licenses.call_count == 1 - - @mock.patch("uaclient.jobs.license_check.get_cloud_type") - @mock.patch.object(UAAutoAttachGCPInstance, "get_licenses_from_identity") - @mock.patch("uaclient.jobs.license_check.get_platform_info") - @mock.patch("uaclient.jobs.license_check.action_auto_attach") - def test_gcp_auto_attach_license_not_present( - self, - m_auto_attach, - m_platform_info, - m_get_licenses, - m_cloud_type, - FakeConfig, - ): - m_cloud_type.return_value = ("gce", None) - m_get_licenses.return_value = ["unsupported-license"] - m_platform_info.return_value = {"series": "ubuntu-lts"} - cfg = FakeConfig() - - assert gcp_auto_attach(cfg) is False - assert m_auto_attach.call_count == 0 - assert m_get_licenses.call_count == 1 - - @mock.patch("uaclient.jobs.license_check.get_cloud_type") - @mock.patch.object(UAAutoAttachGCPInstance, "get_licenses_from_identity") - @mock.patch("uaclient.jobs.license_check.get_platform_info") - @mock.patch("uaclient.jobs.license_check.action_auto_attach") - def test_gcp_auto_attach_skips_non_lts( - self, - m_auto_attach, - m_platform_info, - m_get_licenses, - m_cloud_type, - FakeConfig, - ): - m_cloud_type.return_value = ("gce", None) - m_platform_info.return_value = {"series": "ubuntu-non-lts"} - cfg = FakeConfig() - - assert gcp_auto_attach(cfg) is False - assert m_auto_attach.call_count == 0 - assert m_get_licenses.call_count == 0 diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/jobs/update_messaging.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/jobs/update_messaging.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/jobs/update_messaging.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/jobs/update_messaging.py 2022-05-18 19:44:15.000000000 +0000 @@ -9,9 +9,11 @@ import enum import logging import os +from os.path import exists from typing import List, Tuple from uaclient import config, defaults, entitlements, util +from 
uaclient.entitlements.entitlement_status import ApplicationStatus from uaclient.messages import ( ANNOUNCE_ESM_TMPL, CONTRACT_EXPIRED_APT_NO_PKGS_TMPL, @@ -23,7 +25,6 @@ DISABLED_MOTD_NO_PKGS_TMPL, UBUNTU_NO_WARRANTY, ) -from uaclient.status import ApplicationStatus @enum.unique @@ -52,8 +53,13 @@ UBUNTU_NO_WARRANTY = "ubuntu-no-warranty" +UPDATE_NOTIFIER_MOTD_SCRIPT = ( + "/usr/lib/update-notifier/update-motd-updates-available" +) + + def get_contract_expiry_status( - cfg: config.UAConfig + cfg: config.UAConfig, ) -> Tuple[ContractExpiryStatus, int]: """Return a tuple [ContractExpiryStatus, num_days]""" if not cfg.is_attached: @@ -206,13 +212,13 @@ no_warranty_file = ExternalMessage.UBUNTU_NO_WARRANTY.value msg_dir = os.path.join(cfg.data_dir, "messages") - apps_cls = entitlements.entitlement_factory("esm-apps") + apps_cls = entitlements.entitlement_factory(cfg=cfg, name="esm-apps") apps_inst = apps_cls(cfg) config_allow_beta = util.is_config_value_true( config=cfg.cfg, path_to_value="features.allow_beta" ) apps_valid = bool(config_allow_beta or not apps_cls.is_beta) - infra_cls = entitlements.entitlement_factory("esm-infra") + infra_cls = entitlements.entitlement_factory(cfg=cfg, name="esm-infra") infra_inst = infra_cls(cfg) expiry_status, remaining_days = get_contract_expiry_status(cfg) @@ -235,7 +241,7 @@ _write_template_or_remove( no_warranty_msg, os.path.join(msg_dir, no_warranty_file) ) - if not msg_esm_infra and series != "trusty": + if not msg_esm_infra: # write_apt_and_motd_templates is only called if util.is_lts(series) msg_esm_apps = apps_valid @@ -287,12 +293,12 @@ def write_esm_announcement_message(cfg: config.UAConfig, series: str) -> None: """Write human-readable messages if ESM is offered on this LTS release. - Do not write ESM announcements on trusty, esm-apps is enable or beta. + Do not write ESM announcements if esm-apps is enabled or beta. :param cfg: UAConfig instance for this environment. :param series: string of Ubuntu release series: 'xenial'. 
""" - apps_cls = entitlements.entitlement_factory("esm-apps") + apps_cls = entitlements.entitlement_factory(cfg=cfg, name="esm-apps") apps_inst = apps_cls(cfg) enabled_status = ApplicationStatus.ENABLED apps_not_enabled = apps_inst.application_status()[0] != enabled_status @@ -311,7 +317,7 @@ ) else: ua_esm_url = defaults.BASE_ESM_URL - if all([series != "trusty", apps_not_beta, apps_not_enabled]): + if apps_not_beta and apps_not_enabled: util.write_file( esm_news_file, "\n" + ANNOUNCE_ESM_TMPL.format(url=ua_esm_url) ) @@ -352,3 +358,18 @@ # Now that we've setup/cleanedup templates render them with apt-hook util.subp(["/usr/lib/ubuntu-advantage/apt-esm-hook", "process-templates"]) return True + + +def refresh_motd(): + # If update-notifier is present, we might as well update + # the package updates count related to MOTD + if exists(UPDATE_NOTIFIER_MOTD_SCRIPT): + # If this command fails, we shouldn't break the entire command, + # since this command should already be triggered by + # update-notifier apt hooks + try: + util.subp([UPDATE_NOTIFIER_MOTD_SCRIPT, "--force"]) + except Exception as exc: + logging.exception(exc) + + util.subp(["sudo", "systemctl", "restart", "motd-news.service"]) diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/jobs/update_state.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/jobs/update_state.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/jobs/update_state.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/jobs/update_state.py 2022-05-18 19:44:15.000000000 +0000 @@ -3,9 +3,10 @@ """ from uaclient.config import UAConfig +from uaclient.status import status def update_status(cfg: UAConfig) -> bool: if cfg.is_attached: - cfg.status() + status(cfg=cfg) return True diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/messages.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/messages.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/messages.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/messages.py 2022-05-18 19:44:15.000000000 +0000 @@ -1,3 +1,5 @@ +from typing import Dict, Optional # noqa: F401 + from uaclient.defaults import BASE_UA_URL, DOCUMENTATION_URL @@ -5,6 +7,10 @@ def __init__(self, name: str, msg: str): self.name = name self.msg = msg + # we should use this field whenever we want to provide + # extra information to the message. This is specially + # useful if the message represents an error. + self.additional_info = None # type: Optional[Dict[str, str]] class FormattedNamedMessage(NamedMessage): @@ -140,6 +146,17 @@ REFRESH_CONTRACT_FAILURE = "Unable to refresh your subscription" REFRESH_CONFIG_SUCCESS = "Successfully processed your ua configuration." REFRESH_CONFIG_FAILURE = "Unable to process uaclient.conf" +REFRESH_MESSAGES_SUCCESS = ( + "Successfully updated UA related APT and MOTD messages." +) +REFRESH_MESSAGES_FAILURE = "Unable to update UA related APT and MOTD messages." + +UPDATE_CHECK_CONTRACT_FAILURE = ( + """Failed to check for change in machine contract. Reason: {reason}""" +) +UPDATE_MOTD_NO_REQUIRED_CMD = ( + "Required command to update MOTD messages not found: {cmd}." +) INCOMPATIBLE_SERVICE = """\ {service_being_enabled} cannot be enabled with {incompatible_service}. @@ -231,10 +248,13 @@ * Autogenerated by ubuntu-advantage-tools * Do not edit this file directly * - * To change what ubuntu-advantage-tools sets, run one of the following: - * Substitute "apt_https_proxy" for "apt_http_proxy" as necessary. 
- * sudo ua config set apt_http_proxy= - * sudo ua config unset apt_http_proxy + * To change what ubuntu-advantage-tools sets, use the `ua config set` + * or the `ua config unset` commands to set/unset either: + * global_apt_http_proxy and global_apt_https_proxy + * for a global apt proxy + * or + * ua_apt_http_proxy and ua_apt_https_proxy + * for an apt proxy that only applies to UA related repos. */ """ @@ -274,15 +294,29 @@ + BASE_UA_URL, ) -ENABLE_FAILURE_UNATTACHED = FormattedNamedMessage( - "enable-failure-unattached", +VALID_SERVICE_FAILURE_UNATTACHED = FormattedNamedMessage( + "valid-service-failure-unattached", """\ -To use '{name}' you need an Ubuntu Advantage subscription +To use '{valid_service}' you need an Ubuntu Advantage subscription Personal and community subscriptions are available at no charge See """ + BASE_UA_URL, ) +INVALID_SERVICE_OP_FAILURE = FormattedNamedMessage( + "invalid-service-or-failure", + """\ +Cannot {operation} unknown service '{invalid_service}'. +{service_msg}""", +) + +MIXED_SERVICES_FAILURE_UNATTACHED = FormattedNamedMessage( + "mixed-services-failure-unattached", + INVALID_SERVICE_OP_FAILURE.tmpl_msg + + "\n" + + VALID_SERVICE_FAILURE_UNATTACHED.tmpl_msg, +) + FAILED_DISABLING_DEPENDENT_SERVICE = FormattedNamedMessage( "failed-disabling-dependent-service", """\ @@ -582,13 +616,6 @@ "Could not determine contract delta service type {orig} {new}", ) -INVALID_SERVICE_OP_FAILURE = FormattedNamedMessage( - "invalid-service-or-failure", - """\ -Cannot {operation} unknown service '{name}'. -{service_msg}""", -) - LOCK_HELD = FormattedNamedMessage( "lock-held", """Operation in progress: {lock_holder} (pid:{pid})""" ) @@ -698,8 +725,10 @@ The real-time kernel is a beta version of the 22.04 Ubuntu kernel with the PREEMPT_RT patchset integrated for x86_64 and ARM64. -{bold}You will not be able to revert to your original kernel after enabling\ - real-time.{end_bold} +{bold}\ +This will change your kernel. You will need to manually configure grub to +revert back to your original kernel after enabling real-time.\ +{end_bold} Do you want to continue? [ default = Yes ]: (Y/n) """.format( bold=TxtColor.BOLD, end_bold=TxtColor.ENDC @@ -719,3 +748,101 @@ LOG_CONNECTIVITY_ERROR_WITH_URL_TMPL = ( CONNECTIVITY_ERROR.msg + " Failed to access URL: {url}. {error}" ) + +SETTING_SERVICE_PROXY_SCOPE = "Setting {scope} APT proxy" +WARNING_APT_PROXY_SETUP = """\ +Warning: apt_{protocol_type}_proxy has been renamed to global_apt_{protocol_type}_proxy.""" # noqa: E501 +WARNING_APT_PROXY_OVERWRITE = """\ +Warning: Setting the {current_proxy} proxy will overwrite the {previous_proxy} +proxy previously set via `ua config`. +""" +WARNING_DEPRECATED_APT_HTTP = """\ +Using deprecated "apt_http_proxy" config field. +Please migrate to using "global_apt_http_proxy" +""" +WARNING_DEPRECATED_APT_HTTPS = """\ +Using deprecated "apt_https_proxy" config field. +Please migrate to using "global_apt_https_proxy" +""" + +ERROR_PROXY_CONFIGURATION = """\ +Error: Setting global apt proxy and ua scoped apt proxy +at the same time is unsupported. +Cancelling config process operation. +""" + +AVAILABILITY_FROM_UNKNOWN_SERVICE = """\ +Ignoring availability of unknown service {service} from contract server +""" + +NOTICE_FIPS_MANUAL_DISABLE_URL = """\ +FIPS kernel is running in a disabled state. + To manually remove fips kernel: https://discourse.ubuntu.com/t/20738 +""" +NOTICE_WRONG_FIPS_METAPACKAGE_ON_CLOUD = """\ +Warning: FIPS kernel is not optimized for your specific cloud. 
+To fix it, run the following commands: + + 1. sudo ua disable fips + 2. sudo apt-get remove ubuntu-fips + 3. sudo ua enable fips --assume-yes + 4. sudo reboot +""" +NOTICE_DAEMON_AUTO_ATTACH_LOCK_HELD = """\ +Detected an Ubuntu Pro license but failed to auto attach because +"{operation}" was in progress. +Please run `ua auto-attach` to upgrade to Pro. +""" +NOTICE_DAEMON_AUTO_ATTACH_FAILED = """\ +Detected an Ubuntu Pro license but failed to auto attach. +Please run `ua auto-attach` to upgrade to Pro. +If that fails then please contact support. +""" + +PROMPT_YES_NO = """Are you sure? (y/N) """ +PROMPT_FIPS_PRE_ENABLE = ( + """\ +This will install the FIPS packages. The Livepatch service will be unavailable. +Warning: This action can take some time and cannot be undone. +""" + + PROMPT_YES_NO +) +PROMPT_FIPS_UPDATES_PRE_ENABLE = ( + """\ +This will install the FIPS packages including security updates. +Warning: This action can take some time and cannot be undone. +""" + + PROMPT_YES_NO +) +PROMPT_FIPS_CONTAINER_PRE_ENABLE = ( + """\ +Warning: Enabling {title} in a container. + This will install the FIPS packages but not the kernel. + This container must run on a host with {title} enabled to be + compliant. +Warning: This action can take some time and cannot be undone. +""" + + PROMPT_YES_NO +) + +PROMPT_FIPS_PRE_DISABLE = ( + """\ +This will disable the FIPS entitlement but the FIPS packages will remain installed. +""" # noqa + + PROMPT_YES_NO +) + +PROMPT_ENTER_TOKEN = """\ +Enter your token (from {}) to attach this system:""".format( + BASE_UA_URL +) +PROMPT_EXPIRED_ENTER_TOKEN = """\ +Enter your new token to renew UA subscription on this system:""" +PROMPT_UA_SUBSCRIPTION_URL = """\ +Open a browser to: {}/subscribe""".format( + BASE_UA_URL +) + +NOTICE_REFRESH_CONTRACT_WARNING = """\ +A change has been detected in your contract. +Please run `sudo ua refresh`.""" diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/security.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/security.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/security.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/security.py 2022-05-18 19:44:15.000000000 +0000 @@ -7,7 +7,7 @@ from datetime import datetime from typing import Any, Dict, List, NamedTuple, Optional, Set, Tuple -from uaclient import apt, exceptions, messages, serviceclient, status, util +from uaclient import apt, exceptions, messages, serviceclient, util from uaclient.clouds.identity import ( CLOUD_TYPE_TO_TITLE, PRO_CLOUDS, @@ -16,6 +16,11 @@ from uaclient.config import UAConfig from uaclient.defaults import BASE_UA_URL, PRINT_WRAP_WIDTH from uaclient.entitlements import entitlement_factory +from uaclient.entitlements.entitlement_status import ( + ApplicabilityStatus, + UserFacingStatus, +) +from uaclient.status import colorize_commands CVE_OR_USN_REGEX = ( r"((CVE|cve)-\d{4}-\d{4,7}$|(USN|usn|LSN|lsn)-\d{1,5}-\d{1,2}$)" @@ -86,6 +91,7 @@ headers=headers, method=method, query_params=query_params, + potentially_sensitive=False, ) def get_cves( @@ -444,11 +450,7 @@ The dict keys will be source package name: "krb5". The value will be a dict with keys binary_pkg and version. 
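The messages.py and exceptions.py hunks above rename the `{name}` placeholder of INVALID_SERVICE_OP_FAILURE to `{invalid_service}` and regroup it with the unattached-failure templates. A small sketch of formatting it the way exceptions.InvalidServiceToDisableError now does; the operation and service strings here are made up for illustration:

```python
from uaclient import messages

named = messages.INVALID_SERVICE_OP_FAILURE.format(
    operation="disable",              # illustrative value
    invalid_service="not-a-service",  # illustrative value
    service_msg="",                   # illustrative value
)
print(named.name)  # "invalid-service-or-failure"
print(named.msg)
```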
""" - series = util.get_platform_info()["series"] - if series == "trusty": - status_field = "${Status}" - else: - status_field = "${db:Status-Status}" + status_field = "${db:Status-Status}" out, _err = util.subp( [ "dpkg-query", @@ -801,7 +803,7 @@ elif pocket == UA_APPS_POCKET: service_to_check = "esm-apps" - ent_cls = entitlement_factory(service_to_check) + ent_cls = entitlement_factory(cfg=cfg, name=service_to_check) return ent_cls(cfg) if ent_cls else None @@ -813,7 +815,7 @@ # If the service is already enabled, we proceed with the fix # even if the service is a beta stage. - if ent_status == status.UserFacingStatus.ACTIVE: + if ent_status == UserFacingStatus.ACTIVE: return False return not ent.valid_service @@ -1055,16 +1057,18 @@ from uaclient import cli - print(status.colorize_commands([["ua", "attach", token]])) - return bool( - 0 - == cli.action_attach( + print(colorize_commands([["ua", "attach", token]])) + try: + ret_code = cli.action_attach( argparse.Namespace( token=token, auto_enable=True, format="cli", attach_config=None ), cfg, ) - ) + return ret_code == 0 + except exceptions.UserFacingError as err: + print(err.msg) + return False def _prompt_for_attach(cfg: UAConfig) -> bool: @@ -1081,11 +1085,11 @@ if choice == "c": return False if choice == "s": - print(status.PROMPT_UA_SUBSCRIPTION_URL) + print(messages.PROMPT_UA_SUBSCRIPTION_URL) # TODO(GH: #1413: magic subscription attach) input("Hit [Enter] when subscription is complete.") if choice in ("a", "s"): - print(status.PROMPT_ENTER_TOKEN) + print(messages.PROMPT_ENTER_TOKEN) token = input("> ") return _run_ua_attach(cfg, token) @@ -1108,12 +1112,15 @@ ) if choice == "e": - print(status.colorize_commands([["ua", "enable", service]])) + print(colorize_commands([["ua", "enable", service]])) return bool( 0 == cli.action_enable( argparse.Namespace( - service=[service], assume_yes=True, beta=False + service=[service], + assume_yes=True, + beta=False, + format="cli", ), cfg, ) @@ -1131,11 +1138,11 @@ if ent: ent_status, _ = ent.user_facing_status() - if ent_status == status.UserFacingStatus.ACTIVE: + if ent_status == UserFacingStatus.ACTIVE: return True applicability_status, _ = ent.applicability_status() - if applicability_status == status.ApplicabilityStatus.APPLICABLE: + if applicability_status == ApplicabilityStatus.APPLICABLE: if _prompt_for_enable(cfg, ent.name): return True else: @@ -1172,10 +1179,12 @@ valid_choices=["r", "c"], ) if choice == "r": - print(status.PROMPT_EXPIRED_ENTER_TOKEN) + print(messages.PROMPT_EXPIRED_ENTER_TOKEN) token = input("> ") - print(status.colorize_commands([["ua", "detach"]])) - cli.action_detach(argparse.Namespace(assume_yes=True), cfg) + print(colorize_commands([["ua", "detach"]])) + cli.action_detach( + argparse.Namespace(assume_yes=True, format="cli"), cfg + ) return _run_ua_attach(cfg, token) return False @@ -1225,7 +1234,7 @@ return False print( - status.colorize_commands( + colorize_commands( [ ["apt", "update", "&&"] + ["apt", "install", "--only-upgrade", "-y"] @@ -1233,9 +1242,7 @@ ] ) ) - apt.run_apt_command( - cmd=["apt-get", "update"], error_msg=messages.APT_UPDATE_FAILED.msg - ) + apt.run_apt_update_command() apt.run_apt_command( cmd=["apt-get", "install", "--only-upgrade", "-y"] + upgrade_packages, error_msg=messages.APT_INSTALL_FAILED.msg, diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/security_status.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/security_status.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/security_status.py 2022-04-14 18:32:30.000000000 +0000 
+++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/security_status.py 2022-05-18 19:44:15.000000000 +0000 @@ -6,20 +6,18 @@ from apt import package as apt_package from uaclient.config import UAConfig +from uaclient.status import status from uaclient.util import get_platform_info series = get_platform_info()["series"] ESM_SERVICES = ("esm-infra", "esm-apps") -SERVICE_TO_ORIGIN_INFORMATION = { - "standard-security": ("Ubuntu", "{}-security".format(series)), - "esm-apps": ("UbuntuESMApps", "{}-apps-security".format(series)), - "esm-infra": ("UbuntuESM", "{}-infra-security".format(series)), -} ORIGIN_INFORMATION_TO_SERVICE = { - v: k for k, v in SERVICE_TO_ORIGIN_INFORMATION.items() + ("Ubuntu", "{}-security".format(series)): "standard-security", + ("UbuntuESMApps", "{}-apps-security".format(series)): "esm-apps", + ("UbuntuESM", "{}-infra-security".format(series)): "esm-infra", } @@ -31,18 +29,35 @@ UNAVAILABLE = "upgrade_unavailable" -def list_esm_for_package(package: apt_package.Package) -> List[str]: - esm_services = [] - for origin in package.installed.origins: - if (origin.origin, origin.archive) == SERVICE_TO_ORIGIN_INFORMATION[ - "esm-infra" - ]: - esm_services.append("esm-infra") - if (origin.origin, origin.archive) == SERVICE_TO_ORIGIN_INFORMATION[ - "esm-apps" - ]: - esm_services.append("esm-apps") - return esm_services +def get_origin_for_package(package: apt_package.Package) -> str: + """ + Returns the origin for a package installed in the system. + + Technically speaking, packages don't have origins - their versions do. + We check the available versions (installed, candidate) to determine the + most reasonable origin for the package. + """ + available_origins = package.installed.origins + + # If the installed version for a package has a single origin, it means that + # only the local dpkg reference is there. Then, we check if there is a + # candidate version. No candidate means we don't know anything about the + # package. Otherwise we check for the origins of the candidate version. + if len(available_origins) == 1: + if package.installed == package.candidate: + return "unknown" + available_origins = package.candidate.origins + + for origin in available_origins: + service = ORIGIN_INFORMATION_TO_SERVICE.get( + (origin.origin, origin.archive), "" + ) + if service in ESM_SERVICES: + return service + if origin.origin == "Ubuntu": + return origin.component + + return "third-party" def get_service_name(origins: List[apt_package.Origin]) -> Tuple[str, str]: @@ -74,8 +89,8 @@ def filter_security_updates( - packages: List[apt_package.Package] -) -> List[apt_package.Package]: + packages: List[apt_package.Package], +) -> List[apt_package.Version]: """Filters a list of packages looking for available security updates. 
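The security_status.py hunk above replaces list_esm_for_package() with get_origin_for_package(), which maps an installed package to main/restricted/universe/multiverse, esm-infra, esm-apps, third-party, or unknown by inspecting the origins of its installed and candidate versions. A hedged sketch of the per-origin tally that the same hunk later feeds into the summary counters; the wrapper function and the apt.Cache() iteration are assumptions, only get_origin_for_package() comes from the diff:

```python
from collections import defaultdict

import apt  # python-apt, already a dependency of security_status.py

from uaclient.security_status import get_origin_for_package


def count_installed_by_origin() -> dict:
    """Tally installed packages per resolved origin (illustrative helper)."""
    counts = defaultdict(int)
    for package in apt.Cache():
        if package.is_installed:
            counts[get_origin_for_package(package)] += 1
    return dict(counts)  # e.g. {"main": ..., "esm-infra": ..., "third-party": ...}
```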
Checks if the package has a greater version available, and if the origin of @@ -101,10 +116,10 @@ "entitled_services": [], } # type: Dict[str, Any] - status = cfg.status(show_beta=True) - if status["attached"]: + status_dict = status(cfg=cfg, show_beta=True) + if status_dict["attached"]: ua_info["attached"] = True - for service in status["services"]: + for service in status_dict["services"]: if service["name"] in ESM_SERVICES: if service["entitled"] == "yes": ua_info["entitled_services"].append(service["name"]) @@ -136,9 +151,8 @@ update_count = defaultdict(int) # type: DefaultDict[str, int] for package in installed_packages: - esm_services = list_esm_for_package(package) - for service in esm_services: - package_count[service] += 1 + package_origin = get_origin_for_package(package) + package_count[package_origin] += 1 security_upgradable_versions = filter_security_updates(installed_packages) @@ -156,8 +170,15 @@ } ) + summary["num_main_packages"] = package_count["main"] + summary["num_restricted_packages"] = package_count["restricted"] + summary["num_universe_packages"] = package_count["universe"] + summary["num_multiverse_packages"] = package_count["multiverse"] + summary["num_third_party_packages"] = package_count["third-party"] + summary["num_unknown_packages"] = package_count["unknown"] summary["num_esm_infra_packages"] = package_count["esm-infra"] summary["num_esm_apps_packages"] = package_count["esm-apps"] + summary["num_esm_infra_updates"] = update_count["esm-infra"] summary["num_esm_apps_updates"] = update_count["esm-apps"] summary["num_standard_security_updates"] = update_count[ diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/serviceclient.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/serviceclient.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/serviceclient.py 2022-04-01 13:27:49.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/serviceclient.py 2022-05-18 19:44:15.000000000 +0000 @@ -42,7 +42,13 @@ } def request_url( - self, path, data=None, headers=None, method=None, query_params=None + self, + path, + data=None, + headers=None, + method=None, + query_params=None, + potentially_sensitive: bool = True, ): path = path.lstrip("/") if not headers: @@ -66,13 +72,14 @@ headers=headers, method=method, timeout=self.url_timeout, + potentially_sensitive=potentially_sensitive, ) except error.URLError as e: body = None if hasattr(e, "body"): - body = e.body + body = e.body # type: ignore elif hasattr(e, "read"): - body = e.read().decode("utf-8") + body = e.read().decode("utf-8") # type: ignore if body: try: error_details = json.loads( diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/status.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/status.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/status.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/status.py 2022-05-18 19:44:15.000000000 +0000 @@ -1,138 +1,27 @@ -import enum +import copy +import logging import os import sys import textwrap -from typing import Any, Dict, List, Optional, Tuple - -from uaclient.defaults import ( - BASE_UA_URL, - CONFIG_FIELD_ENVVAR_ALLOWLIST, - PRINT_WRAP_WIDTH, +from collections import OrderedDict +from datetime import datetime, timezone +from typing import Any, Dict, List, Optional, Tuple, cast + +from uaclient import event_logger, exceptions, messages, util, version +from uaclient.config import UAConfig +from uaclient.contract import get_available_resources, get_contract_information +from uaclient.defaults import 
ATTACH_FAIL_DATE_FORMAT, PRINT_WRAP_WIDTH +from uaclient.entitlements import entitlement_factory +from uaclient.entitlements.entitlement_status import ( + ContractStatus, + UserFacingAvailability, + UserFacingConfigStatus, + UserFacingStatus, ) -from uaclient.messages import UNATTACHED, NamedMessage, TxtColor - - -@enum.unique -class ApplicationStatus(enum.Enum): - """ - An enum to represent the current application status of an entitlement - """ - - ENABLED = object() - DISABLED = object() - - -@enum.unique -class ContractStatus(enum.Enum): - """ - An enum to represent whether a user is entitled to an entitlement - - (The value of each member is the string that will be used in status - output.) - """ - - ENTITLED = "yes" - UNENTITLED = "no" - - -@enum.unique -class ApplicabilityStatus(enum.Enum): - """ - An enum to represent whether an entitlement could apply to this machine - """ - - APPLICABLE = object() - INAPPLICABLE = object() - - -@enum.unique -class UserFacingAvailability(enum.Enum): - """ - An enum representing whether a service could be available for a machine. - - 'Availability' means whether a service is available to machines with this - architecture, series and kernel. Whether a contract is entitled to use - the specific service is determined by the contract level. - - This enum should only be used in display code, it should not be used in - business logic. - """ - - AVAILABLE = "yes" - UNAVAILABLE = "no" - - -@enum.unique -class UserFacingConfigStatus(enum.Enum): - """ - An enum representing the user-visible config status of UA system. - - This enum will be used in display code and will be written to status.json - """ +from uaclient.messages import TxtColor - INACTIVE = "inactive" # No UA config commands/daemons - ACTIVE = "active" # UA command is running - REBOOTREQUIRED = "reboot-required" # System Reboot required - - -@enum.unique -class UserFacingStatus(enum.Enum): - """ - An enum representing the states we will display in status output. - - This enum should only be used in display code, it should not be used in - business logic. - """ - - ACTIVE = "enabled" - INACTIVE = "disabled" - INAPPLICABLE = "n/a" - UNAVAILABLE = "—" - - -@enum.unique -class CanEnableFailureReason(enum.Enum): - """ - An enum representing the reasons an entitlement can't be enabled. - """ - - NOT_ENTITLED = object() - ALREADY_ENABLED = object() - INAPPLICABLE = object() - IS_BETA = object() - INCOMPATIBLE_SERVICE = object() - INACTIVE_REQUIRED_SERVICES = object() - - -class CanEnableFailure: - def __init__( - self, - reason: CanEnableFailureReason, - message: Optional[NamedMessage] = None, - ) -> None: - self.reason = reason - self.message = message - - -@enum.unique -class CanDisableFailureReason(enum.Enum): - """ - An enum representing the reasons an entitlement can't be disabled. - """ - - ALREADY_DISABLED = object() - ACTIVE_DEPENDENT_SERVICES = object() - NOT_FOUND_DEPENDENT_SERVICE = object() - - -class CanDisableFailure: - def __init__( - self, - reason: CanDisableFailureReason, - message: Optional[NamedMessage] = None, - ) -> None: - self.reason = reason - self.message = message +event = event_logger.get_event_logger() +LOG = logging.getLogger(__name__) ESSENTIAL = "essential" @@ -168,62 +57,6 @@ ADVANCED: TxtColor.OKGREEN + ADVANCED + TxtColor.ENDC, } -PROMPT_YES_NO = """Are you sure? (y/N) """ -NOTICE_FIPS_MANUAL_DISABLE_URL = """\ -FIPS kernel is running in a disabled state. 
- To manually remove fips kernel: https://discourse.ubuntu.com/t/20738 -""" -NOTICE_WRONG_FIPS_METAPACKAGE_ON_CLOUD = """\ -Warning: FIPS kernel is not optimized for your specific cloud. -To fix it, run the following commands: - - 1. sudo ua disable fips - 2. sudo apt-get remove ubuntu-fips - 3. sudo ua enable fips --assume-yes - 4. sudo reboot -""" -PROMPT_FIPS_PRE_ENABLE = ( - """\ -This will install the FIPS packages. The Livepatch service will be unavailable. -Warning: This action can take some time and cannot be undone. -""" - + PROMPT_YES_NO -) -PROMPT_FIPS_UPDATES_PRE_ENABLE = ( - """\ -This will install the FIPS packages including security updates. -Warning: This action can take some time and cannot be undone. -""" - + PROMPT_YES_NO -) -PROMPT_FIPS_CONTAINER_PRE_ENABLE = ( - """\ -Warning: Enabling {title} in a container. - This will install the FIPS packages but not the kernel. - This container must run on a host with {title} enabled to be - compliant. -Warning: This action can take some time and cannot be undone. -""" - + PROMPT_YES_NO -) - -PROMPT_FIPS_PRE_DISABLE = ( - """\ -This will disable the FIPS entitlement but the FIPS packages will remain installed. -""" # noqa - + PROMPT_YES_NO -) - -PROMPT_ENTER_TOKEN = """\ -Enter your token (from {}) to attach this system:""".format( - BASE_UA_URL -) -PROMPT_EXPIRED_ENTER_TOKEN = """\ -Enter your new token to renew UA subscription on this system:""" -PROMPT_UA_SUBSCRIPTION_URL = """\ -Open a browser to: {}/subscribe""".format( - BASE_UA_URL -) STATUS_UNATTACHED_TMPL = "{name: <17}{available: <11}{description}" @@ -237,6 +70,428 @@ # that factor into formats len() calculations STATUS_TMPL = "{name: <17}{entitled: <19}{status: <19}{description}" +DEFAULT_STATUS = { + "_doc": "Content provided in json response is currently considered" + " Experimental and may change", + "_schema_version": "0.1", + "version": version.get_version(), + "machine_id": None, + "attached": False, + "effective": None, + "expires": None, # TODO Will this break something? 
+ "origin": None, + "services": [], + "execution_status": UserFacingConfigStatus.INACTIVE.value, + "execution_details": messages.NO_ACTIVE_OPERATIONS, + "notices": [], + "contract": { + "id": "", + "name": "", + "created_at": "", + "products": [], + "tech_support_level": UserFacingStatus.INAPPLICABLE.value, + }, + "account": { + "name": "", + "id": "", + "created_at": "", + "external_account_ids": [], + }, + "simulated": False, +} # type: Dict[str, Any] + + +def _attached_service_status(ent, inapplicable_resources) -> Dict[str, Any]: + status_details = "" + description_override = None + contract_status = ent.contract_status() + if contract_status == ContractStatus.UNENTITLED: + ent_status = UserFacingStatus.UNAVAILABLE + else: + if ent.name in inapplicable_resources: + ent_status = UserFacingStatus.INAPPLICABLE + description_override = inapplicable_resources[ent.name] + else: + ent_status, details = ent.user_facing_status() + if details: + status_details = details.msg + + blocked_by = [ + { + "name": service.entitlement.name, + "reason_code": service.named_msg.name, + "reason": service.named_msg.msg, + } + for service in ent.blocking_incompatible_services() + ] + + return { + "name": ent.presentation_name, + "description": ent.description, + "entitled": contract_status.value, + "status": ent_status.value, + "status_details": status_details, + "description_override": description_override, + "available": "yes" if ent.name not in inapplicable_resources else "no", + "blocked_by": blocked_by, + } + + +def _attached_status(cfg) -> Dict[str, Any]: + """Return configuration of attached status as a dictionary.""" + + cfg.remove_notice( + "", + messages.NOTICE_DAEMON_AUTO_ATTACH_LOCK_HELD.format(operation=".*"), + ) + cfg.remove_notice("", messages.NOTICE_DAEMON_AUTO_ATTACH_FAILED) + + response = copy.deepcopy(DEFAULT_STATUS) + machineTokenInfo = cfg.machine_token["machineTokenInfo"] + contractInfo = machineTokenInfo["contractInfo"] + tech_support_level = UserFacingStatus.INAPPLICABLE.value + response.update( + { + "version": version.get_version(features=cfg.features), + "machine_id": machineTokenInfo["machineId"], + "attached": True, + "origin": contractInfo.get("origin"), + "notices": cfg.read_cache("notices") or [], + "contract": { + "id": contractInfo["id"], + "name": contractInfo["name"], + "created_at": contractInfo.get("createdAt", ""), + "products": contractInfo.get("products", []), + "tech_support_level": tech_support_level, + }, + "account": { + "name": cfg.accounts[0]["name"], + "id": cfg.accounts[0]["id"], + "created_at": cfg.accounts[0].get("createdAt", ""), + "external_account_ids": cfg.accounts[0].get( + "externalAccountIDs", [] + ), + }, + } + ) + if contractInfo.get("effectiveTo"): + response["expires"] = cfg.contract_expiry_datetime + if contractInfo.get("effectiveFrom"): + response["effective"] = contractInfo["effectiveFrom"] + + resources = cfg.machine_token.get("availableResources") + if not resources: + resources = get_available_resources(cfg) + + inapplicable_resources = { + resource["name"]: resource.get("description") + for resource in sorted(resources, key=lambda x: x.get("name", "")) + if not resource.get("available") + } + + for resource in resources: + try: + ent_cls = entitlement_factory( + cfg=cfg, name=resource.get("name", "") + ) + except exceptions.EntitlementNotFoundError: + continue + ent = ent_cls(cfg) + response["services"].append( + _attached_service_status(ent, inapplicable_resources) + ) + response["services"].sort(key=lambda x: x.get("name", "")) + + support 
= cfg.entitlements.get("support", {}).get("entitlement") + if support: + supportLevel = support.get("affordances", {}).get("supportLevel") + if supportLevel: + response["contract"]["tech_support_level"] = supportLevel + return response + + +def _unattached_status(cfg: UAConfig) -> Dict[str, Any]: + """Return unattached status as a dict.""" + + response = copy.deepcopy(DEFAULT_STATUS) + response["version"] = version.get_version(features=cfg.features) + + resources = get_available_resources(cfg) + for resource in resources: + if resource.get("available"): + available = UserFacingAvailability.AVAILABLE.value + else: + available = UserFacingAvailability.UNAVAILABLE.value + try: + ent_cls = entitlement_factory( + cfg=cfg, name=resource.get("name", "") + ) + except exceptions.EntitlementNotFoundError: + LOG.debug( + messages.AVAILABILITY_FROM_UNKNOWN_SERVICE.format( + service=resource.get("name", "without a 'name' key") + ) + ) + continue + + response["services"].append( + { + "name": resource.get("presentedAs", resource["name"]), + "description": ent_cls.description, + "available": available, + } + ) + response["services"].sort(key=lambda x: x.get("name", "")) + + return response + + +def _handle_beta_resources(cfg, show_beta, response) -> Dict[str, Any]: + """Remove beta services from response dict if needed""" + config_allow_beta = util.is_config_value_true( + config=cfg.cfg, path_to_value="features.allow_beta" + ) + show_beta |= config_allow_beta + if show_beta: + return response + + new_response = copy.deepcopy(response) + + released_resources = [] + for resource in new_response.get("services", {}): + resource_name = resource["name"] + try: + ent_cls = entitlement_factory(cfg=cfg, name=resource_name) + except exceptions.EntitlementNotFoundError: + """ + Here we cannot know the status of a service, + since it is not listed as a valid entitlement. + Therefore, we keep this service in the list, since + we cannot validate if it is a beta service or not. + """ + released_resources.append(resource) + continue + + enabled_status = UserFacingStatus.ACTIVE.value + if not ent_cls.is_beta or resource.get("status", "") == enabled_status: + released_resources.append(resource) + + if released_resources: + new_response["services"] = released_resources + + return new_response + + +def _get_config_status(cfg) -> Dict[str, Any]: + """Return a dict with execution_status, execution_details and notices. + + Values for execution_status will be one of UserFacingConfigStatus + enum: + inactive, active, reboot-required + execution_details will provide more details about that state. + notices is a list of tuples with label and description items. 
+    """
+    userStatus = UserFacingConfigStatus
+    status_val = userStatus.INACTIVE.value
+    status_desc = messages.NO_ACTIVE_OPERATIONS
+    (lock_pid, lock_holder) = cfg.check_lock_info()
+    notices = cfg.read_cache("notices") or []
+    if lock_pid > 0:
+        status_val = userStatus.ACTIVE.value
+        status_desc = messages.LOCK_HELD.format(
+            pid=lock_pid, lock_holder=lock_holder
+        ).msg
+    elif os.path.exists(cfg.data_path("marker-reboot-cmds")):
+        status_val = userStatus.REBOOTREQUIRED.value
+        operation = "configuration changes"
+        for label, description in notices:
+            if label == "Reboot required":
+                operation = description
+                break
+        status_desc = messages.ENABLE_REBOOT_REQUIRED_TMPL.format(
+            operation=operation
+        )
+    return {
+        "execution_status": status_val,
+        "execution_details": status_desc,
+        "notices": notices,
+        "config_path": cfg.cfg_path,
+        "config": cfg.cfg,
+    }
+
+
+def status(cfg: UAConfig, show_beta: bool = False) -> Dict[str, Any]:
+    """Return status as a dict, using a cache for non-root users
+
+    When unattached, get available resources from the contract service
+    to report detailed availability of different resources for this
+    machine.
+
+    Write the status-cache when called by root.
+    """
+    if os.getuid() != 0:
+        response = cast("Dict[str, Any]", cfg.read_cache("status-cache"))
+        if not response:
+            response = _unattached_status(cfg)
+    elif not cfg.is_attached:
+        response = _unattached_status(cfg)
+    else:
+        response = _attached_status(cfg)
+
+    response.update(_get_config_status(cfg))
+
+    if os.getuid() == 0:
+        cfg.write_cache("status-cache", response)
+
+    # Try to remove fix reboot notices if not applicable
+    if not util.should_reboot():
+        cfg.remove_notice(
+            "",
+            messages.ENABLE_REBOOT_REQUIRED_TMPL.format(
+                operation="fix operation"
+            ),
+        )
+
+    response = _handle_beta_resources(cfg, show_beta, response)
+
+    return response
+
+
+def _get_entitlement_information(
+    entitlements: List[Dict[str, Any]], entitlement_name: str
+) -> Dict[str, Any]:
+    """Extract information from the entitlements array."""
+    for entitlement in entitlements:
+        if entitlement.get("type") == entitlement_name:
+            return {
+                "entitled": "yes" if entitlement.get("entitled") else "no",
+                "auto_enabled": "yes"
+                if entitlement.get("obligations", {}).get("enableByDefault")
+                else "no",
+                "affordances": entitlement.get("affordances", {}),
+            }
+    return {"entitled": "no", "auto_enabled": "no", "affordances": {}}
+
+
+def simulate_status(
+    cfg, token: str, show_beta: bool = False
+) -> Tuple[Dict[str, Any], int]:
+    """Get a status dictionary based on a token.
+ + Returns a tuple with the status dictionary and an integer value - 0 for + success, 1 for failure + """ + ret = 0 + response = copy.deepcopy(DEFAULT_STATUS) + + try: + contract_information = get_contract_information(cfg, token) + except exceptions.ContractAPIError as e: + if hasattr(e, "code") and e.code == 401: + raise exceptions.UserFacingError( + msg=messages.ATTACH_INVALID_TOKEN.msg, + msg_code=messages.ATTACH_INVALID_TOKEN.name, + ) + raise e + + contract_info = contract_information.get("contractInfo", {}) + account_info = contract_information.get("accountInfo", {}) + + response.update( + { + "version": version.get_version(features=cfg.features), + "contract": { + "id": contract_info.get("id", ""), + "name": contract_info.get("name", ""), + "created_at": contract_info.get("createdAt", ""), + "products": contract_info.get("products", []), + }, + "account": { + "name": account_info.get("name", ""), + "id": account_info.get("id"), + "created_at": account_info.get("createdAt", ""), + "external_account_ids": account_info.get( + "externalAccountIDs", [] + ), + }, + "simulated": True, + } + ) + + now = datetime.now(timezone.utc) + if contract_info.get("effectiveTo"): + response["expires"] = contract_info.get("effectiveTo") + expiration_datetime = util.parse_rfc3339_date(response["expires"]) + delta = expiration_datetime - now + if delta.total_seconds() <= 0: + message = messages.ATTACH_FORBIDDEN_EXPIRED.format( + contract_id=response["contract"]["id"], + date=expiration_datetime.strftime(ATTACH_FAIL_DATE_FORMAT), + ) + event.error(error_msg=message.msg, error_code=message.name) + event.info("This token is not valid.\n" + message.msg + "\n") + ret = 1 + if contract_info.get("effectiveFrom"): + response["effective"] = contract_info.get("effectiveFrom") + effective_datetime = util.parse_rfc3339_date(response["effective"]) + delta = now - effective_datetime + if delta.total_seconds() <= 0: + message = messages.ATTACH_FORBIDDEN_NOT_YET.format( + contract_id=response["contract"]["id"], + date=effective_datetime.strftime(ATTACH_FAIL_DATE_FORMAT), + ) + event.error(error_msg=message.msg, error_code=message.name) + event.info("This token is not valid.\n" + message.msg + "\n") + ret = 1 + + status_cache = cfg.read_cache("status-cache") + if status_cache: + resources = status_cache.get("services") + else: + resources = get_available_resources(cfg) + + entitlements = contract_info.get("resourceEntitlements", []) + + inapplicable_resources = [ + resource["name"] + for resource in sorted(resources, key=lambda x: x["name"]) + if not resource["available"] + ] + + for resource in resources: + entitlement_name = resource.get("name", "") + try: + ent_cls = entitlement_factory(cfg=cfg, name=entitlement_name) + except exceptions.EntitlementNotFoundError: + continue + ent = ent_cls(cfg=cfg) + entitlement_information = _get_entitlement_information( + entitlements, entitlement_name + ) + response["services"].append( + { + "name": resource.get("presentedAs", ent.name), + "description": ent.description, + "entitled": entitlement_information["entitled"], + "auto_enabled": entitlement_information["auto_enabled"], + "available": "yes" + if ent.name not in inapplicable_resources + else "no", + } + ) + response["services"].sort(key=lambda x: x.get("name", "")) + + support = _get_entitlement_information(entitlements, "support") + if support["entitled"]: + supportLevel = support["affordances"].get("supportLevel") + if supportLevel: + response["contract"]["tech_support_level"] = supportLevel + + 
response.update(_get_config_status(cfg)) + response = _handle_beta_resources(cfg, show_beta, response) + + return response, ret + def colorize(string: str) -> str: """Return colorized string if using a tty, else original string.""" @@ -318,7 +573,7 @@ ] for service in status["services"]: content.append(STATUS_UNATTACHED_TMPL.format(**service)) - content.extend(["", UNATTACHED.msg]) + content.extend(["", messages.UNATTACHED.msg]) return "\n".join(content) content = [STATUS_HEADER] @@ -364,24 +619,56 @@ return "\n".join(content) -def format_machine_readable_output(status: Dict[str, Any]) -> Dict[str, Any]: - status["environment_vars"] = [ - {"name": name, "value": value} - for name, value in sorted(os.environ.items()) - if name.lower() in CONFIG_FIELD_ENVVAR_ALLOWLIST - or name.startswith("UA_FEATURES") - or name == "UA_CONFIG_FILE" - ] +def help(cfg, name): + """Return help information from an uaclient service as a dict - if not status.get("simulated"): - available_services = [ - service - for service in status.get("services", []) - if service.get("available", "yes") == "yes" - ] - status["services"] = available_services + :param name: Name of the service for which to return help data. + + :raises: UserFacingError when no help is available. + """ + resources = get_available_resources(cfg) + help_resource = None + + # We are using an OrderedDict here to guarantee + # that if we need to print the result of this + # dict, the order of insertion will always be respected + response_dict = OrderedDict() + response_dict["name"] = name + + for resource in resources: + if resource["name"] == name or resource.get("presentedAs") == name: + try: + help_ent_cls = entitlement_factory( + cfg=cfg, name=resource["name"] + ) + except exceptions.EntitlementNotFoundError: + continue + help_resource = resource + help_ent = help_ent_cls(cfg) + break + + if help_resource is None: + raise exceptions.UserFacingError( + "No help available for '{}'".format(name) + ) + + if cfg.is_attached: + service_status = _attached_service_status(help_ent, {}) + status_msg = service_status["status"] + + response_dict["entitled"] = service_status["entitled"] + response_dict["status"] = status_msg + + if status_msg == "enabled" and help_ent_cls.is_beta: + response_dict["beta"] = True + + else: + if help_resource["available"]: + available = UserFacingAvailability.AVAILABLE.value + else: + available = UserFacingAvailability.UNAVAILABLE.value - # We don't need the origin info in the json output - status.pop("origin", "") + response_dict["available"] = available - return status + response_dict["help"] = help_ent.help_info + return response_dict diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_actions.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_actions.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_actions.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_actions.py 2022-05-18 19:44:15.000000000 +0000 @@ -25,7 +25,7 @@ ) @mock.patch(M_PATH + "identity.get_instance_id", return_value="my-iid") @mock.patch("uaclient.jobs.update_messaging.update_apt_and_motd_messages") - @mock.patch(M_PATH + "config.UAConfig.status") + @mock.patch("uaclient.status.status") @mock.patch(M_PATH + "contract.request_updated_contract") @mock.patch(M_PATH + "config.UAConfig.write_cache") def test_attach_with_token( @@ -50,7 +50,7 @@ else: attach_with_token(cfg, "token", False) if expect_status_call: - assert [mock.call()] == m_status.call_args_list + assert 
[mock.call(cfg=cfg)] == m_status.call_args_list if not expect_status_call: assert [ mock.call("instance-id", "my-iid") diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_apt.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_apt.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_apt.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_apt.py 2022-05-18 19:44:15.000000000 +0000 @@ -12,8 +12,8 @@ from uaclient import apt, exceptions, messages, util from uaclient.apt import ( APT_AUTH_COMMENT, - APT_CONFIG_PROXY_HTTP, - APT_CONFIG_PROXY_HTTPS, + APT_CONFIG_GLOBAL_PROXY_HTTP, + APT_CONFIG_GLOBAL_PROXY_HTTPS, APT_KEYS_DIR, APT_PROXY_CONF_FILE, APT_RETRIES, @@ -28,6 +28,7 @@ remove_apt_list_files, remove_auth_apt_repo, remove_repo_from_apt_auth_file, + run_apt_cache_policy_command, run_apt_update_command, setup_apt_proxy, ) @@ -167,7 +168,7 @@ ), ( 100, - "E: Failed to fetch ... HttpError401 on trusty", + "E: Failed to fetch ... HttpError401 on xenial", "Invalid APT credentials provided for http://fakerepo", ), ( @@ -916,6 +917,45 @@ expected_message = "\n".join(output_list) + "." assert expected_message == excinfo.value.msg + @mock.patch("uaclient.apt.util.subp") + def test_run_update_command_clean_apt_cache_policy_cache(self, m_subp): + m_subp.side_effect = [ + ("policy1", ""), + ("update", ""), + ("policy2", ""), + ] + + assert "policy1" == run_apt_cache_policy_command() + # Confirming that caching is happening + assert "policy1" == run_apt_cache_policy_command() + + run_apt_update_command() + + # Confirm cache was cleared + assert "policy2" == run_apt_cache_policy_command() + run_apt_cache_policy_command.cache_clear() + + @mock.patch("uaclient.apt.util.subp") + def test_failed_run_update_command_clean_apt_cache_policy_cache( + self, m_subp + ): + m_subp.side_effect = [ + ("policy1", ""), + exceptions.UserFacingError("test"), + ("policy2", ""), + ] + + assert "policy1" == run_apt_cache_policy_command() + # Confirming that caching is happening + assert "policy1" == run_apt_cache_policy_command() + + with pytest.raises(exceptions.UserFacingError): + run_apt_update_command() + + # Confirm cache was cleared + assert "policy2" == run_apt_cache_policy_command() + run_apt_cache_policy_command.cache_clear() + class TestAptProxyConfig: @pytest.mark.parametrize( @@ -929,12 +969,12 @@ mock.call( APT_PROXY_CONF_FILE, messages.APT_PROXY_CONFIG_HEADER - + APT_CONFIG_PROXY_HTTP.format( + + APT_CONFIG_GLOBAL_PROXY_HTTP.format( proxy_url="mock_http_proxy" ), ) ], - messages.SETTING_SERVICE_PROXY.format(service="APT"), + messages.SETTING_SERVICE_PROXY_SCOPE.format(scope="global"), ), ( {"https_proxy": "mock_https_proxy"}, @@ -943,12 +983,12 @@ mock.call( APT_PROXY_CONF_FILE, messages.APT_PROXY_CONFIG_HEADER - + APT_CONFIG_PROXY_HTTPS.format( + + APT_CONFIG_GLOBAL_PROXY_HTTPS.format( proxy_url="mock_https_proxy" ), ) ], - messages.SETTING_SERVICE_PROXY.format(service="APT"), + messages.SETTING_SERVICE_PROXY_SCOPE.format(scope="global"), ), ( { @@ -960,15 +1000,15 @@ mock.call( APT_PROXY_CONF_FILE, messages.APT_PROXY_CONFIG_HEADER - + APT_CONFIG_PROXY_HTTP.format( + + APT_CONFIG_GLOBAL_PROXY_HTTP.format( proxy_url="mock_http_proxy" ) - + APT_CONFIG_PROXY_HTTPS.format( + + APT_CONFIG_GLOBAL_PROXY_HTTPS.format( proxy_url="mock_https_proxy" ), ) ], - messages.SETTING_SERVICE_PROXY.format(service="APT"), + messages.SETTING_SERVICE_PROXY_SCOPE.format(scope="global"), ), ], ) diff -Nru 
ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_cli_attach.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_cli_attach.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_cli_attach.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_cli_attach.py 2022-05-18 19:44:15.000000000 +0000 @@ -7,7 +7,7 @@ import pytest import yaml -from uaclient import event_logger, messages +from uaclient import event_logger, messages, status from uaclient.cli import ( UA_AUTH_TOKEN_URL, action_attach, @@ -43,7 +43,7 @@ }, } -ENTITLED_TRUSTY_ESM_RESOURCE = { +ENTITLED_EXAMPLE_ESM_RESOURCE = { "obligations": {"enableByDefault": True}, "type": "esm-infra", "directives": { @@ -59,12 +59,12 @@ "s390x", "x86_64", ], - "series": ["precise", "trusty", "xenial", "bionic"], + "series": ["series-example-1", "series-example-2", "series-example-3"], }, "series": { - "trusty": { + "series-example-1": { "directives": { - "suites": ["trusty-infra-security", "trusty-infra-updates"] + "suites": ["example-infra-security", "example-infra-updates"] } } }, @@ -74,7 +74,7 @@ ENTITLED_MACHINE_TOKEN = copy.deepcopy(BASIC_MACHINE_TOKEN) ENTITLED_MACHINE_TOKEN["machineTokenInfo"]["contractInfo"][ "resourceEntitlements" -] = [ENTITLED_TRUSTY_ESM_RESOURCE] +] = [ENTITLED_EXAMPLE_ESM_RESOURCE] @mock.patch(M_PATH + "os.getuid") @@ -244,7 +244,7 @@ ) @mock.patch("uaclient.util.should_reboot", return_value=False) @mock.patch("uaclient.config.UAConfig.remove_notice") - @mock.patch("uaclient.contract.get_available_resources") + @mock.patch("uaclient.status.get_available_resources") @mock.patch("uaclient.jobs.update_messaging.update_apt_and_motd_messages") @mock.patch(M_PATH + "contract.request_updated_contract") def test_status_updated_when_auto_enable_fails( @@ -264,7 +264,7 @@ token = "contract-token" args = mock.MagicMock(token=token, attach_config=None) cfg = FakeConfig() - cfg.status() # persist unattached status + status.status(cfg=cfg) # persist unattached status # read persisted status cache from disk orig_unattached_status = cfg.read_cache("status-cache") @@ -273,8 +273,10 @@ raise error_class(error_str) request_updated_contract.side_effect = fake_request_updated_contract - ret = action_attach(args, cfg=cfg) - assert 1 == ret + with pytest.raises(SystemExit) as excinfo: + main_error_handler(action_attach)(args, cfg) + + assert 1 == excinfo.value.code assert cfg.is_attached # Assert updated status cache is written to disk assert orig_unattached_status != cfg.read_cache( @@ -358,7 +360,7 @@ @pytest.mark.parametrize("auto_enable", (True, False)) @mock.patch("uaclient.util.should_reboot", return_value=False) @mock.patch("uaclient.config.UAConfig.remove_notice") - @mock.patch("uaclient.contract.get_available_resources") + @mock.patch("uaclient.status.get_available_resources") @mock.patch("uaclient.jobs.update_messaging.update_apt_and_motd_messages") def test_auto_enable_passed_through_to_request_updated_contract( self, @@ -472,10 +474,10 @@ @mock.patch("uaclient.util.handle_unicode_characters") @mock.patch("uaclient.status.format_tabular") @mock.patch(M_PATH + "actions.status") - @mock.patch("uaclient.jobs.disable_license_check_if_applicable") + @mock.patch("uaclient.daemon.stop") def test_attach_config_enable_services( self, - _m_disable_license_job, + _m_daemon_stop, m_status, m_format_tabular, m_handle_unicode, @@ -583,11 +585,14 @@ ) fake_stdout = io.StringIO() - with contextlib.redirect_stdout(fake_stdout): - with mock.patch.object( - event, 
"_event_logger_mode", event_logger.EventLoggerMode.JSON - ): - main_error_handler(action_attach)(args, cfg) + with pytest.raises(SystemExit): + with contextlib.redirect_stdout(fake_stdout): + with mock.patch.object( + event, + "_event_logger_mode", + event_logger.EventLoggerMode.JSON, + ): + main_error_handler(action_attach)(args, cfg) expected_msg = messages.ATTACH_FAILURE_DEFAULT_SERVICES expected = { @@ -623,53 +628,59 @@ parser = attach_parser(mock.Mock()) assert "Flags" == parser._optionals.title - def test_attach_parser_stores_token(self, _m_resources): - full_parser = get_parser() + def test_attach_parser_stores_token(self, _m_resources, FakeConfig): + full_parser = get_parser(FakeConfig()) with mock.patch("sys.argv", ["ua", "attach", "token"]): args = full_parser.parse_args() assert "token" == args.token - def test_attach_parser_allows_empty_required_token(self, _m_resources): + def test_attach_parser_allows_empty_required_token( + self, _m_resources, FakeConfig + ): """Token required but parse_args allows none due to action_attach""" - full_parser = get_parser() + full_parser = get_parser(FakeConfig()) with mock.patch("sys.argv", ["ua", "attach"]): args = full_parser.parse_args() assert None is args.token def test_attach_parser_help_points_to_ua_contract_dashboard_url( - self, _m_resources, capsys + self, _m_resources, capsys, FakeConfig ): """Contracts' dashboard URL is referenced by ua attach --help.""" - full_parser = get_parser() + full_parser = get_parser(FakeConfig()) with mock.patch("sys.argv", ["ua", "attach", "--help"]): with pytest.raises(SystemExit): full_parser.parse_args() assert UA_AUTH_TOKEN_URL in capsys.readouterr()[0] def test_attach_parser_accepts_and_stores_no_auto_enable( - self, _m_resources + self, _m_resources, FakeConfig ): - full_parser = get_parser() + full_parser = get_parser(FakeConfig()) with mock.patch( "sys.argv", ["ua", "attach", "--no-auto-enable", "token"] ): args = full_parser.parse_args() assert not args.auto_enable - def test_attach_parser_defaults_to_auto_enable(self, _m_resources): - full_parser = get_parser() + def test_attach_parser_defaults_to_auto_enable( + self, _m_resources, FakeConfig + ): + full_parser = get_parser(FakeConfig()) with mock.patch("sys.argv", ["ua", "attach", "token"]): args = full_parser.parse_args() assert args.auto_enable - def test_attach_parser_default_to_cli_format(self, _m_resources): - full_parser = get_parser() + def test_attach_parser_default_to_cli_format( + self, _m_resources, FakeConfig + ): + full_parser = get_parser(FakeConfig()) with mock.patch("sys.argv", ["ua", "attach", "token"]): args = full_parser.parse_args() assert "cli" == args.format - def test_attach_parser_accepts_format_flag(self, _m_resources): - full_parser = get_parser() + def test_attach_parser_accepts_format_flag(self, _m_resources, FakeConfig): + full_parser = get_parser(FakeConfig()) with mock.patch( "sys.argv", ["ua", "attach", "token", "--format", "json"] ): diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_cli_auto_attach.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_cli_auto_attach.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_cli_auto_attach.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_cli_auto_attach.py 2022-05-18 19:44:15.000000000 +0000 @@ -274,14 +274,16 @@ class TestParser: @mock.patch(M_PATH + "contract.get_available_resources") - def test_auto_attach_parser_updates_parser_config(self, _m_resources): + def 
test_auto_attach_parser_updates_parser_config( + self, _m_resources, FakeConfig + ): """Update the parser configuration for 'auto-attach'.""" m_parser = auto_attach_parser(mock.Mock()) assert "ua auto-attach [flags]" == m_parser.usage assert "auto-attach" == m_parser.prog assert "Flags" == m_parser._optionals.title - full_parser = get_parser() + full_parser = get_parser(FakeConfig()) with mock.patch("sys.argv", ["ua", "auto-attach"]): args = full_parser.parse_args() assert "auto-attach" == args.command diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_cli_collect_logs.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_cli_collect_logs.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_cli_collect_logs.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_cli_collect_logs.py 2022-05-18 19:44:15.000000000 +0000 @@ -92,7 +92,7 @@ "-u", "ua-reboot-cmds.service", "-u", - "ua-license-check.service", + "ubuntu-advantage.service", "-u", "cloud-init-local.service", "-u", @@ -114,13 +114,7 @@ ["systemctl", "status", "ua-reboot-cmds.service"], rcs=[0, 3] ), mock.call( - ["systemctl", "status", "ua-license-check.path"], rcs=[0, 3] - ), - mock.call( - ["systemctl", "status", "ua-license-check.service"], rcs=[0, 3] - ), - mock.call( - ["systemctl", "status", "ua-license-check.timer"], rcs=[0, 3] + ["systemctl", "status", "ubuntu-advantage.service"], rcs=[0, 3] ), ] @@ -130,13 +124,15 @@ class TestParser: @mock.patch(M_PATH + "contract.get_available_resources") - def test_collect_logs_parser_updates_parser_config(self, _m_resources): + def test_collect_logs_parser_updates_parser_config( + self, _m_resources, FakeConfig + ): """Update the parser configuration for 'collect-logs'.""" m_parser = collect_logs_parser(mock.Mock()) assert "ua collect-logs [flags]" == m_parser.usage assert "collect-logs" == m_parser.prog - full_parser = get_parser() + full_parser = get_parser(FakeConfig()) with mock.patch("sys.argv", ["ua", "collect-logs"]): args = full_parser.parse_args() assert "collect-logs" == args.command diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_cli_config_set.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_cli_config_set.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_cli_config_set.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_cli_config_set.py 2022-05-18 19:44:15.000000000 +0000 @@ -1,8 +1,9 @@ import mock import pytest -from uaclient import status, util -from uaclient.cli import action_config_set, main +from uaclient import apt, messages, util +from uaclient.cli import action_config_set, configure_apt_proxy, main +from uaclient.entitlements.entitlement_status import ApplicationStatus from uaclient.exceptions import NonRootUserError, UserFacingError HELP_OUTPUT = """\ @@ -13,13 +14,13 @@ positional arguments: key_value_pair key=value pair to configure for Ubuntu Advantage services. Key must be one of: http_proxy, https_proxy, apt_http_proxy, - apt_https_proxy, update_messaging_timer, - update_status_timer, metering_timer + apt_https_proxy, ua_apt_http_proxy, ua_apt_https_proxy, + global_apt_http_proxy, global_apt_https_proxy, + update_messaging_timer, update_status_timer, metering_timer Flags: -h, --help show this help message and exit """ - M_LIVEPATCH = "uaclient.entitlements.livepatch." 
@@ -33,19 +34,25 @@ ( "k=v", " must be one of: http_proxy, https_proxy," - " apt_http_proxy, apt_https_proxy, update_messaging_timer," + " apt_http_proxy, apt_https_proxy, ua_apt_http_proxy," + " ua_apt_https_proxy, global_apt_http_proxy," + " global_apt_https_proxy, update_messaging_timer," " update_status_timer, metering_timer", ), ( "http_proxys=", " must be one of: http_proxy, https_proxy," - " apt_http_proxy, apt_https_proxy, update_messaging_timer," + " apt_http_proxy, apt_https_proxy, ua_apt_http_proxy," + " ua_apt_https_proxy, global_apt_http_proxy," + " global_apt_https_proxy, update_messaging_timer," " update_status_timer, metering_timer", ), ( "=value", " must be one of: http_proxy, https_proxy," - " apt_http_proxy, apt_https_proxy, update_messaging_timer," + " apt_http_proxy, apt_https_proxy, ua_apt_http_proxy," + " ua_apt_https_proxy, global_apt_http_proxy," + " global_apt_https_proxy, update_messaging_timer," " update_status_timer, metering_timer", ), ), @@ -112,12 +119,12 @@ """ if livepatch_enabled: livepatch_status.return_value = ( - status.ApplicationStatus.ENABLED, + ApplicationStatus.ENABLED, "", ) else: livepatch_status.return_value = ( - status.ApplicationStatus.DISABLED, + ApplicationStatus.DISABLED, "", ) args = mock.MagicMock(key_value_pair="{}={}".format(key, value)) @@ -140,33 +147,56 @@ assert [] == configure_livepatch_proxy.call_args_list @pytest.mark.parametrize( - "key,value", + "key,value,scope,protocol_type", ( - ("apt_http_proxy", "http://proxy"), - ("apt_https_proxy", "https://proxy"), + ( + "apt_http_proxy", + "http://proxy", + apt.AptProxyScope.GLOBAL, + "http", + ), + ( + "apt_https_proxy", + "https://proxy", + apt.AptProxyScope.GLOBAL, + "https", + ), ), ) - @mock.patch("uaclient.apt.setup_apt_proxy") + @mock.patch("uaclient.cli.configure_apt_proxy") @mock.patch("uaclient.util.validate_proxy") - def test_set_apt_http_proxy_and_apt_https_proxy_persists_config_changes( + def test_set_apt_http_proxy_and_apt_https_proxy_prints_warning( self, validate_proxy, - setup_apt_proxy, + configure_apt_proxy, _m_resources, _getuid, _write_cfg, key, value, + scope, + protocol_type, FakeConfig, + capsys, ): - """Set calls setup_apt_proxy, persists config and exits 0.""" + """Set calls setup_apt_proxy but prints warning + and sets global_* and exits 0.""" args = mock.MagicMock(key_value_pair="{}={}".format(key, value)) cfg = FakeConfig() action_config_set(args, cfg=cfg) - kwargs = {"http_proxy": None, "https_proxy": None} + out, err = capsys.readouterr() + global_eq = "global_" + key + assert [ + mock.call(cfg, apt.AptProxyScope.GLOBAL, global_eq, value) + ] == configure_apt_proxy.call_args_list + assert ( + messages.WARNING_APT_PROXY_SETUP.format( + protocol_type=protocol_type + ) + in out + ) + proxy_type = key.replace("apt_", "") - kwargs[proxy_type] = value - assert [mock.call(**kwargs)] == setup_apt_proxy.call_args_list if proxy_type == "http_proxy": url = util.PROXY_VALIDATION_APT_HTTP_URL else: @@ -175,6 +205,312 @@ mock.call(proxy_type.replace("_proxy", ""), value, url) ] == validate_proxy.call_args_list + assert getattr(cfg, global_eq) == value + assert cfg.ua_apt_https_proxy is None + assert cfg.ua_apt_http_proxy is None + + @pytest.mark.parametrize( + "key,value,scope,apt_equ,ua_apt_equ", + ( + ( + "global_apt_http_proxy", + "http://proxy", + apt.AptProxyScope.GLOBAL, + None, + None, + ), + ( + "global_apt_https_proxy", + "https://proxy", + apt.AptProxyScope.GLOBAL, + "https://proxy", + None, + ), + ( + "global_apt_http_proxy", + "https://proxy", + 
apt.AptProxyScope.GLOBAL, + "https://proxy", + None, + ), + ( + "global_apt_http_proxy", + "https://proxy", + apt.AptProxyScope.GLOBAL, + None, + "https://proxy", + ), + ( + "global_apt_https_proxy", + "https://proxy", + apt.AptProxyScope.GLOBAL, + None, + "https://proxy", + ), + ( + "global_apt_https_proxy", + "https://proxy", + apt.AptProxyScope.GLOBAL, + "https://proxy", + "https://proxy", + ), + ( + "global_apt_http_proxy", + "https://proxy", + apt.AptProxyScope.GLOBAL, + "https://proxy", + "https://proxy", + ), + ( + "global_apt_http_proxy", + "", + apt.AptProxyScope.GLOBAL, + "https://proxy", + "https://proxy", + ), + ), + ) + @mock.patch("uaclient.cli.configure_apt_proxy") + @mock.patch("uaclient.util.validate_proxy") + def test_set_global_apt_http_and_global_apt_https_proxy( + self, + validate_proxy, + configure_apt_proxy, + _m_resources, + _getuid, + _write_cfg, + key, + value, + scope, + apt_equ, + ua_apt_equ, + FakeConfig, + capsys, + ): + """Test setting of global_apt_* proxies""" + args = mock.MagicMock(key_value_pair="{}={}".format(key, value)) + cfg = FakeConfig() + cfg.ua_apt_https_proxy = ua_apt_equ + cfg.ua_apt_http_proxy = ua_apt_equ + action_config_set(args, cfg=cfg) + out, err = capsys.readouterr() # will need to check output + if ua_apt_equ: + assert [ + mock.call(cfg, scope, key, value) + ] == configure_apt_proxy.call_args_list + assert ( + messages.WARNING_APT_PROXY_OVERWRITE.format( + current_proxy="global apt", previous_proxy="ua scoped apt" + ) + in out + ) + else: + assert [ + mock.call(cfg, apt.AptProxyScope.GLOBAL, key, value) + ] == configure_apt_proxy.call_args_list + + proxy_type = key.replace("global_apt_", "") + if proxy_type == "http_proxy": + url = util.PROXY_VALIDATION_APT_HTTP_URL + else: + url = util.PROXY_VALIDATION_APT_HTTPS_URL + assert [ + mock.call(proxy_type.replace("_proxy", ""), value, url) + ] == validate_proxy.call_args_list + assert cfg.ua_apt_https_proxy is None + assert cfg.ua_apt_http_proxy is None + + @pytest.mark.parametrize( + "key,value,scope,apt_equ,global_apt_equ", + ( + ( + "ua_apt_http_proxy", + "http://proxy", + apt.AptProxyScope.UACLIENT, + None, + None, + ), + ( + "ua_apt_https_proxy", + "http://proxy", + apt.AptProxyScope.UACLIENT, + "https://proxy", + "https://proxy", + ), + ( + "ua_apt_https_proxy", + "http://proxy", + apt.AptProxyScope.UACLIENT, + "https://proxy", + None, + ), + ( + "ua_apt_http_proxy", + "http://proxy", + apt.AptProxyScope.UACLIENT, + "https://proxy", + None, + ), + ( + "ua_apt_https_proxy", + "http://proxy", + apt.AptProxyScope.UACLIENT, + None, + "https://proxy", + ), + ( + "ua_apt_http_proxy", + "http://proxy", + apt.AptProxyScope.UACLIENT, + None, + "https://proxy", + ), + ( + "ua_apt_http_proxy", + "http://proxy", + apt.AptProxyScope.UACLIENT, + "https://proxy", + "https://proxy", + ), + ( + "ua_apt_https_proxy", + "", + apt.AptProxyScope.UACLIENT, + "https://proxy", + "https://proxy", + ), + ), + ) + @mock.patch("uaclient.cli.configure_apt_proxy") + @mock.patch("uaclient.util.validate_proxy") + def test_set_ua_apt_http_and_ua_apt_https_proxy( + self, + validate_proxy, + configure_apt_proxy, + _m_resources, + _getuid, + _write_cfg, + key, + value, + scope, + apt_equ, + global_apt_equ, + FakeConfig, + capsys, + ): + """Test setting of ua_apt_* proxies""" + args = mock.MagicMock(key_value_pair="{}={}".format(key, value)) + cfg = FakeConfig() + cfg.global_apt_http_proxy = global_apt_equ + cfg.global_apt_https_proxy = global_apt_equ + action_config_set(args, cfg=cfg) + out, err = capsys.readouterr() # 
will need to check output + if global_apt_equ: + assert [ + mock.call(cfg, scope, key, value) + ] == configure_apt_proxy.call_args_list + assert ( + messages.WARNING_APT_PROXY_OVERWRITE.format( + current_proxy="ua scoped apt", previous_proxy="global apt" + ) + in out + ) + else: + assert [ + mock.call(cfg, apt.AptProxyScope.UACLIENT, key, value) + ] == configure_apt_proxy.call_args_list + + proxy_type = key.replace("ua_apt_", "") + if proxy_type == "http_proxy": + url = util.PROXY_VALIDATION_APT_HTTP_URL + else: + url = util.PROXY_VALIDATION_APT_HTTPS_URL + assert [ + mock.call(proxy_type.replace("_proxy", ""), value, url) + ] == validate_proxy.call_args_list + assert cfg.global_apt_https_proxy is None + assert cfg.global_apt_http_proxy is None + + @pytest.mark.parametrize( + "key,value,scope", + ( + ( + "global_apt_https_proxy", + "http://proxy", + apt.AptProxyScope.GLOBAL, + ), + ( + "global_apt_http_proxy", + "https://proxy", + apt.AptProxyScope.GLOBAL, + ), + ("global_apt_https_proxy", None, apt.AptProxyScope.GLOBAL), + ), + ) + @mock.patch("uaclient.cli.setup_apt_proxy") + def test_configure_global_apt_proxy( + self, + setup_apt_proxy, + _m_resources, + _getuid, + _write_cfg, + key, + value, + scope, + FakeConfig, + ): + cfg = FakeConfig() + cfg.global_apt_http_proxy = value + cfg.global_apt_https_proxy = value + configure_apt_proxy(cfg, scope, key, value) + kwargs = { + "http_proxy": cfg.global_apt_http_proxy, + "https_proxy": cfg.global_apt_https_proxy, + "proxy_scope": scope, + } + assert 1 == setup_apt_proxy.call_count + assert [mock.call(**kwargs)] == setup_apt_proxy.call_args_list + + @pytest.mark.parametrize( + "key,value,scope", + ( + ( + "global_apt_https_proxy", + "http://proxy", + apt.AptProxyScope.UACLIENT, + ), + ( + "global_apt_http_proxy", + "https://proxy", + apt.AptProxyScope.UACLIENT, + ), + ("global_apt_https_proxy", None, apt.AptProxyScope.UACLIENT), + ), + ) + @mock.patch("uaclient.cli.setup_apt_proxy") + def test_configure_uaclient_apt_proxy( + self, + setup_apt_proxy, + _m_resources, + _getuid, + _write_cfg, + key, + value, + scope, + FakeConfig, + ): + cfg = FakeConfig() + cfg.ua_apt_http_proxy = value + cfg.ua_apt_https_proxy = value + configure_apt_proxy(cfg, scope, key, value) + kwargs = { + "http_proxy": cfg.ua_apt_http_proxy, + "https_proxy": cfg.ua_apt_https_proxy, + "proxy_scope": scope, + } + assert 1 == setup_apt_proxy.call_count + assert [mock.call(**kwargs)] == setup_apt_proxy.call_args_list + def test_set_timer_interval( self, _m_resources, _getuid, _write_cfg, FakeConfig ): diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_cli_config_show.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_cli_config_show.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_cli_config_show.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_cli_config_show.py 2022-05-18 19:44:15.000000000 +0000 @@ -71,8 +71,8 @@ None, "https_proxy", "http_proxy", - "apt_http_proxy", - "apt_https_proxy", + "global_apt_http_proxy", + "global_apt_https_proxy", ), ) @mock.patch("uaclient.config.UAConfig.write_cfg") @@ -82,8 +82,8 @@ cfg = FakeConfig() cfg.http_proxy = "http://http_proxy" cfg.https_proxy = "http://https_proxy" - cfg.apt_http_proxy = "http://apt_http_proxy" - cfg.apt_https_proxy = "http://apt_https_proxy" + cfg.global_apt_http_proxy = "http://global_apt_http_proxy" + cfg.global_apt_https_proxy = "http://global_apt_https_proxy" args = mock.MagicMock(key=optional_key) action_config_show(args, 
cfg=cfg) out, err = capsys.readouterr() @@ -94,8 +94,12 @@ """\ http_proxy http://http_proxy https_proxy http://https_proxy -apt_http_proxy http://apt_http_proxy -apt_https_proxy http://apt_https_proxy +apt_http_proxy None +apt_https_proxy None +ua_apt_http_proxy None +ua_apt_https_proxy None +global_apt_http_proxy http://global_apt_http_proxy +global_apt_https_proxy http://global_apt_https_proxy update_messaging_timer None update_status_timer None metering_timer None diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_cli_config_unset.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_cli_config_unset.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_cli_config_unset.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_cli_config_unset.py 2022-05-18 19:44:15.000000000 +0000 @@ -1,8 +1,8 @@ import mock import pytest -from uaclient import status from uaclient.cli import action_config_unset, main +from uaclient.entitlements.entitlement_status import ApplicationStatus from uaclient.exceptions import NonRootUserError HELP_OUTPUT = """\ @@ -13,7 +13,9 @@ positional arguments: key configuration key to unset from Ubuntu Advantage services. One of: http_proxy, https_proxy, apt_http_proxy, apt_https_proxy, - update_messaging_timer, update_status_timer, metering_timer + ua_apt_http_proxy, ua_apt_https_proxy, global_apt_http_proxy, + global_apt_https_proxy, update_messaging_timer, + update_status_timer, metering_timer Flags: -h, --help show this help message and exit @@ -32,14 +34,18 @@ ( "junk", " must be one of: http_proxy, https_proxy," - " apt_http_proxy, apt_https_proxy, update_messaging_timer, " - "update_status_timer, metering_timer", + " apt_http_proxy, apt_https_proxy, ua_apt_http_proxy," + " ua_apt_https_proxy, global_apt_http_proxy," + " global_apt_https_proxy, update_messaging_timer," + " update_status_timer, metering_timer", ), ( "http_proxys", " must be one of: http_proxy, https_proxy," - " apt_http_proxy, apt_https_proxy, update_messaging_timer, " - "update_status_timer, metering_timer", + " apt_http_proxy, apt_https_proxy, ua_apt_http_proxy," + " ua_apt_https_proxy, global_apt_http_proxy," + " global_apt_https_proxy, update_messaging_timer," + " update_status_timer, metering_timer", ), ), ) @@ -97,12 +103,12 @@ """ if livepatch_enabled: livepatch_status.return_value = ( - status.ApplicationStatus.ENABLED, + ApplicationStatus.ENABLED, "", ) else: livepatch_status.return_value = ( - status.ApplicationStatus.DISABLED, + ApplicationStatus.DISABLED, "", ) args = mock.MagicMock(key=key) diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_cli_detach.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_cli_detach.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_cli_detach.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_cli_detach.py 2022-05-18 19:44:15.000000000 +0000 @@ -426,24 +426,28 @@ assert "Flags" == parser._optionals.title @mock.patch("uaclient.cli.contract.get_available_resources") - def test_detach_parser_accepts_and_stores_assume_yes(self, _m_resources): - full_parser = get_parser() + def test_detach_parser_accepts_and_stores_assume_yes( + self, _m_resources, FakeConfig + ): + full_parser = get_parser(FakeConfig()) with mock.patch("sys.argv", ["ua", "detach", "--assume-yes"]): args = full_parser.parse_args() assert args.assume_yes @mock.patch("uaclient.cli.contract.get_available_resources") - def 
test_detach_parser_defaults_to_not_assume_yes(self, _m_resources): - full_parser = get_parser() + def test_detach_parser_defaults_to_not_assume_yes( + self, _m_resources, FakeConfig + ): + full_parser = get_parser(FakeConfig()) with mock.patch("sys.argv", ["ua", "detach"]): args = full_parser.parse_args() assert not args.assume_yes @mock.patch("uaclient.cli.contract.get_available_resources") - def test_detach_parser_with_json_format(self, _m_resources): - full_parser = get_parser() + def test_detach_parser_with_json_format(self, _m_resources, FakeConfig): + full_parser = get_parser(FakeConfig()) with mock.patch("sys.argv", ["ua", "detach", "--format", "json"]): args = full_parser.parse_args() diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_cli_disable.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_cli_disable.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_cli_disable.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_cli_disable.py 2022-05-18 19:44:15.000000000 +0000 @@ -6,12 +6,20 @@ import mock import pytest -from uaclient import entitlements, event_logger, exceptions, messages, status +from uaclient import config, entitlements, event_logger, exceptions, messages from uaclient.cli import action_disable, main, main_error_handler +from uaclient.entitlements.entitlement_status import ( + CanDisableFailure, + CanDisableFailureReason, +) ALL_SERVICE_MSG = "\n".join( textwrap.wrap( - "Try " + ", ".join(entitlements.valid_services(allow_beta=True)) + ".", + "Try " + + ", ".join( + entitlements.valid_services(cfg=config.UAConfig(), allow_beta=True) + ) + + ".", width=80, break_long_words=False, break_on_hyphens=False, @@ -55,8 +63,10 @@ ) @mock.patch("uaclient.cli.entitlements.entitlement_factory") @mock.patch("uaclient.cli.entitlements.valid_services") + @mock.patch("uaclient.status.status") def test_entitlement_instantiated_and_disabled( self, + m_status, m_valid_services, m_entitlement_factory, _m_getuid, @@ -67,6 +77,7 @@ tmpdir, capsys, event, + FakeConfig, ): entitlements_cls = [] entitlements_obj = [] @@ -74,8 +85,8 @@ m_valid_services.return_value = [] if not disable_return: - fail = status.CanDisableFailure( - status.CanDisableFailureReason.ALREADY_DISABLED, + fail = CanDisableFailure( + CanDisableFailureReason.ALREADY_DISABLED, message=messages.NamedMessage("test-code", "test"), ) else: @@ -95,24 +106,22 @@ return_value=entitlement_name ) - def factory_side_effect(name, ent_dict=ent_dict): + def factory_side_effect(cfg, name, ent_dict=ent_dict): return ent_dict.get(name) m_entitlement_factory.side_effect = factory_side_effect - m_cfg = mock.Mock() - m_cfg.check_lock_info.return_value = (-1, "") - m_cfg.data_path.return_value = tmpdir.join("lock").strpath - + cfg = FakeConfig.for_attached_machine() args_mock = mock.Mock() args_mock.service = service args_mock.assume_yes = assume_yes - ret = action_disable(args_mock, cfg=m_cfg) + with mock.patch.object(cfg, "check_lock_info", return_value=(-1, "")): + ret = action_disable(args_mock, cfg=cfg) for m_entitlement_cls in entitlements_cls: assert [ - mock.call(m_cfg, assume_yes=assume_yes) + mock.call(cfg, assume_yes=assume_yes) ] == m_entitlement_cls.call_args_list expected_disable_call = mock.call() @@ -122,17 +131,21 @@ ] == m_entitlement.disable.call_args_list assert return_code == ret - assert len(entitlements_cls) == m_cfg.status.call_count + assert len(entitlements_cls) == m_status.call_count + cfg = FakeConfig.for_attached_machine() 
args_mock.assume_yes = True args_mock.format = "json" with mock.patch.object( event, "_event_logger_mode", event_logger.EventLoggerMode.JSON ): with mock.patch.object(event, "set_event_mode"): - fake_stdout = io.StringIO() - with contextlib.redirect_stdout(fake_stdout): - ret = action_disable(args_mock, cfg=m_cfg) + with mock.patch.object( + cfg, "check_lock_info", return_value=(-1, "") + ): + fake_stdout = io.StringIO() + with contextlib.redirect_stdout(fake_stdout): + ret = action_disable(args_mock, cfg=cfg) expected = { "_schema_version": event_logger.JSON_SCHEMA_VERSION, @@ -161,14 +174,17 @@ @pytest.mark.parametrize("assume_yes", (True, False)) @mock.patch("uaclient.entitlements.entitlement_factory") @mock.patch("uaclient.entitlements.valid_services") + @mock.patch("uaclient.status.status") def test_entitlements_not_found_disabled_and_enabled( self, + m_status, m_valid_services, m_entitlement_factory, _m_getuid, assume_yes, tmpdir, event, + FakeConfig, ): expected_error_tmpl = messages.INVALID_SERVICE_OP_FAILURE num_calls = 2 @@ -177,8 +193,8 @@ m_ent1_obj = m_ent1_cls.return_value m_ent1_obj.disable.return_value = ( False, - status.CanDisableFailure( - status.CanDisableFailureReason.ALREADY_DISABLED, + CanDisableFailure( + CanDisableFailureReason.ALREADY_DISABLED, message=messages.NamedMessage("test-code", "test"), ), ) @@ -188,8 +204,8 @@ m_ent2_obj = m_ent2_cls.return_value m_ent2_obj.disable.return_value = ( False, - status.CanDisableFailure( - status.CanDisableFailureReason.ALREADY_DISABLED, + CanDisableFailure( + CanDisableFailureReason.ALREADY_DISABLED, message=messages.NamedMessage("test-code2", "test2"), ), ) @@ -200,7 +216,7 @@ m_ent3_obj.disable.return_value = (True, None) type(m_ent3_obj).name = mock.PropertyMock(return_value="ent3") - def factory_side_effect(name): + def factory_side_effect(cfg, name): if name == "ent2": return m_ent2_cls if name == "ent3": @@ -210,26 +226,29 @@ m_entitlement_factory.side_effect = factory_side_effect m_valid_services.return_value = ["ent2", "ent3"] - m_cfg = mock.Mock() - m_cfg.check_lock_info.return_value = (-1, "") - m_cfg.data_path.return_value = tmpdir.join("lock").strpath + cfg = FakeConfig.for_attached_machine() args_mock = mock.Mock() args_mock.service = ["ent1", "ent2", "ent3"] args_mock.assume_yes = assume_yes with pytest.raises(exceptions.UserFacingError) as err: - action_disable(args_mock, cfg=m_cfg) + with mock.patch.object( + cfg, "check_lock_info", return_value=(-1, "") + ): + action_disable(args_mock, cfg=cfg) assert ( expected_error_tmpl.format( - operation="disable", name="ent1", service_msg="Try ent2, ent3." 
+ operation="disable", + invalid_service="ent1", + service_msg="Try ent2, ent3.", ).msg == err.value.msg ) for m_ent_cls in [m_ent2_cls, m_ent3_cls]: assert [ - mock.call(m_cfg, assume_yes=assume_yes) + mock.call(cfg, assume_yes=assume_yes) ] == m_ent_cls.call_args_list expected_disable_call = mock.call() @@ -237,8 +256,9 @@ assert [expected_disable_call] == m_ent.disable.call_args_list assert 0 == m_ent1_obj.call_count - assert num_calls == m_cfg.status.call_count + assert num_calls == m_status.call_count + cfg = FakeConfig.for_attached_machine() args_mock.assume_yes = True args_mock.format = "json" with pytest.raises(SystemExit): @@ -246,9 +266,14 @@ event, "_event_logger_mode", event_logger.EventLoggerMode.JSON ): with mock.patch.object(event, "set_event_mode"): - fake_stdout = io.StringIO() - with contextlib.redirect_stdout(fake_stdout): - main_error_handler(action_disable)(args_mock, m_cfg) + with mock.patch.object( + cfg, "check_lock_info", return_value=(-1, "") + ): + fake_stdout = io.StringIO() + with contextlib.redirect_stdout(fake_stdout): + main_error_handler(action_disable)( + args_mock, cfg=cfg + ) expected = { "_schema_version": event_logger.JSON_SCHEMA_VERSION, @@ -296,7 +321,9 @@ if not uid: expected_error = expected_error_template.format( - operation="disable", name="bogus", service_msg=ALL_SERVICE_MSG + operation="disable", + invalid_service="bogus", + service_msg=ALL_SERVICE_MSG, ) else: expected_error = expected_error_template @@ -344,7 +371,7 @@ args = mock.MagicMock() expected_error = expected_error_tmpl.format( operation="disable", - name=", ".join(sorted(service)), + invalid_service=", ".join(sorted(service)), service_msg=ALL_SERVICE_MSG, ) with pytest.raises(exceptions.UserFacingError) as err: @@ -385,7 +412,7 @@ @pytest.mark.parametrize( "uid,expected_error_template", [ - (0, messages.ENABLE_FAILURE_UNATTACHED), + (0, messages.VALID_SERVICE_FAILURE_UNATTACHED), (1000, messages.NONROOT_USER), ], ) @@ -397,8 +424,11 @@ cfg = FakeConfig() args = mock.MagicMock() + args.command = "disable" if not uid: - expected_error = expected_error_template.format(name="esm-infra") + expected_error = expected_error_template.format( + valid_service="esm-infra" + ) else: expected_error = expected_error_template diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_cli_enable.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_cli_enable.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_cli_enable.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_cli_enable.py 2022-05-18 19:44:15.000000000 +0000 @@ -6,8 +6,12 @@ import mock import pytest -from uaclient import entitlements, event_logger, exceptions, messages, status +from uaclient import entitlements, event_logger, exceptions, messages from uaclient.cli import action_enable, main, main_error_handler +from uaclient.entitlements.entitlement_status import ( + CanEnableFailure, + CanEnableFailureReason, +) HELP_OUTPUT = textwrap.dedent( """\ @@ -153,7 +157,7 @@ @pytest.mark.parametrize( "uid,expected_error_template", [ - (0, messages.ENABLE_FAILURE_UNATTACHED), + (0, messages.VALID_SERVICE_FAILURE_UNATTACHED), (1000, messages.NONROOT_USER), ], ) @@ -173,10 +177,13 @@ cfg = FakeConfig() args = mock.MagicMock() + args.command = "enable" args.service = ["esm-infra"] if not uid: - expected_error = expected_error_template.format(name="esm-infra") + expected_error = expected_error_template.format( + valid_service="esm-infra" + ) else: expected_error = 
expected_error_template @@ -208,6 +215,7 @@ } assert expected == json.loads(capsys.readouterr()[0]) + @pytest.mark.parametrize("is_attached", (True, False)) @pytest.mark.parametrize( "uid,expected_error_template", [ @@ -221,34 +229,111 @@ m_getuid, uid, expected_error_template, + is_attached, event, FakeConfig, ): """Check invalid service name results in custom error message.""" m_getuid.return_value = uid - cfg = FakeConfig.for_attached_machine() + if is_attached: + cfg = FakeConfig.for_attached_machine() + service_msg = "\n".join( + textwrap.wrap( + ( + "Try " + + ", ".join( + entitlements.valid_services( + cfg=cfg, allow_beta=True + ) + ) + + "." + ), + width=80, + break_long_words=False, + break_on_hyphens=False, + ) + ) + else: + cfg = FakeConfig() + service_msg = "See https://ubuntu.com/advantage" + args = mock.MagicMock() args.service = ["bogus"] + args.command = "enable" with pytest.raises(exceptions.UserFacingError) as err: action_enable(args, cfg) - service_msg = "\n".join( - textwrap.wrap( - ( - "Try " - + ", ".join(entitlements.valid_services(allow_beta=True)) - + "." - ), - width=80, - break_long_words=False, - break_on_hyphens=False, + if not uid: + expected_error = expected_error_template.format( + operation="enable", + invalid_service="bogus", + service_msg=service_msg, ) - ) + else: + expected_error = expected_error_template + + assert expected_error.msg == err.value.msg + + with pytest.raises(SystemExit): + with mock.patch.object( + event, "_event_logger_mode", event_logger.EventLoggerMode.JSON + ): + fake_stdout = io.StringIO() + with contextlib.redirect_stdout(fake_stdout): + main_error_handler(action_enable)(args, cfg) + + expected = { + "_schema_version": event_logger.JSON_SCHEMA_VERSION, + "result": "failure", + "errors": [ + { + "message": expected_error.msg, + "message_code": expected_error.name, + "service": None, + "type": "system", + } + ], + "failed_services": ["bogus"] if not uid and is_attached else [], + "needs_reboot": False, + "processed_services": [], + "warnings": [], + } + assert expected == json.loads(fake_stdout.getvalue()) + + @pytest.mark.parametrize( + "uid,expected_error_template", + [ + (0, messages.MIXED_SERVICES_FAILURE_UNATTACHED), + (1000, messages.NONROOT_USER), + ], + ) + def test_unattached_invalid_and_valid_service_error_message( + self, + _request_updated_contract, + m_getuid, + uid, + expected_error_template, + event, + FakeConfig, + ): + """Check invalid service name results in custom error message.""" + + m_getuid.return_value = uid + cfg = FakeConfig() + + args = mock.MagicMock() + args.service = ["bogus", "fips"] + args.command = "enable" + with pytest.raises(exceptions.UserFacingError) as err: + action_enable(args, cfg) if not uid: expected_error = expected_error_template.format( - operation="enable", name="bogus", service_msg=service_msg + operation="enable", + valid_service="fips", + invalid_service="bogus", + service_msg="", ) else: expected_error = expected_error_template @@ -274,7 +359,7 @@ "type": "system", } ], - "failed_services": ["bogus"] if not uid else [], + "failed_services": [], "needs_reboot": False, "processed_services": [], "warnings": [], @@ -282,7 +367,7 @@ assert expected == json.loads(fake_stdout.getvalue()) @pytest.mark.parametrize("assume_yes", (True, False)) - @mock.patch("uaclient.contract.get_available_resources", return_value={}) + @mock.patch("uaclient.status.get_available_resources", return_value={}) @mock.patch("uaclient.entitlements.valid_services") def test_assume_yes_passed_to_service_init( self, @@ 
-322,7 +407,7 @@ ) ] == m_entitlement_cls.call_args_list - @mock.patch("uaclient.contract.get_available_resources", return_value={}) + @mock.patch("uaclient.status.get_available_resources", return_value={}) @mock.patch("uaclient.entitlements.entitlement_factory") @mock.patch("uaclient.entitlements.valid_services") def test_entitlements_not_found_disabled_and_enabled( @@ -349,7 +434,7 @@ m_ent2_obj = m_ent2_cls.return_value m_ent2_obj.enable.return_value = ( False, - status.CanEnableFailure(status.CanEnableFailureReason.IS_BETA), + CanEnableFailure(CanEnableFailureReason.IS_BETA), ) m_ent3_cls = mock.Mock() @@ -359,7 +444,7 @@ m_ent3_obj = m_ent3_cls.return_value m_ent3_obj.enable.return_value = (True, None) - def factory_side_effect(name, not_found_okay=True): + def factory_side_effect(cfg, name, not_found_okay=True): if name == "ent2": return m_ent2_cls if name == "ent3": @@ -385,7 +470,7 @@ expected_error = expected_error_tmpl.format( operation="enable", - name="ent1, ent2", + invalid_service="ent1, ent2", service_msg=( "Try " + ", ".join(entitlements.valid_services(allow_beta=False)) @@ -439,7 +524,7 @@ assert expected == json.loads(fake_stdout.getvalue()) @pytest.mark.parametrize("beta_flag", ((False), (True))) - @mock.patch("uaclient.contract.get_available_resources", return_value={}) + @mock.patch("uaclient.status.get_available_resources", return_value={}) @mock.patch("uaclient.entitlements.entitlement_factory") @mock.patch("uaclient.entitlements.valid_services") def test_entitlements_not_found_and_beta( @@ -465,9 +550,7 @@ m_ent2_is_beta = mock.PropertyMock(return_value=True) type(m_ent2_cls)._is_beta = m_ent2_is_beta m_ent2_obj = m_ent2_cls.return_value - failure_reason = status.CanEnableFailure( - status.CanEnableFailureReason.IS_BETA - ) + failure_reason = CanEnableFailure(CanEnableFailureReason.IS_BETA) if beta_flag: m_ent2_obj.enable.return_value = (True, None) else: @@ -487,7 +570,7 @@ args_mock.assume_yes = assume_yes args_mock.beta = beta_flag - def factory_side_effect(name, not_found_okay=True): + def factory_side_effect(cfg, name, not_found_okay=True): if name == "ent2": return m_ent2_cls if name == "ent3": @@ -496,7 +579,7 @@ m_entitlement_factory.side_effect = factory_side_effect - def valid_services_side_effect(allow_beta, all_names=False): + def valid_services_side_effect(cfg, allow_beta, all_names=False): if allow_beta: return ["ent2", "ent3"] return ["ent2"] @@ -508,7 +591,7 @@ mock_ent_list = [m_ent3_cls] mock_obj_list = [m_ent3_obj] - service_names = entitlements.valid_services(allow_beta=beta_flag) + service_names = entitlements.valid_services(cfg, allow_beta=beta_flag) ent_str = "Try " + ", ".join(service_names) + "." 
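The hunks above consistently thread a `cfg` argument through the entitlement helpers (`factory_side_effect(cfg, name, ...)`, `valid_services(cfg, allow_beta=...)`). As a hedged illustration of the call-site change these tests are adapting to — signatures inferred from the mocks in this diff, not from library source shown here:

```python
# Sketch only: the cfg-aware helper signatures that the mocks above emulate.
# `cfg` is a uaclient.config.UAConfig (or the tests' FakeConfig stand-in).
from uaclient import entitlements


def enableable_service_names(cfg, allow_beta=False):
    # 27.8 called valid_services(allow_beta=...); 27.9 also passes the config
    # so per-config feature flags such as allow_beta can be honored.
    return entitlements.valid_services(cfg=cfg, allow_beta=allow_beta)


def lookup_entitlement_class(cfg, name):
    # entitlement_factory() likewise takes cfg first, matching the
    # factory_side_effect(cfg, name, not_found_okay=True) helpers above.
    return entitlements.entitlement_factory(cfg, name)
```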
if not beta_flag: not_found_name += ", ent2" @@ -530,7 +613,9 @@ action_enable(args_mock, cfg) expected_error = expected_error_tmpl.format( - operation="enable", name=not_found_name, service_msg=service_msg + operation="enable", + invalid_service=not_found_name, + service_msg=service_msg, ) assert expected_error.msg == err.value.msg assert expected_msg == fake_stdout.getvalue() @@ -582,7 +667,7 @@ } assert expected == json.loads(fake_stdout.getvalue()) - @mock.patch("uaclient.contract.get_available_resources", return_value={}) + @mock.patch("uaclient.status.get_available_resources", return_value={}) def test_print_message_when_can_enable_fails( self, _m_get_available_resources, @@ -597,8 +682,8 @@ m_entitlement_obj = m_entitlement_cls.return_value m_entitlement_obj.enable.return_value = ( False, - status.CanEnableFailure( - status.CanEnableFailureReason.ALREADY_ENABLED, + CanEnableFailure( + CanEnableFailureReason.ALREADY_ENABLED, message=messages.NamedMessage("test-code", "msg"), ), ) @@ -685,7 +770,7 @@ assert expected_msg == fake_stdout.getvalue() - service_names = entitlements.valid_services(allow_beta=beta) + service_names = entitlements.valid_services(cfg=cfg, allow_beta=beta) ent_str = "Try " + ", ".join(service_names) + "." service_msg = "\n".join( textwrap.wrap( @@ -697,7 +782,7 @@ ) expected_error = expected_error_tmpl.format( operation="enable", - name=", ".join(sorted(service)), + invalid_service=", ".join(sorted(service)), service_msg=service_msg, ) assert expected_error.msg == err.value.msg @@ -729,9 +814,11 @@ assert expected == json.loads(fake_stdout.getvalue()) @pytest.mark.parametrize("allow_beta", ((True), (False))) - @mock.patch("uaclient.contract.get_available_resources", return_value={}) + @mock.patch("uaclient.status.get_available_resources", return_value={}) + @mock.patch("uaclient.status.status") def test_entitlement_instantiated_and_enabled( self, + m_status, _m_get_available_resources, _m_request_updated_contract, m_getuid, @@ -745,7 +832,6 @@ m_entitlement_obj.enable.return_value = (True, None) cfg = FakeConfig.for_attached_machine() - cfg.status = mock.Mock() args_mock = mock.MagicMock() args_mock.assume_yes = False @@ -775,7 +861,7 @@ expected_ret = 0 assert [expected_enable_call] == m_entitlement.enable.call_args_list assert expected_ret == ret - assert 1 == cfg.status.call_count + assert 1 == m_status.call_count with mock.patch( "uaclient.entitlements.entitlement_factory", diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_cli.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_cli.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_cli.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_cli.py 2022-05-18 19:44:15.000000000 +0000 @@ -90,11 +90,12 @@ @pytest.fixture(params=["direct", "--help", "ua help", "ua help --all"]) -def get_help(request, capsys): +def get_help(request, capsys, FakeConfig): + cfg = FakeConfig() if request.param == "direct": def _get_help_output(): - parser = get_parser() + parser = get_parser(cfg) help_file = io.StringIO() parser.print_help(file=help_file) return (help_file.getvalue(), "base") @@ -102,7 +103,7 @@ elif request.param == "--help": def _get_help_output(): - parser = get_parser() + parser = get_parser(cfg) with mock.patch("sys.argv", ["ua", "--help"]): with pytest.raises(SystemExit): parser.parse_args() @@ -177,7 +178,7 @@ ("json", {"name": "test", "available": "yes", "help": "Test"}), ), ) - 
@mock.patch("uaclient.contract.get_available_resources") + @mock.patch("uaclient.status.get_available_resources") @mock.patch( "uaclient.config.UAConfig.is_attached", new_callable=mock.PropertyMock ) @@ -206,7 +207,7 @@ fake_stdout = io.StringIO() with mock.patch( - "uaclient.entitlements.entitlement_factory", + "uaclient.status.entitlement_factory", return_value=m_entitlement_cls, ): with contextlib.redirect_stdout(fake_stdout): @@ -231,7 +232,7 @@ ), ) @pytest.mark.parametrize("is_beta", (True, False)) - @mock.patch("uaclient.contract.get_available_resources") + @mock.patch("uaclient.status.get_available_resources") @mock.patch( "uaclient.config.UAConfig.is_attached", new_callable=mock.PropertyMock ) @@ -291,7 +292,7 @@ fake_stdout = io.StringIO() with mock.patch( - "uaclient.entitlements.entitlement_factory", + "uaclient.status.entitlement_factory", return_value=m_entitlement_cls, ): with contextlib.redirect_stdout(fake_stdout): @@ -310,7 +311,7 @@ ufs_call_count == m_entitlement_obj.user_facing_status.call_count ) - @mock.patch("uaclient.contract.get_available_resources") + @mock.patch("uaclient.status.get_available_resources") def test_help_command_for_invalid_service(self, m_available_resources): """Test help command when an invalid service is provided.""" m_args = mock.MagicMock() @@ -668,8 +669,11 @@ assert "NOT_UA_ENV=YES" not in log @mock.patch("uaclient.cli.contract.get_available_resources") - def test_argparse_errors_well_formatted(self, _m_resources, capsys): - parser = get_parser() + def test_argparse_errors_well_formatted( + self, _m_resources, capsys, FakeConfig + ): + cfg = FakeConfig() + parser = get_parser(cfg) with mock.patch("sys.argv", ["ua", "enable"]): with pytest.raises(SystemExit) as excinfo: parser.parse_args() @@ -855,13 +859,13 @@ "uaclient.cli.entitlements.valid_services", return_value=["ent1", "ent2", "ent3"], ) - def test_get_valid_entitlements(self, _m_valid_services): + def test_get_valid_entitlements(self, _m_valid_services, FakeConfig): service = ["ent1", "ent3", "ent4"] expected_ents_found = ["ent1", "ent3"] expected_ents_not_found = ["ent4"] actual_ents_found, actual_ents_not_found = get_valid_entitlement_names( - service + service, cfg=FakeConfig() ) assert expected_ents_found == actual_ents_found diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_cli_refresh.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_cli_refresh.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_cli_refresh.py 2022-04-01 13:27:49.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_cli_refresh.py 2022-05-18 19:44:15.000000000 +0000 @@ -10,15 +10,18 @@ Refresh existing Ubuntu Advantage contract and update services. positional arguments: - {contract,config} Target to refresh. `ua refresh contract` will update - contract details from the server and perform any updates - necessary. `ua refresh config` will reload /etc/ubuntu- - advantage/uaclient.conf and perform any changes - necessary. `ua refresh` is the equivalent of `ua refresh - config && ua refresh contract`. + {contract,config,messages} + Target to refresh. `ua refresh contract` will update + contract details from the server and perform any + updates necessary. `ua refresh config` will reload + /etc/ubuntu-advantage/uaclient.conf and perform any + changes necessary. `ua refresh messages` will refresh + the APT and MOTD messages associated with UA. `ua + refresh` is the equivalent of `ua refresh config && ua + refresh contract && ua refresh motd`. 
Flags: - -h, --help show this help message and exit + -h, --help show this help message and exit """ @@ -30,7 +33,8 @@ with mock.patch("sys.argv", ["/usr/bin/ua", "refresh", "--help"]): main() out, _err = capsys.readouterr() - assert HELP_OUTPUT == out + print(out) + assert HELP_OUTPUT in out def test_non_root_users_are_rejected(self, getuid, FakeConfig): """Check that a UID != 0 will receive a message and exit non-zero""" @@ -77,8 +81,14 @@ @mock.patch("logging.exception") @mock.patch("uaclient.contract.request_updated_contract") + @mock.patch("uaclient.cli.config.UAConfig.remove_notice") def test_refresh_contract_error_on_failure_to_update_contract( - self, request_updated_contract, logging_error, getuid, FakeConfig + self, + m_remove_notice, + request_updated_contract, + logging_error, + getuid, + FakeConfig, ): """On failure in request_updates_contract emit an error.""" request_updated_contract.side_effect = exceptions.UrlError( @@ -91,10 +101,19 @@ action_refresh(mock.MagicMock(target="contract"), cfg=cfg) assert messages.REFRESH_CONTRACT_FAILURE == excinfo.value.msg + assert [ + mock.call("", messages.NOTICE_REFRESH_CONTRACT_WARNING) + ] != m_remove_notice.call_args_list @mock.patch("uaclient.contract.request_updated_contract") + @mock.patch("uaclient.cli.config.UAConfig.remove_notice") def test_refresh_contract_happy_path( - self, request_updated_contract, getuid, capsys, FakeConfig + self, + m_remove_notice, + request_updated_contract, + getuid, + capsys, + FakeConfig, ): """On success from request_updates_contract root user can refresh.""" request_updated_contract.return_value = True @@ -105,6 +124,77 @@ assert 0 == ret assert messages.REFRESH_CONTRACT_SUCCESS in capsys.readouterr()[0] assert [mock.call(cfg)] == request_updated_contract.call_args_list + assert [ + mock.call("", messages.NOTICE_REFRESH_CONTRACT_WARNING), + mock.call("", "Operation in progress.*"), + ] == m_remove_notice.call_args_list + + @mock.patch("uaclient.cli.update_apt_and_motd_messages") + def test_refresh_messages_error(self, m_update_motd, getuid, FakeConfig): + """On failure in update_apt_and_motd_messages emit an error.""" + m_update_motd.side_effect = Exception("test") + + with pytest.raises(exceptions.UserFacingError) as excinfo: + action_refresh(mock.MagicMock(target="messages"), cfg=FakeConfig()) + + assert messages.REFRESH_MESSAGES_FAILURE == excinfo.value.msg + + @mock.patch("uaclient.jobs.update_messaging.exists", return_value=True) + @mock.patch("logging.exception") + @mock.patch("uaclient.util.subp") + @mock.patch("uaclient.cli.update_apt_and_motd_messages") + def test_refresh_messages_doesnt_fail_if_update_notifier_does( + self, + m_update_motd, + m_subp, + logging_error, + _m_path, + getuid, + capsys, + FakeConfig, + ): + subp_exc = Exception("test") + m_subp.side_effect = [subp_exc, ""] + + ret = action_refresh( + mock.MagicMock(target="messages"), cfg=FakeConfig() + ) + + assert 0 == ret + assert 1 == logging_error.call_count + assert [mock.call(subp_exc)] == logging_error.call_args_list + assert messages.REFRESH_MESSAGES_SUCCESS in capsys.readouterr()[0] + + @mock.patch("uaclient.jobs.update_messaging.exists", return_value=True) + @mock.patch("logging.exception") + @mock.patch("uaclient.util.subp") + @mock.patch("uaclient.cli.update_apt_and_motd_messages") + def test_refresh_messages_systemctl_error( + self, m_update_motd, m_subp, logging_error, _m_path, getuid, FakeConfig + ): + subp_exc = Exception("test") + m_subp.side_effect = ["", subp_exc] + + with 
pytest.raises(exceptions.UserFacingError) as excinfo: + action_refresh(mock.MagicMock(target="messages"), cfg=FakeConfig()) + + assert 1 == logging_error.call_count + assert [mock.call(subp_exc)] == logging_error.call_args_list + assert messages.REFRESH_MESSAGES_FAILURE == excinfo.value.msg + + @mock.patch("uaclient.cli.refresh_motd") + @mock.patch("uaclient.cli.update_apt_and_motd_messages") + def test_refresh_messages_happy_path( + self, m_update_motd, m_refresh_motd, getuid, capsys, FakeConfig + ): + """On success from request_updates_contract root user can refresh.""" + cfg = FakeConfig() + ret = action_refresh(mock.MagicMock(target="messages"), cfg=cfg) + + assert 0 == ret + assert messages.REFRESH_MESSAGES_SUCCESS in capsys.readouterr()[0] + assert [mock.call(cfg)] == m_update_motd.call_args_list + assert [mock.call()] == m_refresh_motd.call_args_list @mock.patch("logging.exception") @mock.patch( @@ -137,8 +227,10 @@ @mock.patch("uaclient.contract.request_updated_contract") @mock.patch("uaclient.config.UAConfig.process_config") + @mock.patch("uaclient.cli.config.UAConfig.remove_notice") def test_refresh_all_happy_path( self, + m_remove_notice, m_process_config, m_request_updated_contract, getuid, @@ -156,3 +248,7 @@ assert messages.REFRESH_CONTRACT_SUCCESS in out assert [mock.call()] == m_process_config.call_args_list assert [mock.call(cfg)] == m_request_updated_contract.call_args_list + assert [ + mock.call("", messages.NOTICE_REFRESH_CONTRACT_WARNING), + mock.call("", "Operation in progress.*"), + ] == m_remove_notice.call_args_list diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_cli_security_status.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_cli_security_status.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_cli_security_status.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_cli_security_status.py 2022-05-18 19:44:15.000000000 +0000 @@ -17,8 +17,19 @@ """\ usage: security-status \[-h\] --format {json,yaml} -Show security updates for packages in the system, including all available ESM -related content. +Show security updates for packages in the system, including all +available ESM related content. + +Besides the list of security updates, it also shows a summary of the +installed packages based on the origin. +- main/restricted/universe/multiverse: packages from the Ubuntu archive +- ESM Infra/Apps: packages from ESM +- third-party: packages installed from non-Ubuntu sources +- unknown: packages which don't have an installation source \(like local + deb packages or packages for which the source was removed\) + +The summary contains basic information about UA and ESM. 
For a complete +status on UA services, run 'ua status' (optional arguments|options): -h, --help show this help message and exit @@ -105,12 +116,14 @@ class TestParser: @mock.patch(M_PATH + "contract.get_available_resources") - def test_security_status_parser_updates_parser_config(self, _m_resources): + def test_security_status_parser_updates_parser_config( + self, _m_resources, FakeConfig + ): """Update the parser configuration for 'security-status'.""" m_parser = security_status_parser(mock.Mock()) assert "security-status" == m_parser.prog - full_parser = get_parser() + full_parser = get_parser(FakeConfig()) with mock.patch( "sys.argv", ["ua", "security-status", "--format", "json"] ): diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_cli_status.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_cli_status.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_cli_status.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_cli_status.py 2022-05-18 19:44:15.000000000 +0000 @@ -14,6 +14,7 @@ from uaclient import exceptions, messages, status, version from uaclient.cli import action_status, get_parser, main, status_parser from uaclient.event_logger import EventLoggerMode +from uaclient.tests.test_cli import M_PATH_UACONFIG M_PATH = "uaclient.cli." @@ -259,14 +260,15 @@ ) +@mock.patch("uaclient.cli.contract.is_contract_changed", return_value=False) @mock.patch("uaclient.config.UAConfig.remove_notice") @mock.patch("uaclient.util.should_reboot", return_value=False) @mock.patch( - M_PATH + "contract.get_available_resources", + "uaclient.status.get_available_resources", return_value=RESPONSE_AVAILABLE_SERVICES, ) @mock.patch( - M_PATH + "contract.get_contract_information", + "uaclient.status.get_contract_information", return_value=RESPONSE_CONTRACT_INFO, ) @mock.patch(M_PATH + "os.getuid", return_value=0) @@ -278,6 +280,7 @@ _m_get_available_resources, _m_should_reboot, _m_remove_notice, + _m_contract_changed, capsys, ): with pytest.raises(SystemExit): @@ -304,6 +307,7 @@ _m_get_avail_resources, _m_should_reboot, _m_remove_notice, + _m_contract_changed, notices, notice_status, use_all, @@ -342,6 +346,7 @@ _m_get_avail_resources, _m_should_reboot, _m_remove_notice, + _m_contract_changed, capsys, FakeConfig, ): @@ -360,6 +365,7 @@ _m_get_avail_resources, _m_should_reboot, _m_remove_notice, + _m_contract_changed, capsys, FakeConfig, ): @@ -385,6 +391,7 @@ _m_get_avail_resources, _m_should_reboot, _m_remove_notice, + _m_contract_changed, capsys, FakeConfig, ): @@ -429,6 +436,7 @@ _m_get_avail_resources, _m_should_reboot, _m_remove_notice, + _m_contract_changed, use_all, environ, format_type, @@ -538,6 +546,7 @@ _m_get_avail_resources, _m_should_reboot, _m_remove_notice, + _m_contract_changed, use_all, environ, format_type, @@ -682,6 +691,7 @@ _m_get_avail_resources, _m_should_reboot, _m_remove_notice, + _m_contract_changed, use_all, format_type, event_logger_mode, @@ -813,6 +823,7 @@ m_get_avail_resources, _m_should_reboot, _m_remove_notice, + _m_contract_changed, FakeConfig, ): """Raise UrlError on connectivity issues""" @@ -838,6 +849,7 @@ _m_get_avail_resources, _m_should_reboot, _m_remove_notice, + _m_contract_changed, encoding, expected_dash, FakeConfig, @@ -888,6 +900,7 @@ _m_get_avail_resources, _m_should_reboot, _m_remove_notice, + _m_contract_changed, exception_to_throw, exception_type, exception_message, @@ -938,6 +951,7 @@ _m_get_avail_resources, _m_should_reboot, _m_remove_notice, + _m_contract_changed, 
format_type, event_logger_mode, token_to_use, @@ -975,15 +989,63 @@ assert output["errors"][0]["message"] == warning_message + @pytest.mark.parametrize( + "contract_changed,is_attached", + ( + (False, True), + (True, False), + (True, True), + (False, False), + ), + ) + @mock.patch(M_PATH_UACONFIG + "add_notice") + def test_is_contract_changed( + self, + m_add_notice, + _m_getuid, + _m_get_contract_information, + _m_get_available_resources, + _m_should_reboot, + _m_remove_notice, + _m_contract_changed, + contract_changed, + is_attached, + capsys, + FakeConfig, + ): + _m_contract_changed.return_value = contract_changed + if is_attached: + cfg = FakeConfig().for_attached_machine() + else: + cfg = FakeConfig() + + action_status( + mock.MagicMock(all=False, simulate_with_token=None), cfg=cfg + ) + + if is_attached: + if contract_changed: + assert [ + mock.call("", messages.NOTICE_REFRESH_CONTRACT_WARNING) + ] == m_add_notice.call_args_list + else: + assert [ + mock.call("", messages.NOTICE_REFRESH_CONTRACT_WARNING) + ] not in m_add_notice.call_args_list + else: + assert _m_contract_changed.call_count == 0 + class TestStatusParser: @mock.patch(M_PATH + "contract.get_available_resources") - def test_status_parser_updates_parser_config(self, _m_resources): + def test_status_parser_updates_parser_config( + self, _m_resources, FakeConfig + ): """Update the parser configuration for 'status'.""" m_parser = status_parser(mock.Mock()) assert "status" == m_parser.prog - full_parser = get_parser() + full_parser = get_parser(FakeConfig()) with mock.patch( "sys.argv", [ diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_config.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_config.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_config.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_config.py 2022-05-18 19:44:15.000000000 +0000 @@ -10,9 +10,8 @@ import pytest import yaml -from uaclient import entitlements, exceptions, messages, status, util, version +from uaclient import apt, exceptions, messages, util from uaclient.config import ( - DEFAULT_STATUS, PRIVATE_SUBDIR, UA_CONFIGURABLE_KEYS, VALID_UA_CONFIG_KEYS, @@ -22,21 +21,9 @@ get_config_path, parse_config, ) -from uaclient.defaults import CONFIG_DEFAULTS, DEFAULT_CONFIG_FILE -from uaclient.entitlements import ( - ENTITLEMENT_CLASSES, - entitlement_factory, - valid_services, -) -from uaclient.entitlements.base import IncompatibleService -from uaclient.entitlements.fips import FIPSEntitlement -from uaclient.entitlements.ros import ROSEntitlement -from uaclient.entitlements.tests.test_base import ConcreteTestEntitlement -from uaclient.status import ( - ContractStatus, - UserFacingConfigStatus, - UserFacingStatus, -) +from uaclient.defaults import DEFAULT_CONFIG_FILE +from uaclient.entitlements import valid_services +from uaclient.entitlements.entitlement_status import ApplicationStatus KNOWN_DATA_PATHS = ( ("machine-access-cis", "machine-access-cis.json"), @@ -44,26 +31,21 @@ ) M_PATH = "uaclient.entitlements." 
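From this point on the tests stop reaching the status enums through `uaclient.status`: `test_config.py` now imports `ApplicationStatus` from `uaclient.entitlements.entitlement_status`, and the enable tests earlier in this diff use `CanEnableFailure`/`CanEnableFailureReason` unqualified. A sketch of the assumed new import layout — only `ApplicationStatus` is confirmed by this diff; the `CanEnableFailure` names are inferred to live in the same module:

```python
# Assumed import locations for the relocated entitlement status types.
from uaclient import messages
from uaclient.entitlements.entitlement_status import (
    ApplicationStatus,
    CanEnableFailure,
    CanEnableFailureReason,
)

# Construction as used by the enable tests above: a failure reason plus an
# optional user-facing NamedMessage.
failure = CanEnableFailure(
    CanEnableFailureReason.ALREADY_ENABLED,
    message=messages.NamedMessage("test-code", "msg"),
)
```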
-DEFAULT_CFG_STATUS = { - "execution_status": DEFAULT_STATUS["execution_status"], - "execution_details": DEFAULT_STATUS["execution_details"], -} - ALL_RESOURCES_AVAILABLE = [ {"name": name, "available": True} - for name in valid_services(allow_beta=True) + for name in valid_services(cfg=UAConfig(), allow_beta=True) ] ALL_RESOURCES_ENTITLED = [ {"type": name, "entitled": True} - for name in valid_services(allow_beta=True) + for name in valid_services(cfg=UAConfig(), allow_beta=True) ] NO_RESOURCES_ENTITLED = [ {"type": name, "entitled": False} - for name in valid_services(allow_beta=True) + for name in valid_services(cfg=UAConfig(), allow_beta=True) ] RESP_ONLY_FIPS_RESOURCE_AVAILABLE = [ {"name": name, "available": name == "fips"} - for name in valid_services(allow_beta=True) + for name in valid_services(cfg=UAConfig(), allow_beta=True) ] @@ -234,8 +216,8 @@ # picked up by Ubuntu-Advantage client. contract_url: https://contracts.canonical.com +daemon_log_file: /var/log/ubuntu-advantage-daemon.log data_dir: /var/lib/ubuntu-advantage -license_check_log_file: /var/log/ubuntu-advantage-license-check.log log_file: /var/log/ubuntu-advantage.log log_level: debug security_url: https://ubuntu.com/security @@ -248,13 +230,13 @@ # picked up by Ubuntu-Advantage client. contract_url: https://contracts.canonical.com +daemon_log_file: /var/log/ubuntu-advantage-daemon.log data_dir: /var/lib/ubuntu-advantage features: extra_security_params: hide: true new: 2 show_beta: true -license_check_log_file: /var/log/ubuntu-advantage-license-check.log log_file: /var/log/ubuntu-advantage.log log_level: debug security_url: https://ubuntu.com/security @@ -268,6 +250,10 @@ "ua_config": { "apt_http_proxy": None, "apt_https_proxy": None, + "global_apt_http_proxy": None, + "global_apt_https_proxy": None, + "ua_apt_http_proxy": None, + "ua_apt_https_proxy": None, "http_proxy": None, "https_proxy": None, "update_messaging_timer": None, @@ -285,28 +271,27 @@ ): """Getters and settings are available fo UA_CONFIGURABLE_KEYS.""" cfg = UAConfig({"data_dir": tmpdir.strpath}) - assert None is getattr(cfg, attr_name) - setattr(cfg, attr_name, attr_name + "value") - assert attr_name + "value" == getattr(cfg, attr_name) - assert attr_name + "value" == cfg.cfg["ua_config"][attr_name] + assert None is getattr(cfg, attr_name, None) + cfg_non_members = ("apt_http_proxy", "apt_https_proxy") + if attr_name not in cfg_non_members: + setattr(cfg, attr_name, attr_name + "value") + assert attr_name + "value" == getattr(cfg, attr_name) + assert attr_name + "value" == cfg.cfg["ua_config"][attr_name] class TestWriteCfg: - @pytest.mark.parametrize("caplog_text", [logging.WARNING], indirect=True) @pytest.mark.parametrize( - "orig_content, expected, warnings", + "orig_content, expected", ( ( CFG_BASE_CONTENT, CFG_BASE_CONTENT + yaml.dump(UA_CFG_DICT, default_flow_style=False), - [], ), ( # Yaml output is sorted alphabetically by key "\n".join(sorted(CFG_BASE_CONTENT.splitlines(), reverse=True)), CFG_BASE_CONTENT + yaml.dump(UA_CFG_DICT, default_flow_style=False), - [], ), # Any custom comments or unrecognized config keys are dropped ( @@ -314,10 +299,6 @@ + CFG_BASE_CONTENT, CFG_BASE_CONTENT + yaml.dump(UA_CFG_DICT, default_flow_style=False), - [ - "Ignoring invalid uaclient.conf key:" - " unknown-keys-not-preserved=True" - ], ), # All features/settings_overrides ordered after ua_config ( @@ -326,7 +307,6 @@ " show_beta: true\nsettings_overrides:\n d: 2\n c: 1\n", CFG_FEATURES_CONTENT + yaml.dump(UA_CFG_DICT, default_flow_style=False), - [], ), ( 
"settings_overrides:\n c: 1\n d: 2\nfeatures:\n" @@ -335,23 +315,19 @@ + CFG_BASE_CONTENT, CFG_FEATURES_CONTENT + yaml.dump(UA_CFG_DICT, default_flow_style=False), - [], ), ), ) def test_write_cfg_reads_cfg_andpersists_structured_content_to_config_path( - self, orig_content, warnings, expected, caplog_text, tmpdir + self, orig_content, expected, tmpdir ): """write_cfg writes structured, ordered config YAML to config_path.""" orig_conf = tmpdir.join("orig_uaclient.conf") orig_conf.write(orig_content) - cfg = UAConfig(cfg=parse_config(orig_conf.strpath)) + cfg = UAConfig(cfg=parse_config(orig_conf.strpath)[0]) out_conf = tmpdir.join("uaclient.conf") cfg.write_cfg(out_conf.strpath) assert expected == out_conf.read() - warn_logs = caplog_text() - for warning in warnings: - assert warning in warn_logs class TestWriteCache: @@ -702,692 +678,132 @@ ) -@mock.patch("uaclient.config.UAConfig.remove_notice") -@mock.patch("uaclient.util.should_reboot", return_value=False) -class TestStatus: - esm_desc = entitlement_factory("esm-infra").description - ros_desc = entitlement_factory("ros").description - - def check_beta(self, cls, show_beta, uacfg=None, status=""): - if not show_beta: - if status == "enabled": - return False - - if uacfg: - allow_beta = uacfg.cfg.get("features", {}).get( - "allow_beta", False - ) - - if allow_beta: - return False - - return cls.is_beta - - return False - +class TestProcessConfig: @pytest.mark.parametrize( - "show_beta,expected_services", - ( + "http_proxy, https_proxy, snap_is_installed, snap_http_val, " + "snap_https_val, livepatch_enabled, livepatch_http_val, " + "livepatch_https_val, snap_livepatch_msg, " + "global_https, global_http, ua_https, ua_http, apt_https, apt_http", + [ ( - True, - [ - { - "available": "yes", - "name": "esm-infra", - "description": esm_desc, - }, - { - "available": "no", - "name": "ros", - "description": ros_desc, - }, - ], + "http", + "https", + False, + None, + None, + False, + None, + None, + "", + None, + None, + None, + None, + None, + None, ), ( + "http", + "https", + True, + None, + None, False, - [ - { - "available": "yes", - "name": "esm-infra", - "description": esm_desc, - } - ], - ), - ), - ) - @mock.patch("uaclient.contract.get_available_resources") - @mock.patch("uaclient.config.os.getuid", return_value=0) - def test_root_unattached( - self, - _m_getuid, - m_get_available_resources, - _m_should_reboot, - m_remove_notice, - show_beta, - expected_services, - FakeConfig, - ): - """Test we get the correct status dict when unattached""" - cfg = FakeConfig() - m_get_available_resources.return_value = [ - {"name": "esm-infra", "available": True}, - {"name": "ros", "available": False}, - ] - expected = copy.deepcopy(DEFAULT_STATUS) - expected["version"] = mock.ANY - expected["services"] = expected_services - with mock.patch( - "uaclient.config.UAConfig._get_config_status" - ) as m_get_cfg_status: - m_get_cfg_status.return_value = DEFAULT_CFG_STATUS - assert expected == cfg.status(show_beta=show_beta) - - expected_calls = [ - mock.call( - "", - messages.ENABLE_REBOOT_REQUIRED_TMPL.format( - operation="fix operation" - ), - ) - ] - - assert expected_calls == m_remove_notice.call_args_list - - @pytest.mark.parametrize("show_beta", (True, False)) - @pytest.mark.parametrize( - "features_override", ((None), ({"allow_beta": True})) - ) - @pytest.mark.parametrize( - "avail_res,entitled_res,uf_entitled,uf_status", - ( - ( # Empty lists means UNENTITLED and UNAVAILABLE - [], - [], - status.ContractStatus.UNENTITLED.value, - 
status.UserFacingStatus.UNAVAILABLE.value, - ), - ( # available == False means UNAVAILABLE - [{"name": "livepatch", "available": False}], - [], - status.ContractStatus.UNENTITLED.value, - status.UserFacingStatus.UNAVAILABLE.value, - ), - ( # available == True but unentitled means UNAVAILABLE - [{"name": "livepatch", "available": True}], - [], - status.ContractStatus.UNENTITLED.value, - status.UserFacingStatus.UNAVAILABLE.value, + None, + None, + "", + None, + None, + None, + None, + "apt_https", + "apt_http", ), - ( # available == False and entitled means INAPPLICABLE - [{"name": "livepatch", "available": False}], - [{"type": "livepatch", "entitled": True}], - status.ContractStatus.ENTITLED.value, - status.UserFacingStatus.INAPPLICABLE.value, + ( + "http", + "https", + False, + None, + None, + True, + None, + None, + "", + "global_https", + "global_http", + None, + None, + None, + None, ), - ), - ) - @mock.patch( - M_PATH + "livepatch.LivepatchEntitlement.application_status", - return_value=(status.ApplicationStatus.DISABLED, ""), - ) - @mock.patch("uaclient.contract.get_available_resources") - @mock.patch("uaclient.config.os.getuid", return_value=0) - def test_root_attached( - self, - _m_getuid, - m_get_avail_resources, - _m_livepatch_status, - _m_should_reboot, - _m_remove_notice, - avail_res, - entitled_res, - uf_entitled, - uf_status, - features_override, - show_beta, - FakeConfig, - ): - """Test we get the correct status dict when attached with basic conf""" - resource_names = [resource["name"] for resource in avail_res] - default_entitled = status.ContractStatus.UNENTITLED.value - default_status = status.UserFacingStatus.UNAVAILABLE.value - token = { - "availableResources": [], - "machineTokenInfo": { - "machineId": "test_machine_id", - "accountInfo": { - "id": "acct-1", - "name": "test_account", - "createdAt": "2019-06-14T06:45:50Z", - "externalAccountIDs": [{"IDs": ["id1"], "Origin": "AWS"}], - }, - "contractInfo": { - "id": "cid", - "name": "test_contract", - "createdAt": "2020-05-08T19:02:26Z", - "effectiveFrom": "2000-05-08T19:02:26Z", - "effectiveTo": "2040-05-08T19:02:26Z", - "resourceEntitlements": entitled_res, - "products": ["free"], - }, - }, - } - - available_resource_response = [ - { - "name": cls.name, - "available": bool( - {"name": cls.name, "available": True} in avail_res - ), - } - for cls in entitlements.ENTITLEMENT_CLASSES - ] - if avail_res: - token["availableResources"] = available_resource_response - else: - m_get_avail_resources.return_value = available_resource_response - - cfg = FakeConfig.for_attached_machine(machine_token=token) - if features_override: - cfg.override_features(features_override) - - expected_services = [ - { - "description": cls.description, - "entitled": uf_entitled - if cls.name in resource_names - else default_entitled, - "name": cls.name, - "status": uf_status - if cls.name in resource_names - else default_status, - "status_details": mock.ANY, - "description_override": None, - "available": mock.ANY, - "blocked_by": [], - } - for cls in entitlements.ENTITLEMENT_CLASSES - if not self.check_beta(cls, show_beta, cfg) - ] - expected = copy.deepcopy(DEFAULT_STATUS) - expected.update( - { - "version": version.get_version(features=cfg.features), - "attached": True, - "machine_id": "test_machine_id", - "services": expected_services, - "effective": datetime.datetime( - 2000, 5, 8, 19, 2, 26, tzinfo=datetime.timezone.utc - ), - "expires": datetime.datetime( - 2040, 5, 8, 19, 2, 26, tzinfo=datetime.timezone.utc - ), - "contract": { - "name": 
"test_contract", - "id": "cid", - "created_at": datetime.datetime( - 2020, 5, 8, 19, 2, 26, tzinfo=datetime.timezone.utc - ), - "products": ["free"], - "tech_support_level": "n/a", - }, - "account": { - "name": "test_account", - "id": "acct-1", - "created_at": datetime.datetime( - 2019, 6, 14, 6, 45, 50, tzinfo=datetime.timezone.utc - ), - "external_account_ids": [ - {"IDs": ["id1"], "Origin": "AWS"} - ], - }, - } - ) - with mock.patch( - "uaclient.config.UAConfig._get_config_status" - ) as m_get_cfg_status: - m_get_cfg_status.return_value = DEFAULT_CFG_STATUS - assert expected == cfg.status(show_beta=show_beta) - if avail_res: - assert m_get_avail_resources.call_count == 0 - else: - assert m_get_avail_resources.call_count == 1 - # cfg.status() idempotent - with mock.patch( - "uaclient.config.UAConfig._get_config_status" - ) as m_get_cfg_status: - m_get_cfg_status.return_value = DEFAULT_CFG_STATUS - assert expected == cfg.status(show_beta=show_beta) - - @mock.patch("uaclient.contract.get_available_resources") - @mock.patch("uaclient.config.os.getuid") - def test_nonroot_unattached_is_same_as_unattached_root( - self, - m_getuid, - m_get_available_resources, - _m_should_reboot, - _m_remove_notice, - FakeConfig, - ): - m_get_available_resources.return_value = [ - {"name": "esm-infra", "available": True} - ] - m_getuid.return_value = 1000 - cfg = FakeConfig() - nonroot_status = cfg.status() - - m_getuid.return_value = 0 - root_unattached_status = cfg.status() - - assert root_unattached_status == nonroot_status - - @mock.patch("uaclient.contract.get_available_resources") - @mock.patch("uaclient.config.os.getuid") - def test_root_followed_by_nonroot( - self, - m_getuid, - m_get_available_resources, - _m_should_reboot, - _m_remove_notice, - tmpdir, - FakeConfig, - ): - """Ensure that non-root run after root returns data""" - cfg = UAConfig({"data_dir": tmpdir.strpath}) - - # Run as root - m_getuid.return_value = 0 - before = copy.deepcopy(cfg.status()) - - # Replicate an attach by modifying the underlying config and confirm - # that we see different status - other_cfg = FakeConfig.for_attached_machine() - cfg.write_cache("accounts", {"accounts": other_cfg.accounts}) - cfg.write_cache("machine-token", other_cfg.machine_token) - assert cfg._attached_status() != before - - # Run as regular user and confirm that we see the result from - # last time we called .status() - m_getuid.return_value = 1000 - after = cfg.status() - - assert before == after - - @mock.patch("uaclient.contract.get_available_resources", return_value=[]) - @mock.patch("uaclient.config.os.getuid", return_value=0) - def test_cache_file_is_written_world_readable( - self, - _m_getuid, - _m_get_available_resources, - _m_should_reboot, - m_remove_notice, - tmpdir, - ): - cfg = UAConfig({"data_dir": tmpdir.strpath}) - cfg.status() - - assert 0o644 == stat.S_IMODE( - os.lstat(cfg.data_path("status-cache")).st_mode - ) - - expected_calls = [ - mock.call( + ( + "http", + "https", + True, + None, + None, + True, + None, + None, "", - messages.ENABLE_REBOOT_REQUIRED_TMPL.format( - operation="fix operation" - ), - ) - ] - - assert expected_calls == m_remove_notice.call_args_list - - @pytest.mark.parametrize("show_beta", (True, False)) - @pytest.mark.parametrize( - "features_override", ((None), ({"allow_beta": False})) - ) - @pytest.mark.parametrize( - "entitlements", - ( - [], - [ - { - "type": "support", - "entitled": True, - "affordances": {"supportLevel": "anything"}, - } - ], - ), - ) - @mock.patch("uaclient.config.os.getuid", 
return_value=0) - @mock.patch( - M_PATH + "fips.FIPSCommonEntitlement.application_status", - return_value=(status.ApplicationStatus.DISABLED, ""), - ) - @mock.patch( - M_PATH + "livepatch.LivepatchEntitlement.application_status", - return_value=(status.ApplicationStatus.DISABLED, ""), - ) - @mock.patch(M_PATH + "livepatch.LivepatchEntitlement.user_facing_status") - @mock.patch(M_PATH + "livepatch.LivepatchEntitlement.contract_status") - @mock.patch(M_PATH + "esm.ESMAppsEntitlement.user_facing_status") - @mock.patch(M_PATH + "esm.ESMAppsEntitlement.contract_status") - @mock.patch(M_PATH + "repo.RepoEntitlement.user_facing_status") - @mock.patch(M_PATH + "repo.RepoEntitlement.contract_status") - def test_attached_reports_contract_and_service_status( - self, - m_repo_contract_status, - m_repo_uf_status, - m_esm_contract_status, - m_esm_uf_status, - m_livepatch_contract_status, - m_livepatch_uf_status, - _m_livepatch_status, - _m_fips_status, - _m_getuid, - _m_should_reboot, - m_remove_notice, - entitlements, - features_override, - show_beta, - FakeConfig, - ): - """When attached, return contract and service user-facing status.""" - m_repo_contract_status.return_value = status.ContractStatus.ENTITLED - m_repo_uf_status.return_value = ( - status.UserFacingStatus.INAPPLICABLE, - messages.NamedMessage("test-code", "repo details"), - ) - m_livepatch_contract_status.return_value = ( - status.ContractStatus.ENTITLED - ) - m_livepatch_uf_status.return_value = ( - status.UserFacingStatus.ACTIVE, - messages.NamedMessage("test-code", "livepatch details"), - ) - m_esm_contract_status.return_value = status.ContractStatus.ENTITLED - m_esm_uf_status.return_value = ( - status.UserFacingStatus.ACTIVE, - messages.NamedMessage("test-code", "esm-apps details"), - ) - token = { - "availableResources": ALL_RESOURCES_AVAILABLE, - "machineTokenInfo": { - "machineId": "test_machine_id", - "accountInfo": { - "id": "1", - "name": "accountname", - "createdAt": "2019-06-14T06:45:50Z", - "externalAccountIDs": [{"IDs": ["id1"], "Origin": "AWS"}], - }, - "contractInfo": { - "id": "contract-1", - "name": "contractname", - "createdAt": "2020-05-08T19:02:26Z", - "resourceEntitlements": entitlements, - "products": ["free"], - }, - }, - } - cfg = FakeConfig.for_attached_machine( - account_name="accountname", machine_token=token - ) - if features_override: - cfg.override_features(features_override) - if not entitlements: - support_level = status.UserFacingStatus.INAPPLICABLE.value - else: - support_level = entitlements[0]["affordances"]["supportLevel"] - expected = copy.deepcopy(DEFAULT_STATUS) - expected.update( - { - "version": version.get_version(features=cfg.features), - "attached": True, - "machine_id": "test_machine_id", - "contract": { - "name": "contractname", - "id": "contract-1", - "created_at": datetime.datetime( - 2020, 5, 8, 19, 2, 26, tzinfo=datetime.timezone.utc - ), - "products": ["free"], - "tech_support_level": support_level, - }, - "account": { - "name": "accountname", - "id": "1", - "created_at": datetime.datetime( - 2019, 6, 14, 6, 45, 50, tzinfo=datetime.timezone.utc - ), - "external_account_ids": [ - {"IDs": ["id1"], "Origin": "AWS"} - ], - }, - } - ) - for cls in ENTITLEMENT_CLASSES: - if cls.name == "livepatch": - expected_status = status.UserFacingStatus.ACTIVE.value - details = "livepatch details" - elif cls.name == "esm-apps": - expected_status = status.UserFacingStatus.ACTIVE.value - details = "esm-apps details" - else: - expected_status = status.UserFacingStatus.INAPPLICABLE.value - details = "repo 
details" - - if self.check_beta(cls, show_beta, cfg, expected_status): - continue - - expected["services"].append( - { - "name": cls.name, - "description": cls.description, - "entitled": status.ContractStatus.ENTITLED.value, - "status": expected_status, - "status_details": details, - "description_override": None, - "available": mock.ANY, - "blocked_by": [], - } - ) - with mock.patch( - "uaclient.config.UAConfig._get_config_status" - ) as m_get_cfg_status: - m_get_cfg_status.return_value = DEFAULT_CFG_STATUS - assert expected == cfg.status(show_beta=show_beta) - - assert len(ENTITLEMENT_CLASSES) - 2 == m_repo_uf_status.call_count - assert 1 == m_livepatch_uf_status.call_count - - expected_calls = [ - mock.call( + None, + None, + "ua_https", + "ua_http", + None, + None, + ), + ( + None, + None, + True, + None, + None, + True, + None, + None, "", - messages.ENABLE_REBOOT_REQUIRED_TMPL.format( - operation="fix operation" - ), - ) - ] - - assert expected_calls == m_remove_notice.call_args_list - - @mock.patch("uaclient.contract.get_available_resources") - @mock.patch("uaclient.config.os.getuid") - def test_expires_handled_appropriately( - self, - m_getuid, - _m_get_available_resources, - _m_should_reboot, - _m_remove_notice, - FakeConfig, - ): - token = { - "availableResources": ALL_RESOURCES_AVAILABLE, - "machineTokenInfo": { - "machineId": "test_machine_id", - "accountInfo": {"id": "1", "name": "accountname"}, - "contractInfo": { - "name": "contractname", - "id": "contract-1", - "effectiveTo": "2020-07-18T00:00:00Z", - "createdAt": "2020-05-08T19:02:26Z", - "resourceEntitlements": [], - "products": ["free"], - }, - }, - } - cfg = FakeConfig.for_attached_machine( - account_name="accountname", machine_token=token - ) - - # Test that root's status works as expected (including the cache write) - m_getuid.return_value = 0 - expected_dt = datetime.datetime( - 2020, 7, 18, 0, 0, 0, tzinfo=datetime.timezone.utc - ) - assert expected_dt == cfg.status()["expires"] - - # Test that the read from the status cache work properly for non-root - # users - m_getuid.return_value = 1000 - assert expected_dt == cfg.status()["expires"] - - @mock.patch("uaclient.config.os.getuid") - def test_nonroot_user_uses_cache_and_updates_if_available( - self, m_getuid, _m_should_reboot, m_remove_notice, tmpdir - ): - m_getuid.return_value = 1000 - - expected_status = {"pass": True} - cfg = UAConfig({"data_dir": tmpdir.strpath}) - cfg.write_cache("marker-reboot-cmds", "") # To indicate a reboot reqd - cfg.write_cache("status-cache", expected_status) - - # Even non-root users can update execution_status details - details = messages.ENABLE_REBOOT_REQUIRED_TMPL.format( - operation="configuration changes" - ) - reboot_required = UserFacingConfigStatus.REBOOTREQUIRED.value - expected_status.update( - { - "execution_status": reboot_required, - "execution_details": details, - "notices": [], - "config_path": None, - "config": {"data_dir": mock.ANY}, - } - ) - - assert expected_status == cfg.status() - - -ATTACHED_SERVICE_STATUS_PARAMETERS = [ - # ENTITLED => display the given user-facing status - (ContractStatus.ENTITLED, UserFacingStatus.ACTIVE, False, "enabled"), - (ContractStatus.ENTITLED, UserFacingStatus.INACTIVE, False, "disabled"), - (ContractStatus.ENTITLED, UserFacingStatus.INAPPLICABLE, False, "n/a"), - (ContractStatus.ENTITLED, UserFacingStatus.UNAVAILABLE, False, "—"), - # UNENTITLED => UNAVAILABLE - (ContractStatus.UNENTITLED, UserFacingStatus.ACTIVE, False, "—"), - (ContractStatus.UNENTITLED, UserFacingStatus.INACTIVE, 
False, "—"), - (ContractStatus.UNENTITLED, UserFacingStatus.INAPPLICABLE, False, "—"), - (ContractStatus.UNENTITLED, UserFacingStatus.UNAVAILABLE, [], "—"), - # ENTITLED but in unavailable_resources => INAPPLICABLE - (ContractStatus.ENTITLED, UserFacingStatus.ACTIVE, True, "n/a"), - (ContractStatus.ENTITLED, UserFacingStatus.INACTIVE, True, "n/a"), - (ContractStatus.ENTITLED, UserFacingStatus.INAPPLICABLE, True, "n/a"), - (ContractStatus.ENTITLED, UserFacingStatus.UNAVAILABLE, True, "n/a"), - # UNENTITLED and in unavailable_resources => UNAVAILABLE - (ContractStatus.UNENTITLED, UserFacingStatus.ACTIVE, True, "—"), - (ContractStatus.UNENTITLED, UserFacingStatus.INACTIVE, True, "—"), - (ContractStatus.UNENTITLED, UserFacingStatus.INAPPLICABLE, True, "—"), - (ContractStatus.UNENTITLED, UserFacingStatus.UNAVAILABLE, True, "—"), -] - - -class TestAttachedServiceStatus: - @pytest.mark.parametrize( - "contract_status,uf_status,in_inapplicable_resources,expected_status", - ATTACHED_SERVICE_STATUS_PARAMETERS, - ) - def test_status( - self, - contract_status, - uf_status, - in_inapplicable_resources, - expected_status, - FakeConfig, - ): - ent = mock.MagicMock() - ent.name = "test_entitlement" - ent.contract_status.return_value = contract_status - ent.user_facing_status.return_value = ( - uf_status, - messages.NamedMessage("test-code", ""), - ) - - unavailable_resources = ( - {ent.name: ""} if in_inapplicable_resources else {} - ) - ret = FakeConfig()._attached_service_status(ent, unavailable_resources) - - assert expected_status == ret["status"] - - @pytest.mark.parametrize( - "blocking_incompatible_services, expected_blocked_by", - ( - ([], []), + "global_https", + "global_http", + None, + None, + "apt_https", + "apt_http", + ), ( - [ - IncompatibleService( - FIPSEntitlement, messages.NamedMessage("code", "msg") - ) - ], - [{"name": "fips", "reason": "msg", "reason_code": "code"}], + None, + None, + True, + "one", + None, + True, + None, + None, + "snap", + "global_https", + "global_http", + "ua_https", + "ua_http", + "apt_https", + "apt_http", ), ( - [ - IncompatibleService( - FIPSEntitlement, messages.NamedMessage("code", "msg") - ), - IncompatibleService( - ROSEntitlement, messages.NamedMessage("code2", "msg2") - ), - ], - [ - {"name": "fips", "reason": "msg", "reason_code": "code"}, - {"name": "ros", "reason": "msg2", "reason_code": "code2"}, - ], + None, + None, + True, + "one", + "two", + True, + None, + None, + "snap", + None, + "global_http", + None, + None, + None, + "apt_http", ), - ), - ) - def test_blocked_by( - self, - blocking_incompatible_services, - expected_blocked_by, - tmpdir, - FakeConfig, - ): - cfg = UAConfig({"data_dir": tmpdir.strpath}) - ent = ConcreteTestEntitlement( - blocking_incompatible_services=blocking_incompatible_services - ) - service_status = cfg._attached_service_status(ent, []) - assert service_status["blocked_by"] == expected_blocked_by - - -class TestProcessConfig: - @pytest.mark.parametrize( - "http_proxy, https_proxy, snap_is_installed, snap_http_val, " - "snap_https_val, livepatch_enabled, livepatch_http_val, " - "livepatch_https_val, snap_livepatch_msg", - [ - ("http", "https", False, None, None, False, None, None, ""), - ("http", "https", True, None, None, False, None, None, ""), - ("http", "https", False, None, None, True, None, None, ""), - ("http", "https", True, None, None, True, None, None, ""), - (None, None, True, None, None, True, None, None, ""), - (None, None, True, "one", None, True, None, None, "snap"), - (None, None, True, "one", "two", 
True, None, None, "snap"), ( None, None, @@ -1398,6 +814,12 @@ "three", None, "snap, livepatch", + "global_htttps", + None, + "ua_https", + None, + "apt_https", + None, ), ( None, @@ -1409,6 +831,12 @@ "three", "four", "snap, livepatch", + "global_https", + None, + None, + "ua_http", + None, + None, ), ( None, @@ -1420,6 +848,12 @@ "three", "four", "livepatch", + None, + None, + None, + None, + None, + None, ), ], ) @@ -1433,8 +867,10 @@ @mock.patch("uaclient.snap.configure_snap_proxy") @mock.patch("uaclient.snap.is_installed") @mock.patch("uaclient.apt.setup_apt_proxy") + @mock.patch("uaclient.config.UAConfig.write_cfg") def test_process_config( self, + m_write_cfg, m_apt_configure_proxy, m_snap_is_installed, m_snap_configure_proxy, @@ -1452,12 +888,19 @@ livepatch_http_val, livepatch_https_val, snap_livepatch_msg, + global_https, + global_http, + ua_https, + ua_http, + apt_https, + apt_http, capsys, + tmpdir, ): m_snap_is_installed.return_value = snap_is_installed m_snap_get_config_option.side_effect = [snap_http_val, snap_https_val] m_livepatch_status.return_value = ( - (status.ApplicationStatus.ENABLED, None) + (ApplicationStatus.ENABLED, None) if livepatch_enabled else (None, None) ) @@ -1468,53 +911,107 @@ cfg = UAConfig( { "ua_config": { - "apt_http_proxy": "apt_http", - "apt_https_proxy": "apt_https", + "apt_http_proxy": apt_http, + "apt_https_proxy": apt_https, + "global_apt_https_proxy": global_https, + "global_apt_http_proxy": global_http, + "ua_apt_https_proxy": ua_https, + "ua_apt_http_proxy": ua_http, "http_proxy": http_proxy, "https_proxy": https_proxy, "update_messaging_timer": 21600, "update_status_timer": 43200, "metering_timer": 0, - } + }, + "data_dir": tmpdir.strpath, } ) - cfg.process_config() - - assert [ - mock.call("http", "apt_http", util.PROXY_VALIDATION_APT_HTTP_URL), - mock.call( - "https", "apt_https", util.PROXY_VALIDATION_APT_HTTPS_URL - ), - mock.call("http", http_proxy, util.PROXY_VALIDATION_SNAP_HTTP_URL), - mock.call( - "https", https_proxy, util.PROXY_VALIDATION_SNAP_HTTPS_URL - ), - ] == m_validate_proxy.call_args_list - - assert [ - mock.call("apt_http", "apt_https") - ] == m_apt_configure_proxy.call_args_list + if global_https is None and apt_https is not None: + global_https = apt_https + if global_http is None and apt_http is not None: + global_http = apt_http + + exc = False + if global_https or global_http: + if ua_https or ua_http: + exc = True + with pytest.raises( + exceptions.UserFacingError, + match=messages.ERROR_PROXY_CONFIGURATION, + ): + cfg.process_config() + if exc is False: + cfg.process_config() - if snap_is_installed: assert [ - mock.call(http_proxy, https_proxy) - ] == m_snap_configure_proxy.call_args_list + mock.call( + "http", global_http, util.PROXY_VALIDATION_APT_HTTP_URL + ), + mock.call( + "https", global_https, util.PROXY_VALIDATION_APT_HTTPS_URL + ), + mock.call("http", ua_http, util.PROXY_VALIDATION_APT_HTTP_URL), + mock.call( + "https", ua_https, util.PROXY_VALIDATION_APT_HTTPS_URL + ), + mock.call( + "http", http_proxy, util.PROXY_VALIDATION_SNAP_HTTP_URL + ), + mock.call( + "https", https_proxy, util.PROXY_VALIDATION_SNAP_HTTPS_URL + ), + ] == m_validate_proxy.call_args_list - if livepatch_enabled: - assert [ - mock.call(http_proxy, https_proxy) - ] == m_livepatch_configure_proxy.call_args_list + if global_http or global_https: + assert [ + mock.call( + global_http, global_https, apt.AptProxyScope.GLOBAL + ) + ] == m_apt_configure_proxy.call_args_list + elif ua_http or ua_https: + assert [ + mock.call(ua_http, ua_https, 
apt.AptProxyScope.UACLIENT) + ] == m_apt_configure_proxy.call_args_list + else: + assert [] == m_apt_configure_proxy.call_args_list - expected_out = "" - if snap_livepatch_msg: - expected_out = messages.PROXY_DETECTED_BUT_NOT_CONFIGURED.format( # noqa: E501 - services=snap_livepatch_msg - ) + if snap_is_installed: + assert [ + mock.call(http_proxy, https_proxy) + ] == m_snap_configure_proxy.call_args_list + + if livepatch_enabled: + assert [ + mock.call(http_proxy, https_proxy) + ] == m_livepatch_configure_proxy.call_args_list + + expected_out = "" + if snap_livepatch_msg: + expected_out = messages.PROXY_DETECTED_BUT_NOT_CONFIGURED.format( # noqa: E501 + services=snap_livepatch_msg + ) - out, err = capsys.readouterr() - assert expected_out.strip() == out.strip() - assert "" == err + out, err = capsys.readouterr() + expected_out = """ + Using deprecated "{apt}" config field. + Please migrate to using "{global_}" + """ + if apt_http and not global_http: + assert ( + expected_out.format( + apt=apt_http, global_=global_http + ).strip() + == out.strip() + ) + if apt_https and not global_https: + assert ( + expected_out.format( + apt=apt_https, global_=global_https + ).strip() + == out.strip() + ) + assert "" == err def test_process_config_errors_for_wrong_timers(self): cfg = UAConfig( @@ -1542,7 +1039,7 @@ ): cwd = os.getcwd() with mock.patch.dict("uaclient.config.os.environ", values={}): - config = parse_config() + config, _ = parse_config() expected_calls = [ mock.call("{}/uaclient.conf".format(cwd)), mock.call("/etc/ubuntu-advantage/uaclient.conf"), @@ -1554,43 +1051,33 @@ "data_dir": "/var/lib/ubuntu-advantage", "log_file": "/var/log/ubuntu-advantage.log", "timer_log_file": "/var/log/ubuntu-advantage-timer.log", - "license_check_log_file": "/var/log/ubuntu-advantage-license-check.log", # noqa: E501 + "daemon_log_file": "/var/log/ubuntu-advantage-daemon.log", # noqa: E501 "log_level": "INFO", } assert expected_default_config == config - @pytest.mark.parametrize("caplog_text", [logging.WARNING], indirect=True) @pytest.mark.parametrize( - "config_dict,warnings", + "config_dict,expected_invalid_keys", ( ({"contract_url": "http://abc", "security_url": "http:xyz"}, []), ( {"contract_urs": "http://abc", "security_url": "http:xyz"}, - [ - "Ignoring invalid uaclient.conf key:" - " contract_urs=http://abc\n" - ], + ["contract_urs"], ), ), ) - def test_parse_config_warns_and_ignores_invalid_config( - self, config_dict, warnings, caplog_text, tmpdir + def test_parse_config_returns_invalid_keys( + self, config_dict, expected_invalid_keys, tmpdir ): config_file = tmpdir.join("uaclient.conf") config_file.write(yaml.dump(config_dict)) env_vars = {"UA_CONFIG_FILE": config_file.strpath} with mock.patch.dict("uaclient.config.os.environ", values=env_vars): - cfg = parse_config(config_file.strpath) - expected = copy.deepcopy(CONFIG_DEFAULTS) + cfg, invalid_keys = parse_config(config_file.strpath) + assert set(expected_invalid_keys) == invalid_keys for key, value in config_dict.items(): if key in VALID_UA_CONFIG_KEYS: - expected[key] = config_dict[key] - warn_logs = caplog_text() - for warning in warnings: - assert warning in warn_logs - if not warnings: - assert "Ignoring invalid uaclient.conf key" not in warn_logs - assert expected == cfg + assert config_dict[key] == cfg[key] @pytest.mark.parametrize( "envvar_name,envvar_val,field,expected_val", @@ -1632,7 +1119,7 @@ ): user_values = {envvar_name: envvar_val} with mock.patch.dict("uaclient.config.os.environ", values=user_values): - config = parse_config() + 
config, _ = parse_config() assert expected_val == config[field] @mock.patch("uaclient.config.os.path.exists", return_value=False) @@ -1642,7 +1129,7 @@ "UA_FEATURES_A_B_C": "ABC_VAL", } with mock.patch.dict("uaclient.config.os.environ", values=user_values): - config = parse_config() + config, _ = parse_config() expected_config = { "features": {"a_b_c": "ABC_VAL", "x_y_z": "XYZ_VAL"} } @@ -1674,7 +1161,7 @@ user_values = {"UA_FEATURES_TEST": "test.yaml"} with mock.patch.dict("uaclient.config.os.environ", values=user_values): - cfg = parse_config() + cfg, _ = parse_config() assert {"test": True, "foo": "bar"} == cfg["features"]["test"] diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_contract.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_contract.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_contract.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_contract.py 2022-05-18 19:44:15.000000000 +0000 @@ -15,6 +15,7 @@ UAContractClient, get_available_resources, get_contract_information, + is_contract_changed, process_entitlement_delta, request_updated_contract, ) @@ -126,6 +127,53 @@ mock.call("/v1/contracts/cId/context/machines/machineId", **params) ] + @pytest.mark.parametrize("machine_id_param", (("attach-machine-id"))) + @pytest.mark.parametrize( + "machine_id_response", (("contract-machine-id"), None) + ) + @pytest.mark.parametrize( + "detach,expected_http_method", + ((None, "POST"), (False, "POST"), (True, "DELETE")), + ) + @pytest.mark.parametrize("activity_id", ((None), ("test-acid"))) + @mock.patch("uaclient.contract.util.get_platform_info") + @mock.patch.object(UAContractClient, "_get_platform_data") + def test_get_updated_contract_info( + self, + m_platform_data, + get_platform_info, + get_machine_id, + request_url, + detach, + expected_http_method, + machine_id_response, + machine_id_param, + activity_id, + FakeConfig, + ): + def fake_platform_data(machine_id): + machine_id = "machine-id" if not machine_id else machine_id + return {"machineId": machine_id} + + m_platform_data.side_effect = fake_platform_data + get_platform_info.return_value = {"arch": "arch", "kernel": "kernel"} + get_machine_id.return_value = "machineId" + machine_token = {"machineTokenInfo": {}} + if machine_id_response: + machine_token["machineTokenInfo"][ + "machineId" + ] = machine_id_response + request_url.return_value = (machine_token, {}) + kwargs = { + "machine_token": "mToken", + "contract_id": "cId", + "machine_id": machine_id_param, + } + cfg = FakeConfig.for_attached_machine() + client = UAContractClient(cfg) + resp = client.get_updated_contract_info(**kwargs) + assert resp == machine_token + def test_request_resource_machine_access( self, get_machine_id, request_url, FakeConfig ): @@ -293,7 +341,7 @@ class TestProcessEntitlementDeltas: - def test_error_on_missing_entitlement_type(self): + def test_error_on_missing_entitlement_type(self, FakeConfig): """Raise an error when neither dict contains entitlement type.""" new_access = {"entitlement": {"something": "non-empty"}} error_msg = ( @@ -301,18 +349,22 @@ " {{}} {}".format(new_access) ) with pytest.raises(exceptions.UserFacingError) as exc: - process_entitlement_delta({}, new_access) + process_entitlement_delta( + cfg=FakeConfig(), orig_access={}, new_access=new_access + ) assert error_msg == str(exc.value.msg) - def test_no_delta_on_equal_dicts(self): + def test_no_delta_on_equal_dicts(self, FakeConfig): """No deltas are reported or processed when dicts are 
equal.""" assert ({}, False) == process_entitlement_delta( - {"entitlement": {"no": "diff"}}, {"entitlement": {"no": "diff"}} + cfg=FakeConfig(), + orig_access={"entitlement": {"no": "diff"}}, + new_access={"entitlement": {"no": "diff"}}, ) @mock.patch(M_REPO_PATH + "process_contract_deltas") def test_deltas_handled_by_entitlement_process_contract_deltas( - self, m_process_contract_deltas + self, m_process_contract_deltas, FakeConfig ): """Call entitlement.process_contract_deltas to handle any deltas.""" m_process_contract_deltas.return_value = True @@ -321,7 +373,9 @@ new_access["entitlement"]["newkey"] = "newvalue" expected = {"entitlement": {"newkey": "newvalue"}} assert (expected, True) == process_entitlement_delta( - original_access, new_access + cfg=FakeConfig(), + orig_access=original_access, + new_access=new_access, ) expected_calls = [ mock.call(original_access, expected, allow_enable=False) @@ -329,12 +383,16 @@ assert expected_calls == m_process_contract_deltas.call_args_list @mock.patch(M_REPO_PATH + "process_contract_deltas") - def test_full_delta_on_empty_orig_dict(self, m_process_contract_deltas): + def test_full_delta_on_empty_orig_dict( + self, m_process_contract_deltas, FakeConfig + ): """Process and report full deltas on empty original access dict.""" # Limit delta processing logic to handle attached state-A to state-B # Fresh installs will have empty/unset new_access = {"entitlement": {"type": "esm-infra", "other": "val2"}} - actual, _ = process_entitlement_delta({}, new_access) + actual, _ = process_entitlement_delta( + cfg=FakeConfig(), orig_access={}, new_access=new_access + ) assert new_access == actual expected_calls = [mock.call({}, new_access, allow_enable=False)] assert expected_calls == m_process_contract_deltas.call_args_list @@ -345,7 +403,7 @@ ) @mock.patch(M_REPO_PATH + "process_contract_deltas") def test_overrides_applied_before_comparison( - self, m_process_contract_deltas, _ + self, m_process_contract_deltas, _, FakeConfig ): old_access = {"entitlement": {"type": "esm", "some_key": "some_value"}} new_access = { @@ -356,7 +414,9 @@ } } - process_entitlement_delta(old_access, new_access) + process_entitlement_delta( + cfg=FakeConfig(), orig_access=old_access, new_access=new_access + ) assert 0 == m_process_contract_deltas.call_count @@ -738,8 +798,11 @@ # of dict key ordering. 
process_calls = [ mock.call( - {"entitlement": {"entitled": True, "type": "ent1"}}, - { + cfg=cfg, + orig_access={ + "entitlement": {"entitled": True, "type": "ent1"} + }, + new_access={ "entitlement": { "entitled": True, "type": "ent1", @@ -750,10 +813,61 @@ series_overrides=True, ), mock.call( - {"entitlement": {"entitled": False, "type": "ent2"}}, - {"entitlement": {"entitled": False, "type": "ent2"}}, + cfg=cfg, + orig_access={ + "entitlement": {"entitled": False, "type": "ent2"} + }, + new_access={ + "entitlement": {"entitled": False, "type": "ent2"} + }, allow_enable=False, series_overrides=True, ), ] assert process_calls == process_entitlement_delta.call_args_list + + +@mock.patch("uaclient.contract.UAContractClient.get_updated_contract_info") +class TestContractChanged: + @pytest.mark.parametrize("has_contract_expired", (False, True)) + def test_contract_change_with_expiry( + self, get_updated_contract_info, has_contract_expired, FakeConfig + ): + if has_contract_expired: + expiry_date = "2041-05-08T19:02:26Z" + ret_val = True + else: + expiry_date = "2040-05-08T19:02:26Z" + ret_val = False + get_updated_contract_info.return_value = { + "machineTokenInfo": { + "contractInfo": { + "effectiveTo": expiry_date, + }, + }, + } + cfg = FakeConfig().for_attached_machine() + assert is_contract_changed(cfg) == ret_val + + @pytest.mark.parametrize("has_contract_changed", (False, True)) + def test_contract_change_with_entitlements( + self, get_updated_contract_info, has_contract_changed, FakeConfig + ): + if has_contract_changed: + resourceEntitlements = [{"type": "token1", "entitled": True}] + resourceTokens = [{"token": "token1", "type": "resource1"}] + else: + resourceTokens = [] + resourceEntitlements = [] + get_updated_contract_info.return_value = { + "machineTokenInfo": { + "machineId": "test_machine_id", + "resourceTokens": resourceTokens, + "contractInfo": { + "effectiveTo": "2040-05-08T19:02:26Z", + "resourceEntitlements": resourceEntitlements, + }, + }, + } + cfg = FakeConfig().for_attached_machine() + assert is_contract_changed(cfg) == has_contract_changed diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_daemon.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_daemon.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_daemon.py 1970-01-01 00:00:00.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_daemon.py 2022-05-18 19:44:15.000000000 +0000 @@ -0,0 +1,510 @@ +import mock +import pytest + +from uaclient import exceptions, messages +from uaclient.clouds.aws import UAAutoAttachAWSInstance +from uaclient.clouds.gcp import UAAutoAttachGCPInstance +from uaclient.daemon import ( + attempt_auto_attach, + poll_for_pro_license, + start, + stop, +) + +M_PATH = "uaclient.daemon." 
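The new test_daemon.py file below pins down the public surface of the daemon helpers. As a reading aid, here is a hedged sketch of the behaviour that TestStart/TestStop (immediately below) exercise: `start()` and `stop()` shell out to systemctl for `ubuntu-advantage.service` and downgrade failures to a logged warning instead of raising. This is a reconstruction from the tests only, not the actual `uaclient/daemon.py` source; `SERVICE` and the logger name are placeholders.

```python
# Sketch reconstructed from TestStart/TestStop; not the real uaclient.daemon code.
import logging
import subprocess

LOG = logging.getLogger("sketch.daemon")
SERVICE = "ubuntu-advantage.service"  # service name asserted in the tests


def start() -> None:
    try:
        subprocess.run(["systemctl", "start", SERVICE], check=True)
    except subprocess.CalledProcessError as e:
        LOG.warning(e)  # tests assert the error is logged, not re-raised


def stop() -> None:
    try:
        subprocess.run(["systemctl", "stop", SERVICE], check=True)
    except subprocess.CalledProcessError as e:
        LOG.warning(e)
```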
+ + +@mock.patch(M_PATH + "util.subp") +class TestStart: + def test_start_success(self, m_subp): + start() + assert [ + mock.call(["systemctl", "start", "ubuntu-advantage.service"]) + ] == m_subp.call_args_list + + @mock.patch(M_PATH + "LOG.warning") + def test_start_warning(self, m_log_warning, m_subp): + err = exceptions.ProcessExecutionError("cmd") + m_subp.side_effect = err + start() + assert [ + mock.call(["systemctl", "start", "ubuntu-advantage.service"]) + ] == m_subp.call_args_list + assert [mock.call(err)] == m_log_warning.call_args_list + + +@mock.patch(M_PATH + "util.subp") +class TestStop: + def test_stop_success(self, m_subp): + stop() + assert [ + mock.call(["systemctl", "stop", "ubuntu-advantage.service"]) + ] == m_subp.call_args_list + + @mock.patch(M_PATH + "LOG.warning") + def test_stop_warning(self, m_log_warning, m_subp): + err = exceptions.ProcessExecutionError("cmd") + m_subp.side_effect = err + stop() + assert [ + mock.call(["systemctl", "stop", "ubuntu-advantage.service"]) + ] == m_subp.call_args_list + assert [mock.call(err)] == m_log_warning.call_args_list + + +time_mock_curr_value = 0 + + +def time_mock_side_effect_increment_by(increment): + def _time_mock_side_effect(): + global time_mock_curr_value + time_mock_curr_value += increment + return time_mock_curr_value + + return _time_mock_side_effect + + +@mock.patch(M_PATH + "LOG.debug") +@mock.patch(M_PATH + "actions.auto_attach") +@mock.patch(M_PATH + "lock.SpinLock") +class TestAttemptAutoAttach: + def test_success( + self, m_spin_lock, m_auto_attach, m_log_debug, FakeConfig + ): + cfg = FakeConfig() + cloud = mock.MagicMock() + + attempt_auto_attach(cfg, cloud) + + assert [ + mock.call(cfg=cfg, lock_holder="ua.daemon.attempt_auto_attach") + ] == m_spin_lock.call_args_list + assert [mock.call(cfg, cloud)] == m_auto_attach.call_args_list + assert [ + mock.call("Successful auto attach") + ] == m_log_debug.call_args_list + + @mock.patch(M_PATH + "LOG.error") + def test_lock_held( + self, m_log_error, m_spin_lock, m_auto_attach, m_log_debug, FakeConfig + ): + err = exceptions.LockHeldError("test", "test_holder", 1) + m_spin_lock.side_effect = err + cfg = FakeConfig() + cfg.add_notice = mock.MagicMock() + cloud = mock.MagicMock() + + attempt_auto_attach(cfg, cloud) + assert [ + mock.call(cfg=cfg, lock_holder="ua.daemon.attempt_auto_attach") + ] == m_spin_lock.call_args_list + assert [] == m_auto_attach.call_args_list + assert [mock.call(err)] == m_log_error.call_args_list + assert [ + mock.call( + "", + messages.NOTICE_DAEMON_AUTO_ATTACH_LOCK_HELD.format( + operation="test_holder" + ), + ) + ] == cfg.add_notice.call_args_list + assert [ + mock.call("Failed to auto attach") + ] == m_log_debug.call_args_list + + @mock.patch(M_PATH + "lock.clear_lock_file_if_present") + @mock.patch(M_PATH + "LOG.exception") + def test_exception( + self, + m_log_exception, + m_clear_lock, + m_spin_lock, + m_auto_attach, + m_log_debug, + FakeConfig, + ): + err = Exception() + m_auto_attach.side_effect = err + cfg = FakeConfig() + cfg.add_notice = mock.MagicMock() + cloud = mock.MagicMock() + + attempt_auto_attach(cfg, cloud) + + assert [ + mock.call(cfg=cfg, lock_holder="ua.daemon.attempt_auto_attach") + ] == m_spin_lock.call_args_list + assert [mock.call(cfg, cloud)] == m_auto_attach.call_args_list + assert [mock.call(err)] == m_log_exception.call_args_list + assert [ + mock.call("", messages.NOTICE_DAEMON_AUTO_ATTACH_FAILED) + ] == cfg.add_notice.call_args_list + assert [mock.call()] == m_clear_lock.call_args_list + assert [ + 
mock.call("Failed to auto attach") + ] == m_log_debug.call_args_list + + +@mock.patch(M_PATH + "LOG.debug") +@mock.patch(M_PATH + "time.sleep") +@mock.patch(M_PATH + "time.time") +@mock.patch(M_PATH + "attempt_auto_attach") +@mock.patch(M_PATH + "UAAutoAttachGCPInstance.is_pro_license_present") +@mock.patch(M_PATH + "UAAutoAttachGCPInstance.should_poll_for_pro_license") +@mock.patch(M_PATH + "cloud_instance_factory") +@mock.patch(M_PATH + "util.is_current_series_lts") +@mock.patch(M_PATH + "util.is_config_value_true") +class TestPollForProLicense: + @pytest.mark.parametrize( + "is_config_value_true," + "is_attached," + "is_current_series_lts," + "cloud_instance," + "should_poll," + "is_pro_license_present," + "cfg_poll_for_pro_licenses," + "expected_log_debug_calls," + "expected_is_pro_license_present_calls," + "expected_attempt_auto_attach_calls", + [ + ( + True, + None, + None, + None, + None, + None, + None, + [mock.call("Configured to not auto attach, shutting down")], + [], + [], + ), + ( + False, + True, + None, + None, + None, + None, + None, + [mock.call("Already attached, shutting down")], + [], + [], + ), + ( + False, + False, + False, + None, + None, + None, + None, + [mock.call("Not on LTS, shutting down")], + [], + [], + ), + ( + False, + False, + True, + exceptions.CloudFactoryError("none"), + None, + None, + None, + [mock.call("Not on cloud, shutting down")], + [], + [], + ), + ( + False, + False, + True, + UAAutoAttachAWSInstance(), + None, + None, + None, + [mock.call("Not on gcp, shutting down")], + [], + [], + ), + ( + False, + False, + True, + UAAutoAttachGCPInstance(), + False, + None, + None, + [mock.call("Not on supported instance, shutting down")], + [], + [], + ), + ( + False, + False, + True, + UAAutoAttachGCPInstance(), + True, + True, + None, + [], + [mock.call(wait_for_change=False)], + [mock.call(mock.ANY, mock.ANY)], + ), + ( + False, + False, + True, + UAAutoAttachGCPInstance(), + True, + exceptions.CancelProLicensePolling(), + None, + [mock.call("Cancelling polling")], + [mock.call(wait_for_change=False)], + [], + ), + ( + False, + False, + True, + UAAutoAttachGCPInstance(), + True, + False, + False, + [ + mock.call( + "Configured to not poll for pro license, shutting down" + ) + ], + [mock.call(wait_for_change=False)], + [], + ), + ( + False, + False, + True, + UAAutoAttachGCPInstance(), + True, + False, + False, + [ + mock.call( + "Configured to not poll for pro license, shutting down" + ) + ], + [mock.call(wait_for_change=False)], + [], + ), + ], + ) + def test_before_polling_loop_checks( + self, + m_is_config_value_true, + m_is_current_series_lts, + m_cloud_instance_factory, + m_should_poll, + m_is_pro_license_present, + m_attempt_auto_attach, + m_time, + m_sleep, + m_log_debug, + is_config_value_true, + is_attached, + is_current_series_lts, + cloud_instance, + should_poll, + is_pro_license_present, + cfg_poll_for_pro_licenses, + expected_log_debug_calls, + expected_is_pro_license_present_calls, + expected_attempt_auto_attach_calls, + FakeConfig, + ): + if is_attached: + cfg = FakeConfig.for_attached_machine() + else: + cfg = FakeConfig() + cfg.cfg.update( + {"ua_config": {"poll_for_pro_license": cfg_poll_for_pro_licenses}} + ) + + m_is_config_value_true.return_value = is_config_value_true + m_is_current_series_lts.return_value = is_current_series_lts + m_cloud_instance_factory.side_effect = [cloud_instance] + m_should_poll.return_value = should_poll + m_is_pro_license_present.side_effect = [is_pro_license_present] + + poll_for_pro_license(cfg) + + 
assert expected_log_debug_calls == m_log_debug.call_args_list + assert ( + expected_is_pro_license_present_calls + == m_is_pro_license_present.call_args_list + ) + assert ( + expected_attempt_auto_attach_calls + == m_attempt_auto_attach.call_args_list + ) + + @pytest.mark.parametrize( + "is_pro_license_present_side_effect," + "time_side_effect," + "expected_is_pro_license_present_calls," + "expected_attempt_auto_attach_calls," + "expected_log_debug_calls," + "expected_sleep_calls", + [ + ( + [False, True], + time_mock_side_effect_increment_by(100), + [ + mock.call(wait_for_change=False), + mock.call(wait_for_change=True), + ], + [mock.call(mock.ANY, mock.ANY)], + [], + [], + ), + ( + [False, False, False, False, False, True], + time_mock_side_effect_increment_by(100), + [ + mock.call(wait_for_change=False), + mock.call(wait_for_change=True), + mock.call(wait_for_change=True), + mock.call(wait_for_change=True), + mock.call(wait_for_change=True), + mock.call(wait_for_change=True), + ], + [mock.call(mock.ANY, mock.ANY)], + [], + [], + ), + ( + [False, False, True], + time_mock_side_effect_increment_by(1), + [ + mock.call(wait_for_change=False), + mock.call(wait_for_change=True), + mock.call(wait_for_change=True), + ], + [mock.call(mock.ANY, mock.ANY)], + [ + mock.call( + "wait_for_change returned quickly and no pro license" + " present. Waiting 123 seconds before polling again" + ) + ], + [mock.call(123)], + ), + ( + [False, False, False, False, False, True], + time_mock_side_effect_increment_by(1), + [ + mock.call(wait_for_change=False), + mock.call(wait_for_change=True), + mock.call(wait_for_change=True), + mock.call(wait_for_change=True), + mock.call(wait_for_change=True), + mock.call(wait_for_change=True), + ], + [mock.call(mock.ANY, mock.ANY)], + [ + mock.call(mock.ANY), + mock.call(mock.ANY), + mock.call(mock.ANY), + mock.call(mock.ANY), + ], + [ + mock.call(123), + mock.call(123), + mock.call(123), + mock.call(123), + ], + ), + ( + [ + False, + False, + exceptions.DelayProLicensePolling(), + False, + exceptions.DelayProLicensePolling(), + True, + ], + time_mock_side_effect_increment_by(100), + [ + mock.call(wait_for_change=False), + mock.call(wait_for_change=True), + mock.call(wait_for_change=True), + mock.call(wait_for_change=True), + mock.call(wait_for_change=True), + mock.call(wait_for_change=True), + ], + [mock.call(mock.ANY, mock.ANY)], + [], + [mock.call(123), mock.call(123)], + ), + ( + [False, False, exceptions.CancelProLicensePolling()], + time_mock_side_effect_increment_by(100), + [ + mock.call(wait_for_change=False), + mock.call(wait_for_change=True), + mock.call(wait_for_change=True), + ], + [], + [mock.call("Cancelling polling")], + [], + ), + ], + ) + def test_polling_loop( + self, + m_is_config_value_true, + m_is_current_series_lts, + m_cloud_instance_factory, + m_should_poll, + m_is_pro_license_present, + m_attempt_auto_attach, + m_time, + m_sleep, + m_log_debug, + is_pro_license_present_side_effect, + time_side_effect, + expected_is_pro_license_present_calls, + expected_attempt_auto_attach_calls, + expected_log_debug_calls, + expected_sleep_calls, + FakeConfig, + ): + cfg = FakeConfig() + cfg.cfg.update( + { + "ua_config": { + "poll_for_pro_license": True, + "polling_error_retry_delay": 123, + } + } + ) + + m_is_config_value_true.return_value = False + m_is_current_series_lts.return_value = True + m_cloud_instance_factory.return_value = UAAutoAttachGCPInstance() + m_should_poll.return_value = True + m_is_pro_license_present.side_effect = ( + 
is_pro_license_present_side_effect + ) + m_time.side_effect = time_side_effect + + poll_for_pro_license(cfg) + + assert expected_sleep_calls == m_sleep.call_args_list + assert expected_log_debug_calls == m_log_debug.call_args_list + assert ( + expected_is_pro_license_present_calls + == m_is_pro_license_present.call_args_list + ) + assert ( + expected_attempt_auto_attach_calls + == m_attempt_auto_attach.call_args_list + ) diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_event_logger.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_event_logger.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_event_logger.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_event_logger.py 2022-05-18 19:44:15.000000000 +0000 @@ -25,6 +25,7 @@ event.error(error_msg="error1", error_code="error1-code") event.error(error_msg="error2", service="esm") event.error(error_msg="error3", error_type="exception") + event.error(error_msg="error4", additional_info={"test": 123}) event.warning(warning_msg="warning1") event.warning(warning_msg="warning2", service="esm") event.process_events() @@ -52,6 +53,13 @@ "service": None, "type": "exception", }, + { + "message": "error4", + "message_code": None, + "service": None, + "type": "system", + "additional_info": {"test": 123}, + }, ], "warnings": [ { diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_reboot_cmds.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_reboot_cmds.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_reboot_cmds.py 2022-04-01 13:27:49.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_reboot_cmds.py 2022-05-18 19:44:15.000000000 +0000 @@ -10,7 +10,11 @@ run_command, ) from uaclient.exceptions import ProcessExecutionError -from uaclient.messages import REBOOT_SCRIPT_FAILED +from uaclient.messages import ( + FIPS_REBOOT_REQUIRED_MSG, + FIPS_SYSTEM_REBOOT_REQUIRED, + REBOOT_SCRIPT_FAILED, +) M_FIPS_PATH = "uaclient.entitlements.fips.FIPSEntitlement." 
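The polling-loop tests above rely on an incrementing `time.time()` mock (`time_mock_side_effect_increment_by`) so that a `wait_for_change()` call that "returned quickly" can be told apart from one that blocked for a long time, without real sleeping. Below is a self-contained, simplified variant of that pattern (closure-based instead of the module-level counter used in the test file); `make_ticking_clock` is a local helper for this sketch only.

```python
# Self-contained illustration of the "ticking clock" time.time() mock.
import time
from unittest import mock


def make_ticking_clock(step):
    state = {"now": 0.0}

    def fake_time():
        state["now"] += step  # every call advances the fake clock by `step`
        return state["now"]

    return fake_time


with mock.patch("time.time", side_effect=make_ticking_clock(100)):
    first = time.time()
    second = time.time()
    assert second - first == 100  # two calls are 100 "seconds" apart
```

With a large step the loop under test sees a long elapsed time between polls and keeps going; with a step of 1 it sees a quick return and backs off with `time.sleep(polling_error_retry_delay)`, which is exactly what the parametrized cases assert.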
@@ -77,8 +81,10 @@ @mock.patch("sys.exit") @mock.patch(M_FIPS_PATH + "install_packages") @mock.patch(M_FIPS_PATH + "setup_apt_config") + @mock.patch("uaclient.config.UAConfig.remove_notice") def test_calls_setup_apt_config_and_install_packages_when_enabled( self, + m_remove_notice, setup_apt_config, install_packages, exit, @@ -97,9 +103,14 @@ assert [ mock.call(cleanup_on_failure=False) ] == install_packages.call_args_list + assert [ + mock.call("", FIPS_SYSTEM_REBOOT_REQUIRED.msg), + mock.call("", FIPS_REBOOT_REQUIRED_MSG), + ] == m_remove_notice.call_args_list else: assert 0 == setup_apt_config.call_count assert 0 == install_packages.call_count + assert 0 == len(m_remove_notice.call_args_list) assert 0 == exit.call_count diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_security.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_security.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_security.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_security.py 2022-05-18 19:44:15.000000000 +0000 @@ -7,10 +7,16 @@ from uaclient import exceptions from uaclient.clouds.identity import NoCloudTypeReason +from uaclient.entitlements.entitlement_status import ( + ApplicabilityStatus, + UserFacingStatus, +) from uaclient.messages import ( ENABLE_REBOOT_REQUIRED_TMPL, FAIL_X, OKGREEN_CHECK, + PROMPT_ENTER_TOKEN, + PROMPT_EXPIRED_ENTER_TOKEN, SECURITY_APT_NON_ROOT, SECURITY_ISSUE_NOT_RESOLVED, SECURITY_SERVICE_DISABLED, @@ -41,13 +47,7 @@ upgrade_packages_and_attach, version_cmp_le, ) -from uaclient.status import ( - PROMPT_ENTER_TOKEN, - PROMPT_EXPIRED_ENTER_TOKEN, - ApplicabilityStatus, - UserFacingStatus, - colorize_commands, -) +from uaclient.status import colorize_commands M_PATH = "uaclient.contract." M_REPO_PATH = "uaclient.entitlements.repo.RepoEntitlement." @@ -131,7 +131,7 @@ "instructions": "In general, a standard system update will make all ...\n", "references": [], "release_packages": { - "trusty": [ + "series-example-1": [ { "description": "SMB/CIFS file, print, and login ... 
Unix", "is_source": True, @@ -146,7 +146,7 @@ "version_link": "https://....11+dfsg-0ubuntu0.14.04.20+esm9", }, ], - "bionic": [ + "series-example-2": [ { "description": "high-level 3D graphics kit implementing ...", "is_source": True, @@ -405,7 +405,7 @@ "series,expected", ( ( - "trusty", + "series-example-1", { "samba": { "source": { @@ -431,7 +431,7 @@ }, ), ( - "bionic", + "series-example-2", { "coin3": { "source": { @@ -454,7 +454,7 @@ } }, ), - ("focal", {}), + ("series-example-3", {}), ), ) @mock.patch("uaclient.util.get_platform_info") @@ -489,10 +489,10 @@ self, get_platform_info, source_link, error_msg, FakeConfig ): """Raise errors when USN metadata contains no valid source_link.""" - get_platform_info.return_value = {"series": "trusty"} + get_platform_info.return_value = {"series": "series-example-1"} client = UASecurityClient(FakeConfig()) sparse_md = copy.deepcopy(SAMPLE_USN_RESPONSE) - sparse_md["release_packages"]["trusty"].append( + sparse_md["release_packages"]["series-example-1"].append( { "is_source": False, "name": "samba2", @@ -897,21 +897,15 @@ ), ), ) - @pytest.mark.parametrize("series", ("trusty", "bionic")) @mock.patch("uaclient.security.util.subp") @mock.patch("uaclient.util.get_platform_info") def test_result_keyed_by_source_package_name( - self, get_platform_info, subp, series, dpkg_out, results + self, get_platform_info, subp, dpkg_out, results ): - get_platform_info.return_value = {"series": series} + get_platform_info.return_value = {"series": "bionic"} subp.return_value = dpkg_out, "" assert results == query_installed_source_pkg_versions() - if series == "trusty": - _format = "-f=${Package},${Source},${Version},${Status}\n" - else: - _format = ( - "-f=${Package},${Source},${Version},${db:Status-Status}\n" - ) + _format = "-f=${Package},${Source},${Version},${db:Status-Status}\n" assert [ mock.call(["dpkg-query", _format, "-W"]) ] == subp.call_args_list diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_security_status.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_security_status.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_security_status.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_security_status.py 2022-05-18 19:44:15.000000000 +0000 @@ -1,10 +1,13 @@ -from typing import List, Tuple +from typing import List, Optional import mock import pytest from uaclient.security_status import ( UpdateStatus, + filter_security_updates, + get_origin_for_package, + get_service_name, get_ua_info, get_update_status, security_status, @@ -13,51 +16,77 @@ M_PATH = "uaclient.security_status." 
-# Each candidate/installed is a tuple of (version, archive, origin, site) +def mock_origin( + component: str, archive: str, origin: str, site: str +) -> mock.MagicMock: + mock_origin = mock.MagicMock() + mock_origin.component = component + mock_origin.archive = archive + mock_origin.origin = origin + mock_origin.site = site + return mock_origin + + +def mock_version( + version: str, origin_list: List[mock.MagicMock] = [] +) -> mock.MagicMock: + mock_version = mock.MagicMock() + mock_version.__gt__ = lambda self, other: self.version > other.version + mock_version.version = version + mock_version.origins = origin_list + return mock_version + + def mock_package( - name, - installed: Tuple[str, str, str, str] = None, - candidates: List[Tuple[str, str, str, str]] = [], + name: str, + installed_version: Optional[mock.MagicMock] = None, + other_versions: List[mock.MagicMock] = [], ): mock_package = mock.MagicMock() mock_package.name = name mock_package.versions = [] - mock_package.is_installed = bool(installed) - - if installed: - mock_installed = mock.MagicMock() - mock_installed.__gt__ = ( - lambda self, other: self.version > other.version - ) - mock_installed.version = installed[0] + mock_package.is_installed = bool(installed_version) - mock_origin = mock.MagicMock() - mock_origin.archive = installed[1] - mock_origin.origin = installed[2] - mock_origin.site = installed[3] - mock_installed.origins = [mock_origin] - - mock_package.installed = mock_installed - mock_package.versions.append(mock_installed) - - for candidate in candidates: - mock_candidate = mock.MagicMock() - mock_candidate.__gt__ = ( - lambda self, other: self.version > other.version - ) + if installed_version: + mock_package.installed = installed_version + installed_version.package = mock_package + mock_package.versions.append(installed_version) + + for version in other_versions: + version.package = mock_package + mock_package.versions.append(version) - mock_candidate.package = mock_package - mock_candidate.version = candidate[0] + if mock_package.versions: + mock_package.candidate = max(mock_package.versions) - mock_origin = mock.MagicMock() - mock_origin.archive = candidate[1] - mock_origin.origin = candidate[2] - mock_origin.site = candidate[3] - mock_candidate.origins = [mock_origin] + return mock_package - mock_package.versions.append(mock_candidate) - return mock_package +MOCK_ORIGINS = { + "now": mock_origin("now", "now", "", ""), + "third-party": mock_origin("main", "", "other", "some.other.site"), + "infra": mock_origin( + "main", "example-infra-security", "UbuntuESM", "esm.ubuntu.com" + ), + "apps": mock_origin( + "main", "example-apps-security", "UbuntuESMApps", "esm.ubuntu.com" + ), + "standard-security": mock_origin( + "main", "example-security", "Ubuntu", "security.ubuntu.com" + ), + "archive_main": mock_origin( + "main", "example-updates", "Ubuntu", "archive.ubuntu.com" + ), + "archive_universe": mock_origin( + "universe", "example-updates", "Ubuntu", "archive.ubuntu.com" + ), +} + +ORIGIN_TO_SERVICE_MOCK = { + ("UbuntuESM", "example-infra-security"): "esm-infra", + ("Ubuntu", "example-security"): "standard-security", + ("UbuntuESMApps", "example-apps-security"): "esm-apps", +} class TestSecurityStatus: @@ -95,7 +124,7 @@ assert get_update_status(service_name, ua_info) == expected_result @pytest.mark.parametrize("is_attached", (True, False)) - @mock.patch(M_PATH + "UAConfig.status") + @mock.patch("uaclient.security_status.status") def test_get_ua_info(self, m_status, is_attached, FakeConfig): if is_attached: cfg = 
FakeConfig().for_attached_machine() @@ -130,105 +159,192 @@ "entitled_services": [], } - @mock.patch(M_PATH + "UAConfig.status", return_value={"attached": False}) - @mock.patch(M_PATH + "Cache") - def test_finds_updates_for_installed_packages( - self, m_cache, _m_status, FakeConfig + @pytest.mark.parametrize( + "installed_version,other_versions,expected_output", + ( + (mock_version("1.0", [MOCK_ORIGINS["now"]]), [], "unknown"), + ( + mock_version("2.0", [MOCK_ORIGINS["now"]]), + [mock_version("1.0", [MOCK_ORIGINS["archive_main"]])], + "unknown", + ), + ( + mock_version("2.0", [MOCK_ORIGINS["now"]]), + [mock_version("3.0", [MOCK_ORIGINS["archive_main"]])], + "main", + ), + ( + mock_version( + "1.0", [MOCK_ORIGINS["infra"], MOCK_ORIGINS["now"]] + ), + [], + "esm-infra", + ), + ( + mock_version( + "1.0", [MOCK_ORIGINS["apps"], MOCK_ORIGINS["now"]] + ), + [], + "esm-apps", + ), + ( + mock_version( + "1.0", [MOCK_ORIGINS["archive_main"], MOCK_ORIGINS["now"]] + ), + [], + "main", + ), + ( + mock_version( + "1.0", + [MOCK_ORIGINS["archive_universe"], MOCK_ORIGINS["now"]], + ), + [], + "universe", + ), + ( + mock_version( + "1.0", [MOCK_ORIGINS["third-party"], MOCK_ORIGINS["now"]] + ), + [], + "third-party", + ), + ), + ) + def test_get_origin_for_package( + self, installed_version, other_versions, expected_output ): - m_cache.return_value = [ - mock_package(name="not_installed"), + package_mock = mock_package( + "example", installed_version, other_versions + ) + with mock.patch( + M_PATH + "ORIGIN_INFORMATION_TO_SERVICE", + ORIGIN_TO_SERVICE_MOCK, + ): + assert expected_output == get_origin_for_package(package_mock) + + @pytest.mark.parametrize( + "origins_input,expected_output", + ( + ([], ("", "")), + ([MOCK_ORIGINS["now"]], ("", "")), + ([MOCK_ORIGINS["third-party"], MOCK_ORIGINS["now"]], ("", "")), + ( + [MOCK_ORIGINS["infra"], MOCK_ORIGINS["now"]], + ("esm-infra", "esm.ubuntu.com"), + ), + ( + [MOCK_ORIGINS["apps"], MOCK_ORIGINS["now"]], + ("esm-apps", "esm.ubuntu.com"), + ), + ( + [MOCK_ORIGINS["standard-security"], MOCK_ORIGINS["now"]], + ("standard-security", "security.ubuntu.com"), + ), + ), + ) + def test_service_name(self, origins_input, expected_output): + with mock.patch( + M_PATH + "ORIGIN_INFORMATION_TO_SERVICE", + ORIGIN_TO_SERVICE_MOCK, + ): + assert expected_output == get_service_name(origins_input) + + def test_filter_security_updates(self): + expected_return = [ + mock_version("2.0", [MOCK_ORIGINS["infra"]]), + mock_version("2.0", [MOCK_ORIGINS["standard-security"]]), + mock_version("3.0", [MOCK_ORIGINS["apps"]]), + ] + package_list = [ + mock_package(name="not-installed"), mock_package( - name="there_is_no_update", - installed=("1.0", "somewhere", "somehow", ""), + name="there-is-no-update", + installed_version=mock_version( + "1.0", [MOCK_ORIGINS["now"], MOCK_ORIGINS["archive_main"]] + ), ), mock_package( - name="latest_is_installed", - installed=("2.0", "standard-packages", "Ubuntu", ""), - candidates=[ - ( - "1.0", - "example-infra-security", - "UbuntuESM", - "some.url.for.esm", - ) - ], + name="latest-is-installed", + installed_version=mock_version( + "2.0", [MOCK_ORIGINS["now"], MOCK_ORIGINS["infra"]] + ), + other_versions=[mock_version("1.0", [MOCK_ORIGINS["infra"]])], ), mock_package( - name="update_available", - # this is an ESM-INFRA example for the counters - installed=("1.0", "example-infra-security", "UbuntuESM", ""), - candidates=[ - ( - "2.0", - "example-infra-security", - "UbuntuESM", - "some.url.for.esm", - ) - ], + name="update-available", + 
installed_version=mock_version( + "1.0", [MOCK_ORIGINS["now"], MOCK_ORIGINS["archive_main"]] + ), + other_versions=[expected_return[0]], ), mock_package( - name="not_a_security_update", - installed=("1.0", "somewhere", "somehow", ""), - candidates=[ - ( - "2.0", - "example-notsecurity", - "NotUbuntuESM", - "some.url.for.esm", - ) + name="not-a-security-update", + installed_version=mock_version( + "1.0", [MOCK_ORIGINS["now"], MOCK_ORIGINS["archive_main"]] + ), + other_versions=[ + mock_version("2.0", [MOCK_ORIGINS["archive_main"]]) ], ), mock_package( - name="more_than_one_update_available", - installed=("1.0", "somewhere", "somehow", ""), - candidates=[ - ( - "2.0", - "example-security", - "Ubuntu", - "some.url.for.standard", - ), - ( - "3.0", - "example-infra-security", - "UbuntuESM", - "some.url.for.esm", - ), - ], + name="more-than-one-update", + installed_version=mock_version( + "1.0", [MOCK_ORIGINS["now"], MOCK_ORIGINS["archive_main"]] + ), + other_versions=[expected_return[1], expected_return[2]], ), ] - - service_to_origin_dict = { - "esm-infra": ("UbuntuESM", "example-infra-security"), - "standard-security": ("Ubuntu", "example-security"), - "esm-apps": ("UbuntuESMApps", "example-apps-security"), - } - origin_to_service_dict = { - v: k for k, v in service_to_origin_dict.items() - } - + with mock.patch( + M_PATH + "ORIGIN_INFORMATION_TO_SERVICE", + ORIGIN_TO_SERVICE_MOCK, + ): + filtered_versions = filter_security_updates(package_list) + assert expected_return == filtered_versions + assert [ + "update-available", + "more-than-one-update", + "more-than-one-update", + ] == [v.package.name for v in filtered_versions] + + @mock.patch(M_PATH + "status", return_value={"attached": False}) + @mock.patch( + M_PATH + "get_service_name", + return_value=("esm-infra", "some.url.for.esm"), + ) + @mock.patch(M_PATH + "get_origin_for_package", return_value="main") + @mock.patch(M_PATH + "filter_security_updates") + @mock.patch(M_PATH + "Cache") + def test_security_status_format( + self, + m_cache, + m_filter_sec_updates, + _m_get_origin, + _m_service_name, + _m_status, + FakeConfig, + ): + """Make sure the output format matches the expected JSON""" cfg = FakeConfig() + m_version = mock_version("1.0") + m_package = mock_package("example_package", m_version) + + m_cache.return_value = [m_package] * 10 + m_filter_sec_updates.return_value = [m_version] * 2 expected_output = { "_schema_version": "0.1", "packages": [ { - "package": "update_available", - "version": "2.0", + "package": "example_package", + "version": "1.0", "service_name": "esm-infra", "status": "pending_attach", "origin": "some.url.for.esm", }, { - "package": "more_than_one_update_available", - "version": "2.0", - "service_name": "standard-security", - "status": "upgrade_available", - "origin": "some.url.for.standard", - }, - { - "package": "more_than_one_update_available", - "version": "3.0", + "package": "example_package", + "version": "1.0", "service_name": "esm-infra", "status": "pending_attach", "origin": "some.url.for.esm", @@ -240,19 +356,19 @@ "enabled_services": [], "entitled_services": [], }, - "num_installed_packages": 5, + "num_installed_packages": 10, + "num_main_packages": 10, + "num_restricted_packages": 0, + "num_universe_packages": 0, + "num_multiverse_packages": 0, + "num_third_party_packages": 0, + "num_unknown_packages": 0, + "num_esm_infra_packages": 0, + "num_esm_apps_packages": 0, "num_esm_infra_updates": 2, "num_esm_apps_updates": 0, - "num_esm_infra_packages": 1, - "num_esm_apps_packages": 0, - 
"num_standard_security_updates": 1, + "num_standard_security_updates": 0, }, } - with mock.patch( - M_PATH + "SERVICE_TO_ORIGIN_INFORMATION", service_to_origin_dict - ), mock.patch( - M_PATH + "ORIGIN_INFORMATION_TO_SERVICE", origin_to_service_dict - ): - output = security_status(cfg) - assert output == expected_output + assert expected_output == security_status(cfg) diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_serviceclient.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_serviceclient.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_serviceclient.py 2022-04-01 13:27:49.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_serviceclient.py 2022-05-18 19:44:15.000000000 +0000 @@ -90,6 +90,7 @@ headers=client.headers(), method=None, timeout=None, + potentially_sensitive=True, ) ] == m_readurl.call_args_list diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_status.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_status.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_status.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_status.py 2022-05-18 19:44:15.000000000 +0000 @@ -1,20 +1,51 @@ +import copy +import datetime +import os +import stat import string import mock import pytest -from uaclient import config +from uaclient import messages, status, version +from uaclient.config import UAConfig +from uaclient.entitlements import ( + ENTITLEMENT_CLASSES, + entitlement_factory, + valid_services, +) +from uaclient.entitlements.base import IncompatibleService +from uaclient.entitlements.entitlement_status import ( + ApplicationStatus, + ContractStatus, + UserFacingConfigStatus, + UserFacingStatus, +) +from uaclient.entitlements.fips import FIPSEntitlement +from uaclient.entitlements.ros import ROSEntitlement +from uaclient.entitlements.tests.test_base import ConcreteTestEntitlement from uaclient.status import ( + DEFAULT_STATUS, TxtColor, - UserFacingStatus, colorize_commands, format_tabular, ) +DEFAULT_CFG_STATUS = { + "execution_status": DEFAULT_STATUS["execution_status"], + "execution_details": DEFAULT_STATUS["execution_details"], +} +M_PATH = "uaclient.entitlements." 
+ +ALL_RESOURCES_AVAILABLE = [ + {"name": name, "available": True} + for name in valid_services(cfg=UAConfig(), allow_beta=True) +] + @pytest.fixture(params=[True, False]) def status_dict_attached(request): - status = config.DEFAULT_STATUS.copy() + status = DEFAULT_STATUS.copy() # The following are required so we don't get an "unattached" error status["attached"] = True @@ -34,7 +65,7 @@ @pytest.fixture def status_dict_unattached(): - status = config.DEFAULT_STATUS.copy() + status = DEFAULT_STATUS.copy() status["services"] = [ { @@ -226,3 +257,682 @@ # Remove key to test upgrade path from older ua-tools status_dict_attached["services"][0].pop("description_override") assert uf_descr in format_tabular(status_dict_attached) + + +@mock.patch("uaclient.config.UAConfig.remove_notice") +@mock.patch("uaclient.util.should_reboot", return_value=False) +class TestStatus: + esm_desc = entitlement_factory( + cfg=UAConfig(), name="esm-infra" + ).description + ros_desc = entitlement_factory(cfg=UAConfig(), name="ros").description + + def check_beta(self, cls, show_beta, uacfg=None, status=""): + if not show_beta: + if status == "enabled": + return False + + if uacfg: + allow_beta = uacfg.cfg.get("features", {}).get( + "allow_beta", False + ) + + if allow_beta: + return False + + return cls.is_beta + + return False + + @pytest.mark.parametrize( + "show_beta,expected_services", + ( + ( + True, + [ + { + "available": "yes", + "name": "esm-infra", + "description": esm_desc, + }, + { + "available": "no", + "name": "ros", + "description": ros_desc, + }, + ], + ), + ( + False, + [ + { + "available": "yes", + "name": "esm-infra", + "description": esm_desc, + } + ], + ), + ), + ) + @mock.patch("uaclient.status.get_available_resources") + @mock.patch("uaclient.status.os.getuid", return_value=0) + def test_root_unattached( + self, + _m_getuid, + m_get_available_resources, + _m_should_reboot, + m_remove_notice, + show_beta, + expected_services, + FakeConfig, + ): + """Test we get the correct status dict when unattached""" + cfg = FakeConfig() + m_get_available_resources.return_value = [ + {"name": "esm-infra", "available": True}, + {"name": "ros", "available": False}, + ] + expected = copy.deepcopy(DEFAULT_STATUS) + expected["version"] = mock.ANY + expected["services"] = expected_services + with mock.patch( + "uaclient.status._get_config_status" + ) as m_get_cfg_status: + m_get_cfg_status.return_value = DEFAULT_CFG_STATUS + assert expected == status.status(cfg=cfg, show_beta=show_beta) + + expected_calls = [ + mock.call( + "", + messages.ENABLE_REBOOT_REQUIRED_TMPL.format( + operation="fix operation" + ), + ) + ] + + assert expected_calls == m_remove_notice.call_args_list + + @pytest.mark.parametrize("show_beta", (True, False)) + @pytest.mark.parametrize( + "features_override", ((None), ({"allow_beta": True})) + ) + @pytest.mark.parametrize( + "avail_res,entitled_res,uf_entitled,uf_status", + ( + ( # Empty lists means UNENTITLED and UNAVAILABLE + [], + [], + ContractStatus.UNENTITLED.value, + UserFacingStatus.UNAVAILABLE.value, + ), + ( # available == False means UNAVAILABLE + [{"name": "livepatch", "available": False}], + [], + ContractStatus.UNENTITLED.value, + UserFacingStatus.UNAVAILABLE.value, + ), + ( # available == True but unentitled means UNAVAILABLE + [{"name": "livepatch", "available": True}], + [], + ContractStatus.UNENTITLED.value, + UserFacingStatus.UNAVAILABLE.value, + ), + ( # available == False and entitled means INAPPLICABLE + [{"name": "livepatch", "available": False}], + [{"type": "livepatch", 
"entitled": True}], + ContractStatus.ENTITLED.value, + UserFacingStatus.INAPPLICABLE.value, + ), + ), + ) + @mock.patch( + M_PATH + "livepatch.LivepatchEntitlement.application_status", + return_value=(ApplicationStatus.DISABLED, ""), + ) + @mock.patch("uaclient.status.get_available_resources") + @mock.patch("uaclient.config.os.getuid", return_value=0) + def test_root_attached( + self, + _m_getuid, + m_get_avail_resources, + _m_livepatch_status, + _m_should_reboot, + _m_remove_notice, + avail_res, + entitled_res, + uf_entitled, + uf_status, + features_override, + show_beta, + FakeConfig, + ): + """Test we get the correct status dict when attached with basic conf""" + resource_names = [resource["name"] for resource in avail_res] + default_entitled = ContractStatus.UNENTITLED.value + default_status = UserFacingStatus.UNAVAILABLE.value + token = { + "availableResources": [], + "machineTokenInfo": { + "machineId": "test_machine_id", + "accountInfo": { + "id": "acct-1", + "name": "test_account", + "createdAt": "2019-06-14T06:45:50Z", + "externalAccountIDs": [{"IDs": ["id1"], "Origin": "AWS"}], + }, + "contractInfo": { + "id": "cid", + "name": "test_contract", + "createdAt": "2020-05-08T19:02:26Z", + "effectiveFrom": "2000-05-08T19:02:26Z", + "effectiveTo": "2040-05-08T19:02:26Z", + "resourceEntitlements": entitled_res, + "products": ["free"], + }, + }, + } + + available_resource_response = [ + { + "name": cls.name, + "available": bool( + {"name": cls.name, "available": True} in avail_res + ), + } + for cls in ENTITLEMENT_CLASSES + ] + if avail_res: + token["availableResources"] = available_resource_response + else: + m_get_avail_resources.return_value = available_resource_response + + cfg = FakeConfig.for_attached_machine(machine_token=token) + if features_override: + cfg.override_features(features_override) + + expected_services = [ + { + "description": cls.description, + "entitled": uf_entitled + if cls.name in resource_names + else default_entitled, + "name": cls.name, + "status": uf_status + if cls.name in resource_names + else default_status, + "status_details": mock.ANY, + "description_override": None, + "available": mock.ANY, + "blocked_by": [], + } + for cls in ENTITLEMENT_CLASSES + if not self.check_beta(cls, show_beta, cfg) + ] + expected = copy.deepcopy(DEFAULT_STATUS) + expected.update( + { + "version": version.get_version(features=cfg.features), + "attached": True, + "machine_id": "test_machine_id", + "services": expected_services, + "effective": datetime.datetime( + 2000, 5, 8, 19, 2, 26, tzinfo=datetime.timezone.utc + ), + "expires": datetime.datetime( + 2040, 5, 8, 19, 2, 26, tzinfo=datetime.timezone.utc + ), + "contract": { + "name": "test_contract", + "id": "cid", + "created_at": datetime.datetime( + 2020, 5, 8, 19, 2, 26, tzinfo=datetime.timezone.utc + ), + "products": ["free"], + "tech_support_level": "n/a", + }, + "account": { + "name": "test_account", + "id": "acct-1", + "created_at": datetime.datetime( + 2019, 6, 14, 6, 45, 50, tzinfo=datetime.timezone.utc + ), + "external_account_ids": [ + {"IDs": ["id1"], "Origin": "AWS"} + ], + }, + } + ) + with mock.patch( + "uaclient.status._get_config_status" + ) as m_get_cfg_status: + m_get_cfg_status.return_value = DEFAULT_CFG_STATUS + assert expected == status.status(cfg=cfg, show_beta=show_beta) + if avail_res: + assert m_get_avail_resources.call_count == 0 + else: + assert m_get_avail_resources.call_count == 1 + # status() idempotent + with mock.patch( + "uaclient.status._get_config_status" + ) as m_get_cfg_status: + 
m_get_cfg_status.return_value = DEFAULT_CFG_STATUS + assert expected == status.status(cfg=cfg, show_beta=show_beta) + + @mock.patch("uaclient.status.get_available_resources") + @mock.patch("uaclient.config.os.getuid") + def test_nonroot_unattached_is_same_as_unattached_root( + self, + m_getuid, + m_get_available_resources, + _m_should_reboot, + _m_remove_notice, + FakeConfig, + ): + m_get_available_resources.return_value = [ + {"name": "esm-infra", "available": True} + ] + m_getuid.return_value = 1000 + cfg = FakeConfig() + nonroot_status = status.status(cfg=cfg) + + m_getuid.return_value = 0 + root_unattached_status = status.status(cfg=cfg) + + assert root_unattached_status == nonroot_status + + @mock.patch("uaclient.status.get_available_resources") + @mock.patch("uaclient.status.os.getuid") + def test_root_followed_by_nonroot( + self, + m_getuid, + m_get_available_resources, + _m_should_reboot, + _m_remove_notice, + tmpdir, + FakeConfig, + ): + """Ensure that non-root run after root returns data""" + cfg = UAConfig({"data_dir": tmpdir.strpath}) + + # Run as root + m_getuid.return_value = 0 + before = copy.deepcopy(status.status(cfg=cfg)) + + # Replicate an attach by modifying the underlying config and confirm + # that we see different status + other_cfg = FakeConfig.for_attached_machine() + cfg.write_cache("accounts", {"accounts": other_cfg.accounts}) + cfg.write_cache("machine-token", other_cfg.machine_token) + assert status._attached_status(cfg=cfg) != before + + # Run as regular user and confirm that we see the result from + # last time we called .status() + m_getuid.return_value = 1000 + after = status.status(cfg=cfg) + + assert before == after + + @mock.patch("uaclient.status.get_available_resources", return_value=[]) + @mock.patch("uaclient.status.os.getuid", return_value=0) + def test_cache_file_is_written_world_readable( + self, + _m_getuid, + _m_get_available_resources, + _m_should_reboot, + m_remove_notice, + tmpdir, + ): + cfg = UAConfig({"data_dir": tmpdir.strpath}) + status.status(cfg=cfg) + + assert 0o644 == stat.S_IMODE( + os.lstat(cfg.data_path("status-cache")).st_mode + ) + + expected_calls = [ + mock.call( + "", + messages.ENABLE_REBOOT_REQUIRED_TMPL.format( + operation="fix operation" + ), + ) + ] + + assert expected_calls == m_remove_notice.call_args_list + + @pytest.mark.parametrize("show_beta", (True, False)) + @pytest.mark.parametrize( + "features_override", ((None), ({"allow_beta": False})) + ) + @pytest.mark.parametrize( + "entitlements", + ( + [], + [ + { + "type": "support", + "entitled": True, + "affordances": {"supportLevel": "anything"}, + } + ], + ), + ) + @mock.patch("uaclient.status.os.getuid", return_value=0) + @mock.patch( + M_PATH + "fips.FIPSCommonEntitlement.application_status", + return_value=(ApplicationStatus.DISABLED, ""), + ) + @mock.patch( + M_PATH + "livepatch.LivepatchEntitlement.application_status", + return_value=(ApplicationStatus.DISABLED, ""), + ) + @mock.patch(M_PATH + "livepatch.LivepatchEntitlement.user_facing_status") + @mock.patch(M_PATH + "livepatch.LivepatchEntitlement.contract_status") + @mock.patch(M_PATH + "esm.ESMAppsEntitlement.user_facing_status") + @mock.patch(M_PATH + "esm.ESMAppsEntitlement.contract_status") + @mock.patch(M_PATH + "repo.RepoEntitlement.user_facing_status") + @mock.patch(M_PATH + "repo.RepoEntitlement.contract_status") + def test_attached_reports_contract_and_service_status( + self, + m_repo_contract_status, + m_repo_uf_status, + m_esm_contract_status, + m_esm_uf_status, + m_livepatch_contract_status, + 
m_livepatch_uf_status, + _m_livepatch_status, + _m_fips_status, + _m_getuid, + _m_should_reboot, + m_remove_notice, + entitlements, + features_override, + show_beta, + FakeConfig, + ): + """When attached, return contract and service user-facing status.""" + m_repo_contract_status.return_value = ContractStatus.ENTITLED + m_repo_uf_status.return_value = ( + UserFacingStatus.INAPPLICABLE, + messages.NamedMessage("test-code", "repo details"), + ) + m_livepatch_contract_status.return_value = ContractStatus.ENTITLED + m_livepatch_uf_status.return_value = ( + UserFacingStatus.ACTIVE, + messages.NamedMessage("test-code", "livepatch details"), + ) + m_esm_contract_status.return_value = ContractStatus.ENTITLED + m_esm_uf_status.return_value = ( + UserFacingStatus.ACTIVE, + messages.NamedMessage("test-code", "esm-apps details"), + ) + token = { + "availableResources": ALL_RESOURCES_AVAILABLE, + "machineTokenInfo": { + "machineId": "test_machine_id", + "accountInfo": { + "id": "1", + "name": "accountname", + "createdAt": "2019-06-14T06:45:50Z", + "externalAccountIDs": [{"IDs": ["id1"], "Origin": "AWS"}], + }, + "contractInfo": { + "id": "contract-1", + "name": "contractname", + "createdAt": "2020-05-08T19:02:26Z", + "resourceEntitlements": entitlements, + "products": ["free"], + }, + }, + } + cfg = FakeConfig.for_attached_machine( + account_name="accountname", machine_token=token + ) + if features_override: + cfg.override_features(features_override) + if not entitlements: + support_level = UserFacingStatus.INAPPLICABLE.value + else: + support_level = entitlements[0]["affordances"]["supportLevel"] + expected = copy.deepcopy(status.DEFAULT_STATUS) + expected.update( + { + "version": version.get_version(features=cfg.features), + "attached": True, + "machine_id": "test_machine_id", + "contract": { + "name": "contractname", + "id": "contract-1", + "created_at": datetime.datetime( + 2020, 5, 8, 19, 2, 26, tzinfo=datetime.timezone.utc + ), + "products": ["free"], + "tech_support_level": support_level, + }, + "account": { + "name": "accountname", + "id": "1", + "created_at": datetime.datetime( + 2019, 6, 14, 6, 45, 50, tzinfo=datetime.timezone.utc + ), + "external_account_ids": [ + {"IDs": ["id1"], "Origin": "AWS"} + ], + }, + } + ) + for cls in ENTITLEMENT_CLASSES: + if cls.name == "livepatch": + expected_status = UserFacingStatus.ACTIVE.value + details = "livepatch details" + elif cls.name == "esm-apps": + expected_status = UserFacingStatus.ACTIVE.value + details = "esm-apps details" + else: + expected_status = UserFacingStatus.INAPPLICABLE.value + details = "repo details" + + if self.check_beta(cls, show_beta, cfg, expected_status): + continue + + expected["services"].append( + { + "name": cls.name, + "description": cls.description, + "entitled": ContractStatus.ENTITLED.value, + "status": expected_status, + "status_details": details, + "description_override": None, + "available": mock.ANY, + "blocked_by": [], + } + ) + with mock.patch( + "uaclient.status._get_config_status" + ) as m_get_cfg_status: + m_get_cfg_status.return_value = DEFAULT_CFG_STATUS + assert expected == status.status(cfg=cfg, show_beta=show_beta) + + assert len(ENTITLEMENT_CLASSES) - 2 == m_repo_uf_status.call_count + assert 1 == m_livepatch_uf_status.call_count + + expected_calls = [ + mock.call( + "", + messages.NOTICE_DAEMON_AUTO_ATTACH_LOCK_HELD.format( + operation=".*" + ), + ), + mock.call("", messages.NOTICE_DAEMON_AUTO_ATTACH_FAILED), + mock.call( + "", + messages.ENABLE_REBOOT_REQUIRED_TMPL.format( + operation="fix operation" + 
), + ), + ] + + assert expected_calls == m_remove_notice.call_args_list + + @mock.patch("uaclient.status.get_available_resources") + @mock.patch("uaclient.status.os.getuid") + def test_expires_handled_appropriately( + self, + m_getuid, + _m_get_available_resources, + _m_should_reboot, + _m_remove_notice, + FakeConfig, + ): + token = { + "availableResources": ALL_RESOURCES_AVAILABLE, + "machineTokenInfo": { + "machineId": "test_machine_id", + "accountInfo": {"id": "1", "name": "accountname"}, + "contractInfo": { + "name": "contractname", + "id": "contract-1", + "effectiveTo": "2020-07-18T00:00:00Z", + "createdAt": "2020-05-08T19:02:26Z", + "resourceEntitlements": [], + "products": ["free"], + }, + }, + } + cfg = FakeConfig.for_attached_machine( + account_name="accountname", machine_token=token + ) + + # Test that root's status works as expected (including the cache write) + m_getuid.return_value = 0 + expected_dt = datetime.datetime( + 2020, 7, 18, 0, 0, 0, tzinfo=datetime.timezone.utc + ) + assert expected_dt == status.status(cfg=cfg)["expires"] + + # Test that the read from the status cache work properly for non-root + # users + m_getuid.return_value = 1000 + assert expected_dt == status.status(cfg=cfg)["expires"] + + @mock.patch("uaclient.status.os.getuid") + def test_nonroot_user_uses_cache_and_updates_if_available( + self, m_getuid, _m_should_reboot, m_remove_notice, tmpdir + ): + m_getuid.return_value = 1000 + + expected_status = {"pass": True} + cfg = UAConfig({"data_dir": tmpdir.strpath}) + cfg.write_cache("marker-reboot-cmds", "") # To indicate a reboot reqd + cfg.write_cache("status-cache", expected_status) + + # Even non-root users can update execution_status details + details = messages.ENABLE_REBOOT_REQUIRED_TMPL.format( + operation="configuration changes" + ) + reboot_required = UserFacingConfigStatus.REBOOTREQUIRED.value + expected_status.update( + { + "execution_status": reboot_required, + "execution_details": details, + "notices": [], + "config_path": None, + "config": {"data_dir": mock.ANY}, + } + ) + + assert expected_status == status.status(cfg=cfg) + + +ATTACHED_SERVICE_STATUS_PARAMETERS = [ + # ENTITLED => display the given user-facing status + (ContractStatus.ENTITLED, UserFacingStatus.ACTIVE, False, "enabled"), + (ContractStatus.ENTITLED, UserFacingStatus.INACTIVE, False, "disabled"), + (ContractStatus.ENTITLED, UserFacingStatus.INAPPLICABLE, False, "n/a"), + (ContractStatus.ENTITLED, UserFacingStatus.UNAVAILABLE, False, "—"), + # UNENTITLED => UNAVAILABLE + (ContractStatus.UNENTITLED, UserFacingStatus.ACTIVE, False, "—"), + (ContractStatus.UNENTITLED, UserFacingStatus.INACTIVE, False, "—"), + (ContractStatus.UNENTITLED, UserFacingStatus.INAPPLICABLE, False, "—"), + (ContractStatus.UNENTITLED, UserFacingStatus.UNAVAILABLE, [], "—"), + # ENTITLED but in unavailable_resources => INAPPLICABLE + (ContractStatus.ENTITLED, UserFacingStatus.ACTIVE, True, "n/a"), + (ContractStatus.ENTITLED, UserFacingStatus.INACTIVE, True, "n/a"), + (ContractStatus.ENTITLED, UserFacingStatus.INAPPLICABLE, True, "n/a"), + (ContractStatus.ENTITLED, UserFacingStatus.UNAVAILABLE, True, "n/a"), + # UNENTITLED and in unavailable_resources => UNAVAILABLE + (ContractStatus.UNENTITLED, UserFacingStatus.ACTIVE, True, "—"), + (ContractStatus.UNENTITLED, UserFacingStatus.INACTIVE, True, "—"), + (ContractStatus.UNENTITLED, UserFacingStatus.INAPPLICABLE, True, "—"), + (ContractStatus.UNENTITLED, UserFacingStatus.UNAVAILABLE, True, "—"), +] + + +class TestAttachedServiceStatus: + @pytest.mark.parametrize( 
+ "contract_status,uf_status,in_inapplicable_resources,expected_status", + ATTACHED_SERVICE_STATUS_PARAMETERS, + ) + def test_status( + self, + contract_status, + uf_status, + in_inapplicable_resources, + expected_status, + FakeConfig, + ): + ent = mock.MagicMock() + ent.name = "test_entitlement" + ent.contract_status.return_value = contract_status + ent.user_facing_status.return_value = ( + uf_status, + messages.NamedMessage("test-code", ""), + ) + + unavailable_resources = ( + {ent.name: ""} if in_inapplicable_resources else {} + ) + ret = status._attached_service_status(ent, unavailable_resources) + + assert expected_status == ret["status"] + + @pytest.mark.parametrize( + "blocking_incompatible_services, expected_blocked_by", + ( + ([], []), + ( + [ + IncompatibleService( + FIPSEntitlement, messages.NamedMessage("code", "msg") + ) + ], + [{"name": "fips", "reason": "msg", "reason_code": "code"}], + ), + ( + [ + IncompatibleService( + FIPSEntitlement, messages.NamedMessage("code", "msg") + ), + IncompatibleService( + ROSEntitlement, messages.NamedMessage("code2", "msg2") + ), + ], + [ + {"name": "fips", "reason": "msg", "reason_code": "code"}, + {"name": "ros", "reason": "msg2", "reason_code": "code2"}, + ], + ), + ), + ) + def test_blocked_by( + self, + blocking_incompatible_services, + expected_blocked_by, + tmpdir, + FakeConfig, + ): + ent = ConcreteTestEntitlement( + blocking_incompatible_services=blocking_incompatible_services + ) + service_status = status._attached_service_status(ent, []) + assert service_status["blocked_by"] == expected_blocked_by diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_update_messaging.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_update_messaging.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_update_messaging.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_update_messaging.py 2022-05-18 19:44:15.000000000 +0000 @@ -10,6 +10,7 @@ BASE_UA_URL, CONTRACT_EXPIRY_GRACE_PERIOD_DAYS, ) +from uaclient.entitlements.entitlement_status import ApplicationStatus from uaclient.jobs.update_messaging import ( ContractExpiryStatus, ExternalMessage, @@ -29,7 +30,6 @@ DISABLED_MOTD_NO_PKGS_TMPL, UBUNTU_NO_WARRANTY, ) -from uaclient.status import ApplicationStatus M_PATH = "uaclient.jobs.update_messaging." 
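`test_blocked_by` above fixes the shape of the new `blocked_by` field that `status._attached_service_status()` emits when an entitlement is blocked by incompatible services: a list of `{"name", "reason", "reason_code"}` entries built from `IncompatibleService`. The snippet below is a plain-data example of consuming that field; the service names, messages, and codes are placeholder values, not real contract data.

```python
# Example of the "blocked_by" structure asserted in test_blocked_by above.
service_entry = {
    "name": "ros",
    "status": "disabled",
    "blocked_by": [
        {"name": "fips", "reason": "msg", "reason_code": "code"},
    ],
}

for blocker in service_entry["blocked_by"]:
    print(
        "{} is blocked by {} ({}): {}".format(
            service_entry["name"],
            blocker["name"],
            blocker["reason_code"],
            blocker["reason"],
        )
    )
```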
@@ -102,7 +102,7 @@ infra_obj.application_status.return_value = (infra_status, "") infra_obj.name = "esm-infra" - def factory_side_effect(name): + def factory_side_effect(cfg, name): if name == "esm-infra": return infra_cls if name == "esm-apps": @@ -192,7 +192,7 @@ apps_obj = apps_cls.return_value apps_obj.name = "esm-apps" - def factory_side_effect(name): + def factory_side_effect(cfg, name): if name == "esm-infra": return infra_cls if name == "esm-apps": @@ -510,8 +510,6 @@ "series,release,is_active_esm,is_beta,cfg_allow_beta," "apps_enabled,expected", ( - # No ESM announcement when trusty - ("trusty", "14.04", True, False, True, False, None), # ESMApps.is_beta == True no Announcement ("xenial", "16.04", True, True, None, False, None), # Once release begins ESM and ESMApps.is_beta is false announce diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_upgrade_lts_contract.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_upgrade_lts_contract.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_upgrade_lts_contract.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_upgrade_lts_contract.py 2022-05-18 19:44:15.000000000 +0000 @@ -35,38 +35,6 @@ ) @mock.patch("lib.upgrade_lts_contract.parse_os_release") @mock.patch("lib.upgrade_lts_contract.subp") - def test_upgrade_cancel_when_upgrading_to_trusty( - self, m_subp, m_parse_os, m_is_attached, capsys, caplog_text - ): - m_parse_os.return_value = {"VERSION_ID": "14.04"} - - m_subp.return_value = ("", "") - - expected_msgs = [ - "Starting upgrade-lts-contract.", - "Unable to execute upgrade-lts-contract.py on trusty", - ] - expected_logs = ["Check whether to upgrade-lts-contract"] - with pytest.raises(SystemExit) as execinfo: - process_contract_delta_after_apt_lock() - - assert 1 == execinfo.value.code - assert 1 == m_is_attached.call_count - assert 1 == m_parse_os.call_count - assert 1 == m_subp.call_count - out, _err = capsys.readouterr() - assert out == "\n".join(expected_msgs) + "\n" - debug_logs = caplog_text() - for log in expected_msgs + expected_logs: - assert log in debug_logs - - @mock.patch( - "uaclient.config.UAConfig.is_attached", - new_callable=mock.PropertyMock, - return_value=True, - ) - @mock.patch("lib.upgrade_lts_contract.parse_os_release") - @mock.patch("lib.upgrade_lts_contract.subp") def test_upgrade_cancel_when_current_version_not_supported( self, m_subp, m_parse_os, m_is_attached, capsys, caplog_text ): diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_util.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_util.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/tests/test_util.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/tests/test_util.py 2022-05-18 19:44:15.000000000 +0000 @@ -65,18 +65,6 @@ UBUNTU_CODENAME=xenial """ -OS_RELEASE_TRUSTY = """\ -NAME="Ubuntu" -VERSION="14.04.5 LTS, Trusty Tahr" -ID=ubuntu -ID_LIKE=debian -PRETTY_NAME="Ubuntu 14.04.5 LTS" -VERSION_ID="14.04" -HOME_URL="http://www.ubuntu.com/" -SUPPORT_URL="http://help.ubuntu.com/" -BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/" -""" - class TestGetDictDeltas: @pytest.mark.parametrize( @@ -125,9 +113,8 @@ @pytest.mark.parametrize( "series, supported_esm, expected", ( - ("trusty", "trusty\nxenial\nbionic\nfocal", True), - ("xenial", "trusty\nxenial\nbionic\nfocal", True), - ("groovy", "trusty\nxenial\nbionic\nfocal", False), + ("xenial", "xenial\nbionic\nfocal", True), + ("groovy", "xenial\nbionic\nfocal", 
False), ), ) @mock.patch("uaclient.util.subp") @@ -146,7 +133,6 @@ @pytest.mark.parametrize( "series, is_lts, days_until_esm,expected", ( - ("trusty", True, 0, True), ("xenial", True, 1, False), ("xenial", True, 0, True), ("bionic", True, 1, False), @@ -167,7 +153,7 @@ # Use __wrapped__ to avoid hitting the lru_cached value across tests calls = [] - if is_lts and series != "trusty": + if is_lts: calls.append( mock.call( [ @@ -295,22 +281,17 @@ with pytest.raises(exceptions.ProcessExecutionError) as excinfo: util.subp(["ls", "--bogus"]) - expected_errors = [ - "Failed running command 'ls --bogus' [exit(2)].", - "ls: unrecognized option '--bogus'", - ] - for msg in expected_errors: - assert msg in str(excinfo.value) + assert 2 == excinfo.value.exit_code + assert "" == excinfo.value.stdout assert 0 == m_sleep.call_count # no retries @mock.patch("uaclient.util.time.sleep") def test_no_error_on_accepted_return_codes(self, m_sleep, _subp): """When rcs list includes the exit code, do not raise an error.""" with mock.patch("uaclient.util._subp", side_effect=_subp): - out, err = util.subp(["ls", "--bogus"], rcs=[2]) + out, _ = util.subp(["ls", "--bogus"], rcs=[2]) assert "" == out - assert "ls: unrecognized option '--bogus'" in err assert 0 == m_sleep.call_count # no retries @mock.patch("uaclient.util.time.sleep") @@ -386,17 +367,19 @@ def test_parse_os_release(self, caplog_text, tmpdir): """parse_os_release returns a dict of values from /etc/os-release.""" release_file = tmpdir.join("os-release") - release_file.write(OS_RELEASE_TRUSTY) + release_file.write(OS_RELEASE_XENIAL) expected = { "BUG_REPORT_URL": "http://bugs.launchpad.net/ubuntu/", "HOME_URL": "http://www.ubuntu.com/", "ID": "ubuntu", "ID_LIKE": "debian", "NAME": "Ubuntu", - "PRETTY_NAME": "Ubuntu 14.04.5 LTS", + "PRETTY_NAME": "Ubuntu 16.04.5 LTS", "SUPPORT_URL": "http://help.ubuntu.com/", - "VERSION": "14.04.5 LTS, Trusty Tahr", - "VERSION_ID": "14.04", + "UBUNTU_CODENAME": "xenial", + "VERSION": "16.04.5 LTS (Xenial Xerus)", + "VERSION_CODENAME": "xenial", + "VERSION_ID": "16.04", } assert expected == util.parse_os_release(release_file.strpath) # Add a 2nd call for lru_cache test @@ -432,7 +415,6 @@ @pytest.mark.parametrize( "series,release,version,os_release_content", [ - ("trusty", "14.04", "14.04 LTS (Trusty Tahr)", OS_RELEASE_TRUSTY), ("xenial", "16.04", "16.04 LTS (Xenial Xerus)", OS_RELEASE_XENIAL), ( "bionic", @@ -739,7 +721,7 @@ def test_get_machine_id_from_var_lib_dbus_machine_id( self, FakeConfig, tmpdir ): - """On trusty, machine id lives in of /var/lib/dbus/machine-id.""" + """fallback to /var/lib/dbus/machine-id""" etc_machine_id = tmpdir.join("etc-machine-id") dbus_machine_id = tmpdir.join("dbus-machine-id") assert "/var/lib/dbus/machine-id" == util.DBUS_MACHINE_ID diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/util.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/util.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/util.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/util.py 2022-05-18 19:44:15.000000000 +0000 @@ -26,7 +26,7 @@ from urllib import error, request from urllib.parse import urlparse -from uaclient import event_logger, exceptions, messages, status +from uaclient import event_logger, exceptions, messages from uaclient.types import MessagingOperations REBOOT_FILE_CHECK_PATH = "/var/run/reboot-required" @@ -139,7 +139,7 @@ def apply_contract_overrides( - orig_access: Dict[str, Any], series: str = None + orig_access: Dict[str, Any], series: Optional[str] = None ) 
-> None: """Apply series-specific overrides to an entitlement dict. @@ -299,8 +299,6 @@ def get_machine_id(cfg) -> str: """Get system's unique machine-id or create our own in data_dir.""" # Generate, cache our own uuid if not present in config or on the system - # Docker images do not define ETC_MACHINE_ID or DBUS_MACHINE_ID on trusty - # per Issue: #489 if cfg.machine_token: cfg_machine_id = cfg.machine_token.get("machineTokenInfo", {}).get( @@ -336,10 +334,7 @@ } version = os_release["VERSION"] - if ", " in version: - # Fix up trusty's version formatting - version = "{} ({})".format(*version.split(", ")) - # Strip off an LTS point release (14.04.1 LTS -> 14.04 LTS) + # Strip off an LTS point release (20.04.1 LTS -> 20.04 LTS) version = re.sub(r"\.\d LTS", " LTS", version) platform_info["version"] = version @@ -372,12 +367,16 @@ @lru_cache(maxsize=None) +def is_current_series_lts() -> bool: + series = get_platform_info()["series"] + return is_lts(series) + + +@lru_cache(maxsize=None) def is_active_esm(series: str) -> bool: """Return True when Ubuntu series supports ESM and is actively in ESM.""" if not is_lts(series): return False - if series == "trusty": - return True # Trusty doesn't have a --series param out, _err = subp( ["/usr/bin/ubuntu-distro-info", "--series", series, "-yeol"] ) @@ -493,7 +492,7 @@ if assume_yes: return True if not msg: - msg = status.PROMPT_YES_NO + msg = messages.PROMPT_YES_NO value = input(msg).lower().strip() if value == "": return default @@ -547,6 +546,7 @@ headers: Dict[str, str] = {}, method: Optional[str] = None, timeout: Optional[int] = None, + potentially_sensitive: bool = True, ) -> Tuple[Any, Union[HTTPMessage, Mapping[str, str]]]: if data and not method: method = "POST" @@ -578,13 +578,13 @@ sorted_header_str = ", ".join( ["'{}': '{}'".format(k, resp.headers[k]) for k in sorted(resp.headers)] ) - logging.debug( - redact_sensitive_logs( - "URL [{}] response: {}, headers: {{{}}}, data: {}".format( - method or "GET", url, sorted_header_str, content - ) - ) + debug_msg = "URL [{}] response: {}, headers: {{{}}}, data: {}".format( + method or "GET", url, sorted_header_str, content ) + if potentially_sensitive: + # For large responses, this is very slow (several minutes) + debug_msg = redact_sensitive_logs(debug_msg) + logging.debug(debug_msg) if http_error_found: raise resp return content, resp.headers @@ -868,7 +868,9 @@ return False -def handle_message_operations(msg_ops: Optional[MessagingOperations],) -> bool: +def handle_message_operations( + msg_ops: Optional[MessagingOperations], +) -> bool: """Emit messages to the console for user interaction :param msg_op: A list of strings or tuples. Any string items are printed. diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient/version.py ubuntu-advantage-tools-27.9~16.04.1/uaclient/version.py --- ubuntu-advantage-tools-27.8~16.04.1/uaclient/version.py 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient/version.py 2022-05-18 19:44:15.000000000 +0000 @@ -8,7 +8,7 @@ from uaclient import exceptions, util -__VERSION__ = "27.8" +__VERSION__ = "27.9" PACKAGED_VERSION = "@@PACKAGED_VERSION@@" VERSION_TMPL = "{version}{feature_suffix}" @@ -16,13 +16,13 @@ def get_version(_args=None, features={}): """Return the packaged version as a string - Prefer the binary PACKAGED_VESION set by debian/rules to DEB_VERSION. - If unavailable, check for a .git development environments: - a. 
If run in our upstream repo `git describe` will gives a leading - XX.Y so return the --long version to allow daily build recipes - to count commit offset from upstream's XX.Y signed tag. - b. If run in a git-ubuntu pkg repo, upstream tags aren't visible, - parse the debian/changelog in that case + Prefer the binary PACKAGED_VESION set by debian/rules to DEB_VERSION. + If unavailable, check for a .git development environments: + a. If run in our upstream repo `git describe` will gives a leading + XX.Y so return the --long version to allow daily build recipes + to count commit offset from upstream's XX.Y signed tag. + b. If run in a git-ubuntu pkg repo, upstream tags aren't visible, + parse the debian/changelog in that case """ feature_suffix = "" for key, value in sorted(features.items()): diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient.conf ubuntu-advantage-tools-27.9~16.04.1/uaclient.conf --- ubuntu-advantage-tools-27.8~16.04.1/uaclient.conf 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient.conf 2022-05-18 19:44:15.000000000 +0000 @@ -8,6 +8,7 @@ log_level: debug security_url: https://ubuntu.com/security timer_log_file: /var/log/ubuntu-advantage-timer.log +daemon_log_file: /var/log/ubuntu-advantage-daemon.log ua_config: apt_http_proxy: null apt_https_proxy: null diff -Nru ubuntu-advantage-tools-27.8~16.04.1/uaclient-devel.conf ubuntu-advantage-tools-27.9~16.04.1/uaclient-devel.conf --- ubuntu-advantage-tools-27.8~16.04.1/uaclient-devel.conf 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/uaclient-devel.conf 2022-05-18 19:44:15.000000000 +0000 @@ -5,7 +5,7 @@ log_level: debug security_url: https://ubuntu.com/security timer_log_file: ubuntu-advantage-timer-devel.log -license_check_log_file: ubuntu-advantage-license-check-devel.log +daemon_log_file: ubuntu-advantage-daemon-devel.log ua_config: apt_http_proxy: null apt_https_proxy: null diff -Nru ubuntu-advantage-tools-27.8~16.04.1/ubuntu-advantage.1 ubuntu-advantage-tools-27.9~16.04.1/ubuntu-advantage.1 --- ubuntu-advantage-tools-27.8~16.04.1/ubuntu-advantage.1 2022-04-14 18:32:30.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/ubuntu-advantage.1 2022-05-18 19:44:15.000000000 +0000 @@ -136,6 +136,15 @@ .B version Show version of the Ubuntu Advantage package. +.SH PRO UPGRADE DAEMON +UA client sets up a daemon on supported platforms (currently GCP only) to +detect if an Ubuntu Pro license is purchased for the machine. If a Pro license +is detected, then the machine is automatically attached. +If you are uninterested in UA services, you can safely stop and disable the +daemon using systemctl: + +sudo systemctl stop ubuntu-advantage.service +sudo systemctl disable ubuntu-advantage.service .SH TIMER JOBS UA client sets up a systemd timer to run jobs that need to be executed @@ -161,13 +170,6 @@ Makes sure the `ua status` command will have the latest information even when executed by a non-root user, updating the \fB/var/lib/ubuntu-advantage/status.json\fP file. -.TP -.B -\fBgcp_auto_attach\fP -Only operable on Google Cloud Platform (GCP) generic Ubuntu VMs without an -active Ubuntu Advantage license. It polls GCP metadata every 5 minutes to -discover if a license has been attached to the VM through Google Cloud and will -perform `ua auto-attach` in that case. 
.SH CONFIGURATION @@ -201,8 +203,8 @@ The log file for the Ubuntu Advantage timer and timer jobs .TP .B -\fBlicense_check_log_file\fP -The log file for the Ubuntu Advantage license check job (only used on GCP) +\fBdaemon_log_file\fP +The log file for the Ubuntu Advantage daemon .P \fBThe following options must be nested under the "ua_config" key:\fP @@ -246,7 +248,7 @@ by setting an environment variable prefaced by \fBUA_\fP. Both uppercase and lowercase environment variables are allowed. The configuration options that support this are: data_dir, log_file, timer_log_file, -license_check_log_file, log_level, and security_url. +daemon_log_file, log_level, and security_url. For example, the following overrides the log_level found in uaclient.conf: .PP diff -Nru ubuntu-advantage-tools-27.8~16.04.1/upstart/ua-auto-attach.conf ubuntu-advantage-tools-27.9~16.04.1/upstart/ua-auto-attach.conf --- ubuntu-advantage-tools-27.8~16.04.1/upstart/ua-auto-attach.conf 2020-10-15 14:52:16.000000000 +0000 +++ ubuntu-advantage-tools-27.9~16.04.1/upstart/ua-auto-attach.conf 1970-01-01 00:00:00.000000000 +0000 @@ -1,12 +0,0 @@ -# Ubuntu Advantage auto attach - -# This task is run on boot to automatically attach to Ubuntu Advantage -# for select images. - -description "Ubuntu Advantage auto attach" - -# Only start once cloud-init finishes detecting the DataSource on trusty -start on file FILE=/var/lib/cloud/data/result.json - -task -exec /usr/bin/ua auto-attach
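
The uaclient/util.py hunk above gives the URL helper a `potentially_sensitive` flag so callers can skip `redact_sensitive_logs`, which the inline comment notes is very slow (several minutes) on large responses. A minimal standalone sketch of that pattern, assuming an illustrative regex and helper names that are stand-ins rather than the uaclient API:

```python
import logging
import re

# Illustrative stand-in for uaclient's redaction helper; the real pattern
# list and function live in the uaclient codebase and are not reproduced here.
SENSITIVE_RE = re.compile(r"'resourceToken': '[^']+'")


def redact_sensitive_logs(msg: str) -> str:
    """Mask anything that looks like a token before it reaches the log."""
    return SENSITIVE_RE.sub("'resourceToken': '<REDACTED>'", msg)


def log_response(content: str, potentially_sensitive: bool = True) -> None:
    """Log response data, redacting only when it may contain secrets."""
    debug_msg = "URL response data: {}".format(content)
    if potentially_sensitive:
        # Regex redaction is slow on very large bodies, so callers that know
        # the payload holds no secrets can opt out (mirrors the hunk above).
        debug_msg = redact_sensitive_logs(debug_msg)
    logging.debug(debug_msg)


if __name__ == "__main__":
    logging.basicConfig(level=logging.DEBUG)
    log_response("{'resourceToken': 'abc123'}")  # token is masked
    log_response("large, already-public payload", potentially_sensitive=False)
```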
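
The same util.py diff also drops the trusty special case from `is_active_esm` and adds an `lru_cache`d `is_current_series_lts` helper; the remaining check shells out to `/usr/bin/ubuntu-distro-info --series <series> -yeol`, which appears verbatim in the hunk. A rough sketch of the days-until-EOL part of that check, assuming the distro-info package is installed; it omits the `is_lts` gate the real function applies first, and the function names here are illustrative, not the uaclient API:

```python
import subprocess
from functools import lru_cache


@lru_cache(maxsize=None)
def days_until_standard_eol(series: str) -> int:
    """Ask ubuntu-distro-info how many days of standard support remain."""
    # Same command the diff shows is_active_esm running via util.subp.
    result = subprocess.run(
        ["/usr/bin/ubuntu-distro-info", "--series", series, "-yeol"],
        capture_output=True,
        text=True,
        check=True,
    )
    return int(result.stdout.strip())


def is_active_esm(series: str) -> bool:
    """An LTS series is in active ESM once standard support has ended."""
    # The test parametrization above expects True at 0 days remaining,
    # False while days remain (e.g. ("xenial", True, 1, False)).
    return days_until_standard_eol(series) <= 0


if __name__ == "__main__":
    for series in ("xenial", "jammy"):
        print(series, is_active_esm(series))
```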