diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/bindep.txt ansible-5.2.0/ansible_collections/amazon/aws/bindep.txt
--- ansible-4.10.0/ansible_collections/amazon/aws/bindep.txt 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/bindep.txt 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,4 @@
+# Needed by the ec2_key integration tests (generating EC2 format fingerprint)
+openssl [test platform:rpm]
+gcc [test platform:rpm]
+python3-devel [test platform:rpm]
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/CHANGELOG.rst ansible-5.2.0/ansible_collections/amazon/aws/CHANGELOG.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/CHANGELOG.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/CHANGELOG.rst 2021-11-12 18:13:53.000000000 +0000
@@ -5,19 +5,158 @@
.. contents:: Topics
-v1.5.1
+v2.1.0
======
Minor Changes
-------------
+- aws_service_ip_ranges - add new option ``ipv6_prefixes`` to get only IPV6 addresses and prefixes for Amazon services (https://github.com/ansible-collections/amazon.aws/pull/430)
+- cloudformation - fix detection when there are no changes. Sometimes when there are no changes, the change set will have a status of ``FAILED`` with a ``StatusReason`` of "No updates are to be performed" (https://github.com/ansible-collections/amazon.aws/pull/507).
+- ec2_ami - add check_mode support (https://github.com/ansible-collections/amazon.aws/pull/516).
+- ec2_ami - use module_util helper for tagging AMIs (https://github.com/ansible-collections/amazon.aws/pull/520).
+- ec2_ami - when creating an AMI from an instance pass the tagging options at creation time (https://github.com/ansible-collections/amazon.aws/pull/551).
+- ec2_elb_lb - module renamed to ``elb_classic_lb`` (https://github.com/ansible-collections/amazon.aws/pull/377).
+- ec2_eni - add check mode support (https://github.com/ansible-collections/amazon.aws/pull/534).
+- ec2_eni - use module_util helper for tagging ENIs (https://github.com/ansible-collections/amazon.aws/pull/522).
+- ec2_instance - use module_util helpers for tagging (https://github.com/ansible-collections/amazon.aws/pull/527).
+- ec2_key - add support for tagging key pairs (https://github.com/ansible-collections/amazon.aws/pull/548).
+- ec2_snapshot - add check_mode support (https://github.com/ansible-collections/amazon.aws/pull/512).
+- ec2_vol - add check_mode support (https://github.com/ansible-collections/amazon.aws/pull/509).
+- ec2_vpc_dhcp_option - use module_util helpers for tagging (https://github.com/ansible-collections/amazon.aws/pull/531).
+- ec2_vpc_endpoint - added ``vpc_endpoint_security_groups`` parameter to support defining the security group attached to an interface endpoint (https://github.com/ansible-collections/amazon.aws/pull/544).
+- ec2_vpc_endpoint - added ``vpc_endpoint_subnets`` parameter to support defining the subnet attached to an interface or gateway endpoint (https://github.com/ansible-collections/amazon.aws/pull/544).
+- ec2_vpc_endpoint - use module_util helper for tagging (https://github.com/ansible-collections/amazon.aws/pull/525).
+- ec2_vpc_endpoint - use module_util helpers for tagging (https://github.com/ansible-collections/amazon.aws/pull/531).
+- ec2_vpc_igw - use module_util helper for tagging (https://github.com/ansible-collections/amazon.aws/pull/523).
+- ec2_vpc_igw - use module_util helpers for tagging (https://github.com/ansible-collections/amazon.aws/pull/531).
+- ec2_vpc_nat_gateway - use module_util helper for tagging (https://github.com/ansible-collections/amazon.aws/pull/524).
+- ec2_vpc_nat_gateway - use module_util helpers for tagging (https://github.com/ansible-collections/amazon.aws/pull/531).
+- elb_classic_lb - added retries on common AWS temporary API failures (https://github.com/ansible-collections/amazon.aws/pull/377).
+- elb_classic_lb - added support for check_mode (https://github.com/ansible-collections/amazon.aws/pull/377).
+- elb_classic_lb - added support for wait during creation (https://github.com/ansible-collections/amazon.aws/pull/377).
+- elb_classic_lb - added support for wait during instance addition and removal (https://github.com/ansible-collections/amazon.aws/pull/377).
+- elb_classic_lb - migrated to boto3 SDK (https://github.com/ansible-collections/amazon.aws/pull/377).
+- elb_classic_lb - various error messages changed due to refactor (https://github.com/ansible-collections/amazon.aws/pull/377).
+- module_utils.ec2 - moved generic tagging helpers into module_utils.tagging (https://github.com/ansible-collections/amazon.aws/pull/527).
+- module_utils.tagging - add new helper to generate TagSpecification lists (https://github.com/ansible-collections/amazon.aws/pull/527).
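Several of the entries above add tagging support. As a hedged illustration of the new ``ec2_key`` tagging support (the key name and tag values below are placeholders, not taken from the changelog):

```yaml
# Sketch only: create a key pair with tags (tagging support is new in 2.1.0).
# The key name and tag values are placeholders.
- name: Create a tagged EC2 key pair
  amazon.aws.ec2_key:
    name: example-key
    tags:
      Environment: testing
      Owner: ops-team
```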
+
+Deprecated Features
+-------------------
+
+- elb_classic_lb - setting of the ``ec2_elb`` fact has been deprecated and will be removed in release 4.0.0 of the collection. The module now returns ``elb``, which can be accessed using the ``register`` keyword (https://github.com/ansible-collections/amazon.aws/pull/552).
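The replacement pattern for the deprecated fact is to register the module result and read its ``elb`` key. A minimal sketch (the load balancer name, zone, and ports are placeholders):

```yaml
# Register the result instead of relying on the deprecated ec2_elb fact.
- name: Create a classic load balancer
  amazon.aws.elb_classic_lb:
    name: example-lb          # placeholder
    state: present
    zones:
      - us-east-1a
    listeners:
      - protocol: http
        load_balancer_port: 80
        instance_port: 80
  register: lb_result

- name: Inspect the returned load balancer details
  ansible.builtin.debug:
    var: lb_result.elb
```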
+
+Bugfixes
+--------
+
+- AWS action group - added missing ``ec2_instance_facts`` entry (https://github.com/ansible-collections/amazon.aws/issues/557)
+- ec2_ami - fix problem when creating an AMI from an instance with ephemeral volumes (https://github.com/ansible-collections/amazon.aws/issues/511).
+- ec2_instance - ensure that ec2_instance falls back to the ``tag:Name`` filter when neither a filter nor the ``name`` parameter is passed (https://github.com/ansible-collections/amazon.aws/issues/526).
+- s3_bucket - update error handling to better support DigitalOcean Space (https://github.com/ansible-collections/amazon.aws/issues/508).
+
+v2.0.0
+======
+
+Major Changes
+-------------
+
+- amazon.aws collection - Due to the AWS SDKs announcing the end of support for Python less than 3.6 (https://boto3.amazonaws.com/v1/documentation/api/1.17.64/guide/migrationpy3.html) this collection now requires Python 3.6+ (https://github.com/ansible-collections/amazon.aws/pull/298).
+- amazon.aws collection - The amazon.aws collection has dropped support for ``botocore<1.18.0`` and ``boto3<1.15.0``. Most modules will continue to work with older versions of the AWS SDK, however compatibility with older versions of the SDK is not guaranteed and will not be tested. When using older versions of the SDK a warning will be emitted by Ansible (https://github.com/ansible-collections/amazon.aws/pull/502).
+- ec2_instance - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_instance``.
+- ec2_instance_info - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_instance_info``.
+- ec2_vpc_endpoint - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_endpoint``.
+- ec2_vpc_endpoint_facts - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_endpoint_info``.
+- ec2_vpc_endpoint_info - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_endpoint_info``.
+- ec2_vpc_endpoint_service_info - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_endpoint_service_info``.
+- ec2_vpc_igw - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_igw``.
+- ec2_vpc_igw_facts - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_igw_facts``.
+- ec2_vpc_igw_info - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_igw_info``.
+- ec2_vpc_nat_gateway - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_nat_gateway``.
+- ec2_vpc_nat_gateway_facts - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_nat_gateway_info``.
+- ec2_vpc_nat_gateway_info - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_nat_gateway_info``.
+- ec2_vpc_route_table - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_route_table``.
+- ec2_vpc_route_table_facts - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_route_table_facts``.
+- ec2_vpc_route_table_info - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_route_table_info``.
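For playbooks already using Fully Qualified Collection Names, the migration is a one-line rename of the collection prefix. A sketch (the instance parameters below are placeholders):

```yaml
# Before: community.aws.ec2_instance
# After:  amazon.aws.ec2_instance
- name: Launch an instance with the migrated module
  amazon.aws.ec2_instance:
    name: example-instance              # placeholder
    instance_type: t3.micro             # placeholder
    image_id: ami-0123456789abcdef0     # placeholder AMI ID
```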
+
+Minor Changes
+-------------
+
+- aws_ec2 - use a generator rather than list comprehension (https://github.com/ansible-collections/amazon.aws/pull/465).
+- aws_s3 - Tests for compatibility with older versions of the AWS SDKs have been removed (https://github.com/ansible-collections/amazon.aws/pull/442).
+- aws_s3 - add ``tags`` and ``purge_tags`` features for an S3 object (https://github.com/ansible-collections/amazon.aws/pull/335)
+- aws_s3 - new mode to copy an existing object to another bucket (https://github.com/ansible-collections/amazon.aws/pull/359).
+- aws_secret - added support for gracefully handling deleted secrets (https://github.com/ansible-collections/amazon.aws/pull/455).
+- aws_ssm - add ``on_missing`` and ``on_denied`` options (https://github.com/ansible-collections/amazon.aws/pull/370).
+- cloudformation - Tests for compatibility with older versions of the AWS SDKs have been removed (https://github.com/ansible-collections/amazon.aws/pull/442).
+- ec2_ami - ensure tags are propagated to the snapshot(s) when creating an AMI (https://github.com/ansible-collections/amazon.aws/pull/437).
+- ec2_eni - fix idempotency when ``security_groups`` attribute is specified (https://github.com/ansible-collections/amazon.aws/pull/337).
+- ec2_eni - timeout increased when waiting for ENIs to finish detaching (https://github.com/ansible-collections/amazon.aws/pull/501).
+- ec2_group - Tests for compatibility with older versions of the AWS SDKs have been removed (https://github.com/ansible-collections/amazon.aws/pull/442).
+- ec2_group - use a generator rather than list comprehension (https://github.com/ansible-collections/amazon.aws/pull/465).
+- ec2_group - use system ipaddress module, available with Python >= 3.3, instead of vendored copy (https://github.com/ansible-collections/amazon.aws/pull/461).
+- ec2_instance - Tests for compatibility with older versions of the AWS SDKs have been removed (https://github.com/ansible-collections/amazon.aws/pull/442).
+- ec2_instance - add ``throughput`` parameter for gp3 volume types (https://github.com/ansible-collections/amazon.aws/pull/433).
+- ec2_instance - add support for controlling metadata options (https://github.com/ansible-collections/amazon.aws/pull/414).
- ec2_instance - remove unnecessary raise when exiting with a failure (https://github.com/ansible-collections/amazon.aws/pull/460).
+- ec2_instance_info - Tests for compatibility with older versions of the AWS SDKs have been removed (https://github.com/ansible-collections/amazon.aws/pull/442).
+- ec2_snapshot - migrated to use the boto3 python library (https://github.com/ansible-collections/amazon.aws/pull/356).
+- ec2_spot_instance_info - Added a new module that describes the specified Spot Instance requests (https://github.com/ansible-collections/amazon.aws/pull/487).
+- ec2_vol - add parameter ``multi_attach`` to support Multi-Attach on volume creation/update (https://github.com/ansible-collections/amazon.aws/pull/362).
+- ec2_vol - relax the boto3/botocore requirements and only require botocore 1.19.27 for modifying the ``throughput`` parameter (https://github.com/ansible-collections/amazon.aws/pull/346).
+- ec2_vpc_dhcp_option - Now also returns a boto3-style resource description in the ``dhcp_options`` result key. This includes any tags for the ``dhcp_options_id`` and has the same format as the current return value of ``ec2_vpc_dhcp_option_info``. (https://github.com/ansible-collections/amazon.aws/pull/252)
+- ec2_vpc_dhcp_option_info - Now also returns a user-friendly ``dhcp_config`` key that matches the historical ``new_config`` key from ec2_vpc_dhcp_option, and alleviates the need to use ``items2dict(key_name='key', value_name='values')`` when parsing the output of the module. (https://github.com/ansible-collections/amazon.aws/pull/252)
+- ec2_vpc_subnet - Tests for compatibility with older versions of the AWS SDKs have been removed (https://github.com/ansible-collections/amazon.aws/pull/442).
+- integration tests - remove dependency with collection ``community.general`` (https://github.com/ansible-collections/amazon.aws/pull/361).
+- module_utils/waiter - add RDS cluster ``cluster_available`` waiter (https://github.com/ansible-collections/amazon.aws/pull/464).
+- module_utils/waiter - add RDS cluster ``cluster_deleted`` waiter (https://github.com/ansible-collections/amazon.aws/pull/464).
+- module_utils/waiter - add Route53 ``resource_record_sets_changed`` waiter (https://github.com/ansible-collections/amazon.aws/pull/350).
+- s3_bucket - Tests for compatibility with older versions of the AWS SDKs have been removed (https://github.com/ansible-collections/amazon.aws/pull/442).
+- s3_bucket - add new option ``object_ownership`` to configure object ownership (https://github.com/ansible-collections/amazon.aws/pull/311)
+- s3_bucket - updated to use HeadBucket instead of ListBucket when testing for bucket existence (https://github.com/ansible-collections/amazon.aws/pull/357).
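As a hedged sketch of the new S3 object tagging options (bucket, key, and tag values are placeholders): with ``purge_tags: false``, existing tags not listed under ``tags`` are left in place.

```yaml
- name: Upload an object with tags
  amazon.aws.aws_s3:
    bucket: example-bucket      # placeholder
    object: /docs/readme.txt    # placeholder key
    src: /tmp/readme.txt        # placeholder local file
    mode: put
    tags:
      Project: demo
    purge_tags: false           # keep existing tags that are not listed above
```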
+
+Breaking Changes / Porting Guide
+--------------------------------
+
+- ec2_instance - instance wait for state behaviour has changed. If plays require the old behavior of waiting for the instance monitoring status to become ``OK`` when launching a new instance, the action will need to specify ``state: started`` (https://github.com/ansible-collections/amazon.aws/pull/481).
+- ec2_snapshot - support for waiting indefinitely has been dropped, new default is 10 minutes (https://github.com/ansible-collections/amazon.aws/pull/356).
+- ec2_vol_info - return ``attachment_set`` is now a list of attachments with Multi-Attach support on disk. (https://github.com/ansible-collections/amazon.aws/pull/362).
+- ec2_vpc_dhcp_option - The module has been refactored to use boto3. Keys and value types returned by the module are now consistent, which is a change from the previous behaviour. A ``purge_tags`` option has been added, which defaults to ``True``. (https://github.com/ansible-collections/amazon.aws/pull/252)
+- ec2_vpc_dhcp_option_info - Now preserves case for tag keys in return value. (https://github.com/ansible-collections/amazon.aws/pull/252)
+- module_utils.core - The boto3 switch has been removed from the region parameter (https://github.com/ansible-collections/amazon.aws/pull/287).
+- module_utils/compat - vendored copy of ipaddress removed (https://github.com/ansible-collections/amazon.aws/pull/461).
+- module_utils/core - updated the ``scrub_none_parameters`` function so that ``descend_into_lists`` is set to ``True`` by default (https://github.com/ansible-collections/amazon.aws/pull/297).
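For the ``ec2_instance`` wait-behaviour change above, plays that still need the pre-2.0.0 behaviour of waiting for the monitoring status to reach ``OK`` can request it explicitly. A sketch (the instance parameters are placeholders):

```yaml
- name: Launch and wait until the instance passes status checks
  amazon.aws.ec2_instance:
    name: example-instance              # placeholder
    state: started                      # waits for OK status, as pre-2.0.0 launches did
    instance_type: t3.micro             # placeholder
    image_id: ami-0123456789abcdef0     # placeholder AMI ID
```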
+
+Deprecated Features
+-------------------
+
+- ec2 - the boto based ``ec2`` module has been deprecated in favour of the boto3 based ``ec2_instance`` module. The ``ec2`` module will be removed in release 4.0.0 (https://github.com/ansible-collections/amazon.aws/pull/424).
+- ec2_vpc_dhcp_option - The ``new_config`` return key has been deprecated and will be removed in a future release. It will be replaced by ``dhcp_config``. Both values are returned in the interim. (https://github.com/ansible-collections/amazon.aws/pull/252)
Bugfixes
--------
+- aws_s3 - Fix upload permission when an S3 bucket ACL policy requires a particular canned ACL (https://github.com/ansible-collections/amazon.aws/pull/318)
+- ec2_ami - Fix ami issue when creating an ami with no_device parameter (https://github.com/ansible-collections/amazon.aws/pull/386)
+- ec2_instance - ``ec2_instance`` was waiting on EC2 instance monitoring status to be ``OK`` when launching a new instance. This could cause a play to wait multiple minutes for AWS's monitoring to complete status checks (https://github.com/ansible-collections/amazon.aws/pull/481).
+- ec2_snapshot - Fix snapshot issue when capturing a snapshot of a volume without tags (https://github.com/ansible-collections/amazon.aws/pull/383)
- ec2_vol - Fixes ``changed`` status when ``modify_volume`` is used, but no new disk is being attached. The module incorrectly reported that no change had occurred even when disks had been modified (iops, throughput, type, etc.). (https://github.com/ansible-collections/amazon.aws/issues/482).
- ec2_vol - fix iops setting and enforce iops/throughput parameters usage (https://github.com/ansible-collections/amazon.aws/pull/334)
+- inventory - ``include_filters`` won't be ignored anymore if ``filters`` is not set (https://github.com/ansible-collections/amazon.aws/issues/457).
+- s3_bucket - Fix error handling when attempting to set a feature that is not implemented (https://github.com/ansible-collections/amazon.aws/pull/391).
+- s3_bucket - Gracefully handle ``NotImplemented`` exceptions when fetching encryption settings (https://github.com/ansible-collections/amazon.aws/issues/390).
+
+New Modules
+-----------
+
+- ec2_spot_instance - request, stop, reboot or cancel spot instance
+- ec2_spot_instance_info - Gather information about ec2 spot instance requests
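A minimal sketch of the two new modules used together (the AMI ID and instance type are placeholders, and the ``spot_request`` return key is assumed from the module documentation rather than stated in this changelog):

```yaml
- name: Request a spot instance
  amazon.aws.ec2_spot_instance:
    launch_specification:
      image_id: ami-0123456789abcdef0   # placeholder
      instance_type: t3.micro           # placeholder
  register: spot

- name: Describe the resulting spot instance request
  amazon.aws.ec2_spot_instance_info:
    spot_instance_request_ids:
      - "{{ spot.spot_request.spot_instance_request_id }}"
```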
v1.5.0
======
@@ -54,6 +193,14 @@
- ec2_vol - create or update now preserves the existing tags, including Name (https://github.com/ansible-collections/amazon.aws/issues/229)
- ec2_vol - fix exception when platform information isn't available (https://github.com/ansible-collections/amazon.aws/issues/305).
+v1.4.1
+======
+
+Minor Changes
+-------------
+
+- module_utils - the ipaddress module utility has been vendored into this collection. This eliminates the collection dependency on ansible.netcommon (which had removed the library in its 2.0 release). The ipaddress library is provided for internal use in this collection only. (https://github.com/ansible-collections/amazon.aws/issues/273)
+
v1.4.0
======
@@ -66,7 +213,6 @@
- aws_secret - add ``bypath`` functionality (https://github.com/ansible-collections/amazon.aws/pull/192).
- ec2_key - add AWSRetry decorator to automatically retry on common temporary failures (https://github.com/ansible-collections/amazon.aws/pull/213).
- ec2_vol - Add support for gp3 volumes and support for modifying existing volumes (https://github.com/ansible-collections/amazon.aws/issues/55).
-- module_utils - the ipaddress module utility has been vendored into this collection. This eliminates the collection dependency on ansible.netcommon (which had removed the library in its 2.0 release). The ipaddress library is provided for internal use in this collection only. (https://github.com/ansible-collections/amazon.aws/issues/273)-
- module_utils/elbv2 - add logic to compare_rules to suit Values list nested within dicts unique to each field type. Fixes issue (https://github.com/ansible-collections/amazon.aws/issues/187)
- various AWS plugins and module_utils - Cleanup unused imports (https://github.com/ansible-collections/amazon.aws/pull/217).
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/changelogs/changelog.yaml ansible-5.2.0/ansible_collections/amazon/aws/changelogs/changelog.yaml
--- ansible-4.10.0/ansible_collections/amazon/aws/changelogs/changelog.yaml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/changelogs/changelog.yaml 2021-11-12 18:13:53.000000000 +0000
@@ -214,10 +214,6 @@
failures (https://github.com/ansible-collections/amazon.aws/pull/213).
- ec2_vol - Add support for gp3 volumes and support for modifying existing volumes
(https://github.com/ansible-collections/amazon.aws/issues/55).
- - module_utils - the ipaddress module utility has been vendored into this collection. This
- eliminates the collection dependency on ansible.netcommon (which had removed
- the library in its 2.0 release). The ipaddress library is provided for internal
- use in this collection only. (https://github.com/ansible-collections/amazon.aws/issues/273)-
- module_utils/elbv2 - add logic to compare_rules to suit Values list nested
within dicts unique to each field type. Fixes issue (https://github.com/ansible-collections/amazon.aws/issues/187)
- various AWS plugins and module_utils - Cleanup unused imports (https://github.com/ansible-collections/amazon.aws/pull/217).
@@ -231,6 +227,15 @@
- 237_replace_inverse_ec2_aws_filter.yaml
- 241_ec2_vol-returns-an-up-to-date-tag-dict-of-the-volume.yaml
- 25-aws_ec2-hostname-options-concatenation.yaml
+ release_date: '2021-02-05'
+ 1.4.1:
+ changes:
+ minor_changes:
+ - module_utils - the ipaddress module utility has been vendored into this collection. This
+ eliminates the collection dependency on ansible.netcommon (which had removed
+ the library in its 2.0 release). The ipaddress library is provided for internal
+      use in this collection only. (https://github.com/ansible-collections/amazon.aws/issues/273)
+ fragments:
- 273-vendor-ipaddress-utility.yml
release_date: '2021-03-05'
1.5.0:
@@ -292,18 +297,300 @@
- 57-aws_ec2-support-for-templates.yml
- ignore_212.yml
release_date: '2021-04-27'
- 1.5.1:
+ 2.0.0:
changes:
+ breaking_changes:
+ - 'ec2_instance - instance wait for state behaviour has changed. If plays require
+ the old behavior of waiting for the instance monitoring status to become ``OK``
+ when launching a new instance, the action will need to specify ``state: started``
+ (https://github.com/ansible-collections/amazon.aws/pull/481).'
+ - ec2_snapshot - support for waiting indefinitely has been dropped, new default
+ is 10 minutes (https://github.com/ansible-collections/amazon.aws/pull/356).
+ - ec2_vol_info - return ``attachment_set`` is now a list of attachments with
+ Multi-Attach support on disk. (https://github.com/ansible-collections/amazon.aws/pull/362).
+ - ec2_vpc_dhcp_option - The module has been refactored to use boto3. Keys and
+ value types returned by the module are now consistent, which is a change from
+ the previous behaviour. A ``purge_tags`` option has been added, which defaults
+ to ``True``. (https://github.com/ansible-collections/amazon.aws/pull/252)
+ - ec2_vpc_dhcp_option_info - Now preserves case for tag keys in return value.
+ (https://github.com/ansible-collections/amazon.aws/pull/252)
+ - module_utils.core - The boto3 switch has been removed from the region parameter
+ (https://github.com/ansible-collections/amazon.aws/pull/287).
+ - module_utils/compat - vendored copy of ipaddress removed (https://github.com/ansible-collections/amazon.aws/pull/461).
+ - module_utils/core - updated the ``scrub_none_parameters`` function so that
+ ``descend_into_lists`` is set to ``True`` by default (https://github.com/ansible-collections/amazon.aws/pull/297).
bugfixes:
+ - aws_s3 - Fix upload permission when an S3 bucket ACL policy requires a particular
+ canned ACL (https://github.com/ansible-collections/amazon.aws/pull/318)
+ - ec2_ami - Fix ami issue when creating an ami with no_device parameter (https://github.com/ansible-collections/amazon.aws/pull/386)
+ - ec2_instance - ``ec2_instance`` was waiting on EC2 instance monitoring status
+ to be ``OK`` when launching a new instance. This could cause a play to wait
+ multiple minutes for AWS's monitoring to complete status checks (https://github.com/ansible-collections/amazon.aws/pull/481).
+ - ec2_snapshot - Fix snapshot issue when capturing a snapshot of a volume without
+ tags (https://github.com/ansible-collections/amazon.aws/pull/383)
- ec2_vol - Fixes ``changed`` status when ``modify_volume`` is used, but no
new disk is being attached. The module incorrectly reported that no change
had occurred even when disks had been modified (iops, throughput, type, etc.).
(https://github.com/ansible-collections/amazon.aws/issues/482).
- ec2_vol - fix iops setting and enforce iops/throughput parameters usage (https://github.com/ansible-collections/amazon.aws/pull/334)
+ - inventory - ``include_filters`` won't be ignored anymore if ``filters`` is
+ not set (https://github.com/ansible-collections/amazon.aws/issues/457).
+ - s3_bucket - Fix error handling when attempting to set a feature that is not
+ implemented (https://github.com/ansible-collections/amazon.aws/pull/391).
+ - s3_bucket - Gracefully handle ``NotImplemented`` exceptions when fetching
+ encryption settings (https://github.com/ansible-collections/amazon.aws/issues/390).
+ deprecated_features:
+ - ec2 - the boto based ``ec2`` module has been deprecated in favour of the boto3
+ based ``ec2_instance`` module. The ``ec2`` module will be removed in release
+ 4.0.0 (https://github.com/ansible-collections/amazon.aws/pull/424).
+ - ec2_vpc_dhcp_option - The ``new_config`` return key has been deprecated and
+ will be removed in a future release. It will be replaced by ``dhcp_config``. Both
+ values are returned in the interim. (https://github.com/ansible-collections/amazon.aws/pull/252)
+ major_changes:
+ - amazon.aws collection - Due to the AWS SDKs announcing the end of support
+ for Python less than 3.6 (https://boto3.amazonaws.com/v1/documentation/api/1.17.64/guide/migrationpy3.html)
+ this collection now requires Python 3.6+ (https://github.com/ansible-collections/amazon.aws/pull/298).
+ - amazon.aws collection - The amazon.aws collection has dropped support for
+ ``botocore<1.18.0`` and ``boto3<1.15.0``. Most modules will continue to work
+      with older versions of the AWS SDK, however compatibility with older versions
+ of the SDK is not guaranteed and will not be tested. When using older versions
+ of the SDK a warning will be emitted by Ansible (https://github.com/ansible-collections/amazon.aws/pull/502).
+ - ec2_instance - The module has been migrated from the ``community.aws`` collection.
+ Playbooks using the Fully Qualified Collection Name for this module should
+ be updated to use ``amazon.aws.ec2_instance``.
+ - ec2_instance_info - The module has been migrated from the ``community.aws``
+ collection. Playbooks using the Fully Qualified Collection Name for this module
+ should be updated to use ``amazon.aws.ec2_instance_info``.
+ - ec2_vpc_endpoint - The module has been migrated from the ``community.aws``
+ collection. Playbooks using the Fully Qualified Collection Name for this module
+ should be updated to use ``amazon.aws.ec2_vpc_endpoint``.
+ - ec2_vpc_endpoint_facts - The module has been migrated from the ``community.aws``
+ collection. Playbooks using the Fully Qualified Collection Name for this module
+ should be updated to use ``amazon.aws.ec2_vpc_endpoint_info``.
+ - ec2_vpc_endpoint_info - The module has been migrated from the ``community.aws``
+ collection. Playbooks using the Fully Qualified Collection Name for this module
+ should be updated to use ``amazon.aws.ec2_vpc_endpoint_info``.
+ - ec2_vpc_endpoint_service_info - The module has been migrated from the ``community.aws``
+ collection. Playbooks using the Fully Qualified Collection Name for this module
+ should be updated to use ``amazon.aws.ec2_vpc_endpoint_service_info``.
+ - ec2_vpc_igw - The module has been migrated from the ``community.aws`` collection.
+ Playbooks using the Fully Qualified Collection Name for this module should
+ be updated to use ``amazon.aws.ec2_vpc_igw``.
+ - ec2_vpc_igw_facts - The module has been migrated from the ``community.aws``
+ collection. Playbooks using the Fully Qualified Collection Name for this module
+ should be updated to use ``amazon.aws.ec2_vpc_igw_facts``.
+ - ec2_vpc_igw_info - The module has been migrated from the ``community.aws``
+ collection. Playbooks using the Fully Qualified Collection Name for this module
+ should be updated to use ``amazon.aws.ec2_vpc_igw_info``.
+ - ec2_vpc_nat_gateway - The module has been migrated from the ``community.aws``
+ collection. Playbooks using the Fully Qualified Collection Name for this module
+ should be updated to use ``amazon.aws.ec2_vpc_nat_gateway``.
+ - ec2_vpc_nat_gateway_facts - The module has been migrated from the ``community.aws``
+ collection. Playbooks using the Fully Qualified Collection Name for this module
+ should be updated to use ``amazon.aws.ec2_vpc_nat_gateway_info``.
+ - ec2_vpc_nat_gateway_info - The module has been migrated from the ``community.aws``
+ collection. Playbooks using the Fully Qualified Collection Name for this module
+ should be updated to use ``amazon.aws.ec2_vpc_nat_gateway_info``.
+ - ec2_vpc_route_table - The module has been migrated from the ``community.aws``
+ collection. Playbooks using the Fully Qualified Collection Name for this module
+ should be updated to use ``amazon.aws.ec2_vpc_route_table``.
+ - ec2_vpc_route_table_facts - The module has been migrated from the ``community.aws``
+ collection. Playbooks using the Fully Qualified Collection Name for this module
+ should be updated to use ``amazon.aws.ec2_vpc_route_table_facts``.
+ - ec2_vpc_route_table_info - The module has been migrated from the ``community.aws``
+ collection. Playbooks using the Fully Qualified Collection Name for this module
+ should be updated to use ``amazon.aws.ec2_vpc_route_table_info``.
minor_changes:
+ - aws_ec2 - use a generator rather than list comprehension (https://github.com/ansible-collections/amazon.aws/pull/465).
+    - aws_s3 - Tests for compatibility with older versions of the AWS SDKs have
+      been removed (https://github.com/ansible-collections/amazon.aws/pull/442).
+ - aws_s3 - add ``tags`` and ``purge_tags`` features for an S3 object (https://github.com/ansible-collections/amazon.aws/pull/335)
+ - aws_s3 - new mode to copy an existing object to another bucket (https://github.com/ansible-collections/amazon.aws/pull/359).
+ - aws_secret - added support for gracefully handling deleted secrets (https://github.com/ansible-collections/amazon.aws/pull/455).
+ - aws_ssm - add ``on_missing`` and ``on_denied`` options (https://github.com/ansible-collections/amazon.aws/pull/370).
+ - cloudformation - Tests for compatibility with older versions of the AWS SDKs
+ have been removed (https://github.com/ansible-collections/amazon.aws/pull/442).
+ - ec2_ami - ensure tags are propagated to the snapshot(s) when creating an AMI
+ (https://github.com/ansible-collections/amazon.aws/pull/437).
+ - ec2_eni - fix idempotency when ``security_groups`` attribute is specified
+ (https://github.com/ansible-collections/amazon.aws/pull/337).
+ - ec2_eni - timeout increased when waiting for ENIs to finish detaching (https://github.com/ansible-collections/amazon.aws/pull/501).
+ - ec2_group - Tests for compatibility with older versions of the AWS SDKs have
+ been removed (https://github.com/ansible-collections/amazon.aws/pull/442).
+ - ec2_group - use a generator rather than list comprehension (https://github.com/ansible-collections/amazon.aws/pull/465).
+ - ec2_group - use system ipaddress module, available with Python >= 3.3, instead
+ of vendored copy (https://github.com/ansible-collections/amazon.aws/pull/461).
+ - ec2_instance - Tests for compatibility with older versions of the AWS SDKs
+ have been removed (https://github.com/ansible-collections/amazon.aws/pull/442).
+ - ec2_instance - add ``throughput`` parameter for gp3 volume types (https://github.com/ansible-collections/amazon.aws/pull/433).
+ - ec2_instance - add support for controlling metadata options (https://github.com/ansible-collections/amazon.aws/pull/414).
- ec2_instance - remove unnecessary raise when exiting with a failure (https://github.com/ansible-collections/amazon.aws/pull/460).
+ - ec2_instance_info - Tests for compatibility with older versions of the AWS
+ SDKs have been removed (https://github.com/ansible-collections/amazon.aws/pull/442).
+ - ec2_snapshot - migrated to use the boto3 python library (https://github.com/ansible-collections/amazon.aws/pull/356).
+ - ec2_spot_instance_info - Added a new module that describes the specified Spot
+ Instance requests (https://github.com/ansible-collections/amazon.aws/pull/487).
+ - ec2_vol - add parameter ``multi_attach`` to support Multi-Attach on volume
+ creation/update (https://github.com/ansible-collections/amazon.aws/pull/362).
+ - ec2_vol - relax the boto3/botocore requirements and only require botocore
+ 1.19.27 for modifying the ``throughput`` parameter (https://github.com/ansible-collections/amazon.aws/pull/346).
+ - ec2_vpc_dhcp_option - Now also returns a boto3-style resource description
+ in the ``dhcp_options`` result key. This includes any tags for the ``dhcp_options_id``
+ and has the same format as the current return value of ``ec2_vpc_dhcp_option_info``.
+ (https://github.com/ansible-collections/amazon.aws/pull/252)
+ - ec2_vpc_dhcp_option_info - Now also returns a user-friendly ``dhcp_config``
+ key that matches the historical ``new_config`` key from ec2_vpc_dhcp_option,
+ and alleviates the need to use ``items2dict(key_name='key', value_name='values')``
+ when parsing the output of the module. (https://github.com/ansible-collections/amazon.aws/pull/252)
+ - ec2_vpc_subnet - Tests for compatibility with older versions of the AWS SDKs
+ have been removed (https://github.com/ansible-collections/amazon.aws/pull/442).
+ - integration tests - remove dependency on the ``community.general`` collection
+ (https://github.com/ansible-collections/amazon.aws/pull/361).
+ - module_utils/waiter - add RDS cluster ``cluster_available`` waiter (https://github.com/ansible-collections/amazon.aws/pull/464).
+ - module_utils/waiter - add RDS cluster ``cluster_deleted`` waiter (https://github.com/ansible-collections/amazon.aws/pull/464).
+ - module_utils/waiter - add Route53 ``resource_record_sets_changed`` waiter
+ (https://github.com/ansible-collections/amazon.aws/pull/350).
+ - s3_bucket - Tests for compatibility with older versions of the AWS SDKs have
+ been removed (https://github.com/ansible-collections/amazon.aws/pull/442).
+ - s3_bucket - add new option ``object_ownership`` to configure object ownership
+ (https://github.com/ansible-collections/amazon.aws/pull/311)
+ - s3_bucket - updated to use HeadBucket instead of ListBucket when testing for
+ bucket existence (https://github.com/ansible-collections/amazon.aws/pull/357).
fragments:
+ - 252_boto3_refactor_ec2_vpc_dhcp_option.yaml
+ - 290-lint-cleanup.yml
+ - 297-scrub_none_parameters-descend-default.yml
+ - 298-python3.6.yml
+ - 311-s3_bucket-allow-object-ownership-configuration.yaml
+ - 318-s3-upload-acl.yml
- 334-ec2_vol-iops-and-throughput-issues.yaml
+ - 335-aws_s3-tagging-object-feature.yaml
+ - 337-ec2_eni-fix-idempotency-security-groups.yml
+ - 346-ec2_vol-boto3-requirements.yml
+ - 350-route53-waiter.yml
+ - 356-ec2_snapshot-boto3-migration.yml
+ - 357-s3_bucket-use-head.yml
+ - 359-aws_s3-add-copy-mode.yml
+ - 361-drop-community.general-support-for-integration.tests.yml
+ - 362-ec2_vol-add-multi-attach-parameter.yml
+ - 370-aws_ssm-add-on_missing-and-on_denied-option.yml
+ - 383_ec2_snapshot_tags.yml
+ - 386_ec2_ami_no_device.yml
+ - 391-s3_bucket-enc_notimplemented.yml
+ - 414-ec2_instance-support-controlling-metadata-options.yml
+ - 424-deprecate-ec2.yml
+ - 433-ec2_instance-throughput.yml
+ - 437-ec2_ami-propagate-tags-to-snapshot.yml
+ - 442-boto3-minimums.yml
+ - 455-lookup_aws_secret-deleted.yml
- 460-pylint.yml
- - 486-ec2_vol_fixed_returned_changed_var.yml
- release_date: '2021-09-09'
+ - 461-ipaddress.yml
+ - 464-rds_cluster-waiter.yml
+ - 465-pylint.yml
+ - 481-ec2_instance-wait_sanity.yml
+ - 483-ec2_vol_fix_returned_changed_var.yml
+ - 487-ec2_spot_instance_info-add-new-module.yml
+ - 501-ec2_eni-timeout.yml
+ - include_filters_with_filter.yaml
+ - migrate_ec2_instance.yml
+ - migrate_ec2_vpc_endpoint.yml
+ - migrate_ec2_vpc_igw.yml
+ - migrate_ec2_vpc_nat_gateway.yml
+ - migrate_ec2_vpc_route_table.yml
+ modules:
+ - description: Request, stop, reboot or cancel a spot instance
+ name: ec2_spot_instance
+ namespace: ''
+ - description: Gather information about ec2 spot instance requests
+ name: ec2_spot_instance_info
+ namespace: ''
+ release_date: '2021-09-03'
+ 2.1.0:
+ changes:
+ bugfixes:
+ - AWS action group - added missing ``ec2_instance_facts`` entry (https://github.com/ansible-collections/amazon.aws/issues/557)
+ - ec2_ami - fix problem when creating an AMI from an instance with ephemeral
+ volumes (https://github.com/ansible-collections/amazon.aws/issues/511).
+ - ec2_instance - ensure that ec2_instance falls back to the tag(Name) parameter
+ when no filter and no name parameter is passed (https://github.com/ansible-collections/amazon.aws/issues/526).
+ - s3_bucket - update error handling to better support DigitalOcean Space (https://github.com/ansible-collections/amazon.aws/issues/508).
+ deprecated_features:
+ - ec2_classic_lb - setting of the ``ec2_elb`` fact has been deprecated and will
+ be removed in release 4.0.0 of the collection. The module now returns ``elb``
+ which can be accessed using the register keyword (https://github.com/ansible-collections/amazon.aws/pull/552).
+ minor_changes:
+ - aws_service_ip_ranges - add new option ``ipv6_prefixes`` to get only IPV6
+ addresses and prefixes for Amazon services (https://github.com/ansible-collections/amazon.aws/pull/430)
+ - cloudformation - fix detection when there are no changes. Sometimes when there
+ are no changes, the change set will have a status of ``FAILED`` with a ``StatusReason``
+ of ``No updates are to be performed`` (https://github.com/ansible-collections/amazon.aws/pull/507).
+ - ec2_ami - add check_mode support (https://github.com/ansible-collections/amazon.aws/pull/516).
+ - ec2_ami - use module_util helper for tagging AMIs (https://github.com/ansible-collections/amazon.aws/pull/520).
+ - ec2_ami - when creating an AMI from an instance pass the tagging options at
+ creation time (https://github.com/ansible-collections/amazon.aws/pull/551).
+ - ec2_elb_lb - module renamed to ``elb_classic_lb`` (https://github.com/ansible-collections/amazon.aws/pull/377).
+ - ec2_eni - add check mode support (https://github.com/ansible-collections/amazon.aws/pull/534).
+ - ec2_eni - use module_util helper for tagging ENIs (https://github.com/ansible-collections/amazon.aws/pull/522).
+ - ec2_instance - use module_util helpers for tagging (https://github.com/ansible-collections/amazon.aws/pull/527).
+ - ec2_key - add support for tagging key pairs (https://github.com/ansible-collections/amazon.aws/pull/548).
+ - ec2_snapshot - add check_mode support (https://github.com/ansible-collections/amazon.aws/pull/512).
+ - ec2_vol - add check_mode support (https://github.com/ansible-collections/amazon.aws/pull/509).
+ - ec2_vpc_dhcp_option - use module_util helpers for tagging (https://github.com/ansible-collections/amazon.aws/pull/531).
+ - ec2_vpc_endpoint - added ``vpc_endpoint_security_groups`` parameter to support
+ defining the security group attached to an interface endpoint (https://github.com/ansible-collections/amazon.aws/pull/544).
+ - ec2_vpc_endpoint - added ``vpc_endpoint_subnets`` parameter to support defining
+ the subnet attached to an interface or gateway endpoint (https://github.com/ansible-collections/amazon.aws/pull/544).
+ - ec2_vpc_endpoint - use module_util helper for tagging (https://github.com/ansible-collections/amazon.aws/pull/525).
+ - ec2_vpc_endpoint - use module_util helpers for tagging (https://github.com/ansible-collections/amazon.aws/pull/531).
+ - ec2_vpc_igw - use module_util helper for tagging (https://github.com/ansible-collections/amazon.aws/pull/523).
+ - ec2_vpc_igw - use module_util helpers for tagging (https://github.com/ansible-collections/amazon.aws/pull/531).
+ - ec2_vpc_nat_gateway - use module_util helper for tagging (https://github.com/ansible-collections/amazon.aws/pull/524).
+ - ec2_vpc_nat_gateway - use module_util helpers for tagging (https://github.com/ansible-collections/amazon.aws/pull/531).
+ - elb_classic_lb - added retries on common AWS temporary API failures (https://github.com/ansible-collections/amazon.aws/pull/377).
+ - elb_classic_lb - added support for check_mode (https://github.com/ansible-collections/amazon.aws/pull/377).
+ - elb_classic_lb - added support for wait during creation (https://github.com/ansible-collections/amazon.aws/pull/377).
+ - elb_classic_lb - added support for wait during instance addition and removal
+ (https://github.com/ansible-collections/amazon.aws/pull/377).
+ - elb_classic_lb - migrated to boto3 SDK (https://github.com/ansible-collections/amazon.aws/pull/377).
+ - elb_classic_lb - various error messages changed due to refactor (https://github.com/ansible-collections/amazon.aws/pull/377).
+ - module_utils.ec2 - moved generic tagging helpers into module_utils.tagging
+ (https://github.com/ansible-collections/amazon.aws/pull/527).
+ - module_utils.tagging - add new helper to generate TagSpecification lists (https://github.com/ansible-collections/amazon.aws/pull/527).
+ fragments:
+ - 377-ec2_elb_lb-boto3.yml
+ - 430-add_support_for_ipv6_addresses.yml
+ - 507-fix_cloudformation_changeset_detection.yml
+ - 508-s3_bucket-digital_ocean.yml
+ - 509-ec2_vol_add_check_mode_support.yml
+ - 512-ec2_snapshot_add_check_mode_support.yml.yml
+ - 516-ec2_ami_add_check_mode_support.yml
+ - 520-ec2_ami-tagging.yml
+ - 522-ec2_eni-tagging.yml
+ - 523-ec2_vpc_igw-tagging.yml
+ - 524-ec2_vpc_nat_gateway-tagging.yml
+ - 525-ec2_vpc_endpoint-tagging.yml
+ - 526-ec2_instance_search_tags.yml
+ - 527-ec2_instance-tagging.yml
+ - 531-use_tags_handlers.yml
+ - 534-ec2_eni_add_check_mode_support.yml
+ - 544-vpc-endpoint-add-subnets-sg-option.yml
+ - 548-ec2_key-tagging.yml
+ - 551-ec2_ami-tag-on-create.yml
+ - 552-elb_classic_lb-fact.yml
+ - 557-action_group-missing-entry.yml
+ release_date: '2021-11-11'
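The migration and rename entries in the changelog above mean playbooks should move to the new Fully Qualified Collection Names. A minimal before/after sketch (the subnet ID is illustrative, and the old names generally keep working through collection redirects):

```yaml
# Before: module addressed through community.aws
- name: Ensure a NAT gateway exists
  community.aws.ec2_vpc_nat_gateway:
    subnet_id: subnet-0123456789abcdef0  # illustrative ID
    state: present

# After: the module now lives in amazon.aws
- name: Ensure a NAT gateway exists
  amazon.aws.ec2_vpc_nat_gateway:
    subnet_id: subnet-0123456789abcdef0
    state: present
```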
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/CONTRIBUTING.md ansible-5.2.0/ansible_collections/amazon/aws/CONTRIBUTING.md
--- ansible-4.10.0/ansible_collections/amazon/aws/CONTRIBUTING.md 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/CONTRIBUTING.md 2021-11-12 18:13:53.000000000 +0000
@@ -65,7 +65,7 @@
For general information on running the integration tests see the
[Integration Tests page of the Module Development Guide](https://docs.ansible.com/ansible/devel/dev_guide/testing_integration.html#testing-integration),
especially the section on configuration for cloud tests. For questions about writing tests the Ansible AWS community can
-be found on Freenode IRC as detailed below.
+be found on Libera.Chat IRC as detailed below.
### Code of Conduct
@@ -75,7 +75,7 @@
### IRC
Our IRC channels may require you to register your nickname. If you receive an error when you connect, see
-[Freenode's Nickname Registration guide](https://freenode.net/kb/answer/registration) for instructions
+[Libera.Chat's Nickname Registration guide](https://libera.chat/guides/registration) for instructions.
-The `#ansible-aws` channel on Freenode irc is the main and official place to discuss use and development
+The `#ansible-aws` channel on [irc.libera.chat](https://libera.chat/) is the main and official place to discuss use and development
of the `amazon.aws` collection.
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_account_attribute_lookup.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_account_attribute_lookup.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_account_attribute_lookup.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_account_attribute_lookup.rst 2021-11-12 18:13:53.000000000 +0000
@@ -24,8 +24,9 @@
------------
The below requirements are needed on the local Ansible controller node that executes this lookup.
+- python >= 3.6
- boto3
-- botocore
+- botocore >= 1.18.0
Parameters
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_az_info_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_az_info_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_az_info_module.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_az_info_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -26,10 +26,9 @@
------------
The below requirements are needed on the host that executes this module.
-- boto
-- boto3
-- botocore
-- python >= 2.6
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
Parameters
@@ -55,7 +54,7 @@
-
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_access_key, access_key
@@ -74,7 +73,7 @@
The location of a CA Bundle to use when validating SSL certificates.
-
Only used for boto3 based modules.
+
Not used by boto 2 based modules.
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
@@ -107,7 +106,7 @@
-
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_secret_key, secret_key
@@ -144,7 +143,7 @@
-
Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
aliases: aws_endpoint_url, endpoint_url
@@ -181,7 +180,6 @@
-
Uses a boto profile. Only works with boto >= 2.24.0.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
aliases: aws_profile
@@ -215,7 +213,7 @@
-
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
If profile is set this parameter is ignored.
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: aws_security_token, access_token
@@ -237,7 +235,7 @@
-
When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
@@ -249,8 +247,9 @@
.. note::
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_caller_info_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_caller_info_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_caller_info_module.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_caller_info_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -27,10 +27,9 @@
------------
The below requirements are needed on the host that executes this module.
-- boto
-- boto3
-- botocore
-- python >= 2.6
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
Parameters
@@ -56,7 +55,7 @@
-
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_access_key, access_key
@@ -75,7 +74,7 @@
The location of a CA Bundle to use when validating SSL certificates.
-
Only used for boto3 based modules.
+
Not used by boto 2 based modules.
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
@@ -108,7 +107,7 @@
-
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_secret_key, secret_key
@@ -145,7 +144,7 @@
-
Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
aliases: aws_endpoint_url, endpoint_url
@@ -161,7 +160,6 @@
-
Uses a boto profile. Only works with boto >= 2.24.0.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
aliases: aws_profile
@@ -195,7 +193,7 @@
-
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
If profile is set this parameter is ignored.
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: aws_security_token, access_token
@@ -217,7 +215,7 @@
-
When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
@@ -229,8 +227,9 @@
.. note::
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_ec2_inventory.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_ec2_inventory.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_ec2_inventory.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_ec2_inventory.rst 2021-11-12 18:13:53.000000000 +0000
@@ -36,13 +36,13 @@
This is not the default as such names break certain functionality as not all characters are valid Python identifiers which group names end up being used as.
+
+ use_extra_vars (boolean, added in 2.11)
+ Choices: no ← / yes
+ ini entries: [inventory_plugins] use_extra_vars = no
+ env: ANSIBLE_INVENTORY_USE_EXTRA_VARS
+ Merge extra vars into the available variables for composition (highest precedence).
+
@@ -650,6 +821,23 @@
- tag:Name:
- 'my_first_tag'
+ # Example using groups to assign the running hosts to a group based on vpc_id
+ plugin: aws_ec2
+ boto_profile: aws_profile
+ # Populate inventory with instances in these regions
+ regions:
+ - us-east-2
+ filters:
+ # All instances with their state as `running`
+ instance-state-name: running
+ keyed_groups:
+ - prefix: tag
+ key: tags
+ compose:
+ ansible_host: public_dns_name
+ groups:
+ libvpc: vpc_id == 'vpc-####'
+
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_rds_inventory.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_rds_inventory.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_rds_inventory.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_rds_inventory.rst 2021-11-12 18:13:53.000000000 +0000
@@ -36,13 +36,13 @@
Add hosts to group based on the values of a variable.
+
+ default_value (string, added in 2.12)
+ The default value when the host variable's value is an empty string.
+ This option is mutually exclusive with trailing_separator.
+
+ key (string)
+ The key from input dictionary used to generate groups.
+
+ parent_group (string)
+ Parent group for keyed group.
+
+ prefix (string, default: "")
+ A keyed group name will start with this prefix.
+
+ separator (string, default: "_")
+ Separator used to build the keyed group name.
+
+ trailing_separator (boolean, added in 2.12, choices: no / yes ←)
+ Set this option to False to omit the separator after the host variable when the value is an empty string.
+ This option is mutually exclusive with default_value.
+
+ leading_separator (boolean, added in 2.11, default: "yes")
+ Use in conjunction with keyed_groups.
+ By default, a keyed group that does not have a prefix or a separator provided will have a name that starts with an underscore.
+ This is because the default prefix is "" and the default separator is "_".
+ Set this option to False to omit the leading underscore (or other separator) if no prefix is given.
+ If the group name is derived from a mapping the separator is still used to concatenate the items.
+ To not use a separator in the group name at all, set the separator for the keyed group to an empty string instead.
+
regions
@@ -372,7 +517,7 @@
-
+
statuses
@@ -390,7 +535,7 @@
-
+
strict
@@ -412,7 +557,7 @@
-
+
strict_permissions
@@ -432,6 +577,32 @@
By default if an AccessDenied exception is encountered this plugin will fail. You can set strict_permissions to False in the inventory config file which will allow the restrictions to be gracefully skipped.
+
+ use_extra_vars (boolean, added in 2.11)
+ Choices: no ← / yes
+ ini entries: [inventory_plugins] use_extra_vars = no
+ env: ANSIBLE_INVENTORY_USE_EXTRA_VARS
+ Merge extra vars into the available variables for composition (highest precedence).
+
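The keyed_groups options documented above combine as in this minimal aws_rds inventory sketch (the region and tag key are illustrative):

```yaml
# aws_rds.yml (inventory source)
plugin: aws_rds
regions:
  - us-east-1
keyed_groups:
  # Produces groups such as rds_mysql or rds_postgres
  - prefix: rds
    key: engine
  # Hosts whose Environment tag is an empty string land in env_unknown
  - prefix: env
    key: tags.Environment
    default_value: unknown
```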
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_s3_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_s3_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_s3_module.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_s3_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -17,7 +17,7 @@
Synopsis
--------
-- This module allows the user to manage S3 buckets and the objects within them. Includes support for creating and deleting both objects and buckets, retrieving objects as files or strings and generating download links. This module has a dependency on boto3 and botocore.
+- This module allows the user to manage S3 buckets and the objects within them. Includes support for creating and deleting both objects and buckets, retrieving objects as files or strings, generating download links and copying objects that are already stored in Amazon S3.
@@ -25,10 +25,9 @@
------------
The below requirements are needed on the host that executes this module.
-- boto
-- boto3
-- botocore
-- python >= 2.6
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
Parameters
@@ -38,12 +37,12 @@
-
Parameter
+
Parameter
Choices/Defaults
Comments
-
+
aws_access_key
@@ -54,14 +53,14 @@
-
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_access_key, access_key
-
+
aws_ca_bundle
@@ -73,12 +72,12 @@
The location of a CA Bundle to use when validating SSL certificates.
-
Only used for boto3 based modules.
+
Not used by boto 2 based modules.
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
-
+
aws_config
@@ -95,7 +94,7 @@
-
+
aws_secret_key
@@ -106,14 +105,14 @@
-
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_secret_key, secret_key
-
+
bucket
@@ -129,7 +128,7 @@
-
+
content
@@ -141,13 +140,13 @@
-
The content to PUT into an object.
+
The content to PUT into an object.
The parameter value will be treated as a string and converted to UTF-8 before sending it to S3. To send binary data, use the content_base64 parameter instead.
-
Either content, content_base64 or src must be specified for a PUT operation. Ignored otherwise.
+
Either content, content_base64 or src must be specified for a PUT operation. Ignored otherwise.
-
+
content_base64
@@ -159,14 +158,82 @@
-
The base64-encoded binary data to PUT into an object.
+
The base64-encoded binary data to PUT into an object.
Use this if you need to put raw binary data, and don't forget to encode in base64.
-
Either content, content_base64 or src must be specified for a PUT operation. Ignored otherwise.
+
Either content, content_base64 or src must be specified for a PUT operation. Ignored otherwise.
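As a hedged sketch of the content_base64 parameter described above (the bucket and key names are placeholders, and the base64 value is generated inline for illustration):

```yaml
- name: Put raw binary data into an object using content_base64
  amazon.aws.aws_s3:
    bucket: mybucket                # placeholder bucket name
    object: /binary/key.bin        # placeholder key name
    content_base64: "{{ 'raw payload' | b64encode }}"  # illustrative data
    mode: put
```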
+
+
+
+
+
+ copy_src
+
+
+ dictionary
+
+
added in 2.0.0
+
+
+
+
+
The source details of the object to copy.
+
Required if mode is copy.
+
+
+
+
+
+
+ bucket
+
+
+ string
+ / required
+
+
+
+
+
+
The name of the source bucket.
+
+
+
+
+
+
+ object
+
+
+ string
+ / required
+
+
+
+
+
+
Key name of the source object.
+
+ version_id
+
+
+ string
+
+
+
+
+
+
Version ID of the source object.
+
+
+
+
+
+
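Assuming the copy_src suboptions documented above (bucket, object, and the optional version_id), a copy of a specific object version might look like the following sketch; all bucket names, keys, and the version ID are placeholders:

```yaml
- name: Copy a specific version of an object from another bucket
  amazon.aws.aws_s3:
    bucket: destbucket              # destination bucket (placeholder)
    object: /my/desired/key.txt
    mode: copy
    copy_src:
      bucket: srcbucket             # source bucket (placeholder)
      object: /source/key.txt
      version_id: "EXAMPLEVERSIONID"  # hypothetical version ID
```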
debug_botocore_endpoint_logs
@@ -184,7 +251,7 @@
-
+
dest
@@ -195,11 +262,11 @@
-
The destination file path when downloading an object/key with a GET operation.
+
The destination file path when downloading an object/key with a GET operation.
-
+
dualstack
@@ -215,11 +282,10 @@
Enables Amazon S3 Dual-Stack Endpoints, allowing S3 communications using both IPv4 and IPv6.
-
Requires at least botocore version 1.4.45.
-
+
ec2_url
@@ -230,12 +296,12 @@
-
Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
aliases: aws_endpoint_url, endpoint_url
-
+
encrypt
@@ -250,11 +316,11 @@
-
When set for PUT mode, asks for server-side encryption.
+
When set for PUT/COPY mode, asks for server-side encryption.
-
+
encryption_kms_key_id
@@ -269,7 +335,7 @@
-
+
encryption_mode
@@ -288,7 +354,7 @@
-
+
expiry
@@ -305,7 +371,7 @@
-
+
headers
@@ -316,11 +382,11 @@
-
Custom headers for PUT operation, as a dictionary of key=value and key=value,key=value.
+
Custom headers for PUT operation, as a dictionary of key=value and key=value,key=value.
-
+
ignore_nonexistent_bucket
@@ -335,11 +401,11 @@
-
Overrides initial bucket lookups in case bucket or iam policies are restrictive. Example: a user may have the GetObject permission but no other permissions. In this case using the option mode: get will fail without specifying ignore_nonexistent_bucket=true.
+
Overrides initial bucket lookups in case bucket or iam policies are restrictive. Example: a user may have the GetObject permission but no other permissions. In this case using the option mode: get will fail without specifying ignore_nonexistent_bucket=true.
-
+
marker
@@ -354,7 +420,7 @@
-
+
max_keys
@@ -370,7 +436,7 @@
-
+
metadata
@@ -381,11 +447,11 @@
-
Metadata for PUT operation, as a dictionary of key=value and key=value,key=value.
+
Metadata for PUT/COPY operation, as a dictionary of key=value and key=value,key=value.
-
+
mode
@@ -404,14 +470,24 @@
getstr
delobj
list
+
copy
-
Switches the module behaviour between put (upload), get (download), geturl (return download url, Ansible 1.3+), getstr (download object as string (1.3+)), list (list keys, Ansible 2.0+), create (bucket), delete (bucket), and delobj (delete object, Ansible 2.0+).
+
Switches the module behaviour between
+
PUT: upload
+
GET: download
+
geturl: return download URL
+
getstr: download object as string
+
list: list keys
+
create: create bucket
+
delete: delete bucket
+
delobj: delete object
+
copy: copy an object that is already stored in another bucket
-
+
object
@@ -426,7 +502,7 @@
-
+
overwrite
@@ -438,7 +514,7 @@
Default:
"always"
-
Force overwrite either locally on the filesystem or remotely with the object/key. Used with PUT and GET operations.
+
Force overwrite either locally on the filesystem or remotely with the object/key. Used with PUT and GET operations.
Must be a Boolean, always, never or different.
true is the same as always.
false is equal to never.
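Building on the overwrite choices above, a GET that only downloads when the remote object differs from the local copy could be sketched as follows (bucket and paths are placeholders):

```yaml
- name: Download only if the object differs from the local file
  amazon.aws.aws_s3:
    bucket: mybucket                # placeholder bucket name
    object: /my/desired/key.txt
    dest: /usr/local/key.txt        # placeholder local path
    mode: get
    overwrite: different            # skip the download when checksums match
```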
@@ -447,7 +523,7 @@
-
+
permission
@@ -460,11 +536,11 @@
Default:
["private"]
-
This option lets the user set the canned permissions on the object/bucket that are created. The permissions that can be set are private, public-read, public-read-write, authenticated-read for a bucket or private, public-read, public-read-write, aws-exec-read, authenticated-read, bucket-owner-read, bucket-owner-full-control for an object. Multiple permissions can be specified as a list.
+
This option lets the user set the canned permissions on the object/bucket that are created. The permissions that can be set are private, public-read, public-read-write, authenticated-read for a bucket or private, public-read, public-read-write, aws-exec-read, authenticated-read, bucket-owner-read, bucket-owner-full-control for an object. Multiple permissions can be specified as a list, although only the first one will be used during the initial upload of the file.
-
+
prefix
@@ -480,7 +556,7 @@
-
+
profile
@@ -491,14 +567,34 @@
-
Uses a boto profile. Only works with boto >= 2.24.0.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
aliases: aws_profile
-
+
+
+ purge_tags
+
+
+ boolean
+
+
added in 2.0.0
+
+
+
Choices:
+
no
+
yes ←
+
+
+
+
Whether or not to remove tags assigned to the S3 object if not specified in the playbook.
+
To remove all tags, set tags to an empty dictionary in conjunction with this option.
+
+
+
+
region
@@ -514,7 +610,7 @@
-
+
retries
@@ -531,7 +627,7 @@
-
+
rgw
@@ -550,7 +646,7 @@
-
+
s3_url
@@ -566,7 +662,7 @@
-
+
security_token
@@ -577,14 +673,14 @@
-
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
If profile is set this parameter is ignored.
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: aws_security_token, access_token
-
+
src
@@ -595,12 +691,28 @@
-
The source file path when performing a PUT operation.
-
Either content, content_base64 or src must be specified for a PUT operation. Ignored otherwise.
+
The source file path when performing a PUT operation.
+
Either content, content_base64 or src must be specified for a PUT operation. Ignored otherwise.
-
+
+
+ tags
+
+
+ dictionary
+
+
added in 2.0.0
+
+
+
+
+
Tags dict to apply to the S3 object.
+
+
+
+
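Combining the tags parameter above with the purge_tags option documented earlier, a tagged upload could be sketched as follows (bucket name, key, local path, and tag values are all placeholders):

```yaml
- name: Upload an object and replace any existing tags
  amazon.aws.aws_s3:
    bucket: mybucket                # placeholder bucket name
    object: /my/desired/key.txt
    src: /usr/local/myfile.txt      # placeholder local path
    mode: put
    tags:
      environment: production       # illustrative tag values
      owner: data-team
    purge_tags: yes                 # remove any tags not listed above
```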
validate_certs
@@ -615,11 +727,11 @@
-
When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
-
+
version
@@ -642,8 +754,9 @@
.. note::
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
@@ -751,6 +864,15 @@
object: /my/desired/key.txt
mode: delobj
+ - name: Copy an object already stored in another bucket
+ amazon.aws.aws_s3:
+ bucket: mybucket
+ object: /my/desired/key.txt
+ mode: copy
+ copy_src:
+ bucket: srcbucket
+ object: /source/key.txt
+
Return Values
@@ -864,3 +986,4 @@
- Lester Wade (@lwade)
- Sloane Hertel (@s-hertel)
+- Alina Buzachis (@linabuzachis)
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_secret_lookup.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_secret_lookup.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_secret_lookup.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_secret_lookup.rst 2021-11-12 18:13:53.000000000 +0000
@@ -26,8 +26,9 @@
------------
The below requirements are needed on the local Ansible controller node that executes this lookup.
+- python >= 3.6
- boto3
-- botocore>=1.10.0
+- botocore >= 1.18.0
Parameters
@@ -203,6 +204,32 @@
+ on_deleted
+
+
+ string
+
+
added in 2.0.0
+
+
+
Choices:
+
error ←
+
skip
+
warn
+
+
+
+
+
+
Action to take if the secret has been marked for deletion.
+
error will raise a fatal error when the secret has been marked for deletion.
+
skip will silently ignore the deleted secret.
+
warn will skip over the deleted secret but issue a warning.
+
+
+
+
+
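Assuming the on_deleted choices described above can be passed as a lookup keyword argument, tolerating a secret that is pending deletion might look like this sketch (the secret name is a placeholder):

```yaml
- name: Look up a secret but only warn if it has been marked for deletion
  debug:
    msg: "{{ lookup('amazon.aws.aws_secret', 'prod/db_password', on_deleted='warn') }}"
```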
on_denied
@@ -339,6 +366,14 @@
debug: msg="{{ lookup('amazon.aws.aws_secret', 'secrets.environments.production.password', nested=true) }}"
# The secret can be queried using the following syntax: `aws_secret_object_name.key1.key2.key3`.
# If an object is of the form `{"key1":{"key2":{"key3":1}}}` the query would return the value `1`.
+ - name: lookup secretsmanager secret in a specific region using the specified aws profile and the nested feature
+ debug: >
+ msg="{{ lookup('amazon.aws.aws_secret', 'secrets.environments.production.password', region=region, aws_profile=aws_profile,
+ aws_access_key=aws_access_key, aws_secret_key=aws_secret_key, nested=true) }}"
+ # The secret can be queried using the following syntax: `aws_secret_object_name.key1.key2.key3`.
+ # If an object is of the form `{"key1":{"key2":{"key3":1}}}` the query would return the value `1`.
+ # Region is the AWS region where the AWS secret is stored.
+ # aws_profile is the AWS profile to use that has access to the AWS secret.
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_service_ip_ranges_lookup.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_service_ip_ranges_lookup.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_service_ip_ranges_lookup.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_service_ip_ranges_lookup.rst 2021-11-12 18:13:53.000000000 +0000
@@ -43,6 +43,24 @@
+ ipv6_prefixes
+
+
+ -
+
+
added in 2.1.0
+
+
+
+
+
+
+
When ipv6_prefixes=True the lookup will return IPv6 addresses instead of IPv4 addresses.
+
+
+
+
+
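A sketch of the ipv6_prefixes option added in 2.1.0 (the region and service values are illustrative):

```yaml
- name: Return the IPv6 ranges for a service in a region
  debug:
    msg: "{{ lookup('amazon.aws.aws_service_ip_ranges', region='us-east-1', service='S3', ipv6_prefixes=True) }}"
```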
region
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_ssm_lookup.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_ssm_lookup.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_ssm_lookup.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.aws_ssm_lookup.rst 2021-11-12 18:13:53.000000000 +0000
@@ -27,8 +27,9 @@
------------
The below requirements are needed on the local Ansible controller node that executes this lookup.
+- python >= 3.6
- boto3
-- botocore
+- botocore >= 1.18.0
Parameters
@@ -82,6 +83,58 @@
+ on_denied
+
+
+ string
+
+
added in 2.0.0
+
+
+
Choices:
+
error ←
+
skip
+
warn
+
+
+
+
+
+
Action to take if access to the SSM parameter is denied.
+
error will raise a fatal error when access to the SSM parameter is denied.
+
skip will silently ignore the denied SSM parameter.
+
warn will skip over the denied SSM parameter but issue a warning.
+
+
+
+
+
+ on_missing
+
+
+ string
+
+
added in 2.0.0
+
+
+
Choices:
+
error ←
+
skip
+
warn
+
+
+
+
+
+
Action to take if the SSM parameter is missing.
+
error will raise a fatal error when the SSM parameter is missing.
+
skip will silently ignore the missing SSM parameter.
+
warn will skip over the missing SSM parameter but issue a warning.
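Assuming the on_missing choices above can be passed as a lookup keyword argument, silently skipping an absent parameter might look like this sketch (the parameter path is a placeholder):

```yaml
- name: Look up an SSM parameter, ignoring it if it does not exist
  debug:
    msg: "{{ lookup('amazon.aws.aws_ssm', '/hypothetical/app/token', on_missing='skip') }}"
```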
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_access_key, access_key
@@ -92,7 +92,7 @@
The location of a CA Bundle to use when validating SSL certificates.
-
Only used for boto3 based modules.
+
Not used by boto 2 based modules.
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
@@ -125,7 +125,7 @@
-
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_secret_key, secret_key
@@ -162,7 +162,7 @@
-
Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
aliases: aws_endpoint_url, endpoint_url
@@ -178,7 +178,6 @@
-
Uses a boto profile. Only works with boto >= 2.24.0.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
aliases: aws_profile
@@ -212,7 +211,7 @@
-
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
If profile is set this parameter is ignored.
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: aws_security_token, access_token
@@ -344,7 +343,7 @@
-
When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
@@ -356,8 +355,9 @@
.. note::
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.cloudformation_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.cloudformation_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.cloudformation_module.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.cloudformation_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -25,10 +25,9 @@
------------
The below requirements are needed on the host that executes this module.
-- boto
-- boto3
-- botocore>=1.5.45
-- python >= 2.6
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
Parameters
@@ -54,7 +53,7 @@
-
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_access_key, access_key
@@ -73,7 +72,7 @@
The location of a CA Bundle to use when validating SSL certificates.
-
Only used for boto3 based modules.
+
Not used by boto 2 based modules.
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
@@ -106,7 +105,7 @@
-
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_secret_key, secret_key
@@ -281,7 +280,7 @@
-
Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
aliases: aws_endpoint_url, endpoint_url
@@ -348,7 +347,6 @@
-
Uses a boto profile. Only works with boto >= 2.24.0.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
aliases: aws_profile
@@ -397,7 +395,7 @@
-
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
If profile is set this parameter is ignored.
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: aws_security_token, access_token
@@ -604,7 +602,7 @@
-
Enable or disable termination protection on the stack. Only works with botocore >= 1.7.18.
+
Enable or disable termination protection on the stack.
@@ -623,7 +621,7 @@
-
When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
@@ -636,8 +634,9 @@
.. note::
- CloudFormation features change often, and this module tries to keep up. That means your botocore version should be fresh. The version listed in the requirements is the oldest version that works with the module as a whole. Some features may require recent versions, and we do not pinpoint a minimum version for each feature. Instead of relying on the minimum version, keep botocore up to date. AWS is always releasing features and fixing bugs.
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_ami_info_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_ami_info_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_ami_info_module.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_ami_info_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -26,9 +26,9 @@
------------
The below requirements are needed on the host that executes this module.
-- boto
-- boto3
-- python >= 2.6
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
Parameters
@@ -54,7 +54,7 @@
-
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_access_key, access_key
@@ -73,7 +73,7 @@
The location of a CA Bundle to use when validating SSL certificates.
-
Only used for boto3 based modules.
+
Not used by boto 2 based modules.
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
@@ -106,7 +106,7 @@
-
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_secret_key, secret_key
@@ -162,7 +162,7 @@
-
Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
aliases: aws_endpoint_url, endpoint_url
@@ -246,7 +246,6 @@
-
Uses a boto profile. Only works with boto >= 2.24.0.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
aliases: aws_profile
@@ -280,7 +279,7 @@
-
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
If profile is set this parameter is ignored.
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: aws_security_token, access_token
@@ -302,7 +301,7 @@
-
When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
@@ -314,8 +313,9 @@
.. note::
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_ami_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_ami_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_ami_module.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_ami_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -25,8 +25,9 @@
------------
The below requirements are needed on the host that executes this module.
-- python >= 2.6
-- boto
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
Parameters
@@ -68,7 +69,7 @@
-
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_access_key, access_key
@@ -87,7 +88,7 @@
The location of a CA Bundle to use when validating SSL certificates.
-
Only used for boto3 based modules.
+
Not used by boto 2 based modules.
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
@@ -120,7 +121,7 @@
-
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_secret_key, secret_key
@@ -388,7 +389,7 @@
-
Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
aliases: aws_endpoint_url, endpoint_url
@@ -533,7 +534,6 @@
-
Uses a boto profile. Only works with boto >= 2.24.0.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
aliases: aws_profile
@@ -616,7 +616,7 @@
-
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
If profile is set this parameter is ignored.
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: aws_security_token, access_token
@@ -687,7 +687,7 @@
-
When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
@@ -750,8 +750,9 @@
.. note::
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
+ - When no credentials are explicitly provided, the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
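The credential fallback described in the notes above can be exercised either way. A minimal sketch (the module name ``ec2_eni_info`` and the variable names are illustrative only, not taken from this hunk):

```yaml
# Option 1: pass credentials explicitly as module parameters.
- amazon.aws.ec2_eni_info:
    aws_access_key: "{{ my_access_key }}"
    aws_secret_key: "{{ my_secret_key }}"
    region: us-east-1

# Option 2: set nothing in the task and rely on the documented fallback:
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables, then
# AWS_PROFILE, then the boto3 configuration files (~/.aws/credentials).
- amazon.aws.ec2_eni_info:
    region: us-east-1
```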
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_elb_lb_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_elb_lb_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_elb_lb_module.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_elb_lb_module.rst 1970-01-01 00:00:00.000000000 +0000
@@ -1,829 +0,0 @@
-.. _amazon.aws.ec2_elb_lb_module:
-
-
-*********************
-amazon.aws.ec2_elb_lb
-*********************
-
-**Creates, updates or destroys an Amazon ELB.**
-
-
-Version added: 1.0.0
-
-.. contents::
- :local:
- :depth: 1
-
-
-Synopsis
---------
-- Returns information about the load balancer.
-- Will be marked changed when called only if state is changed.
-
-
-
-Requirements
-------------
-The below requirements are needed on the host that executes this module.
-
-- python >= 2.6
-- boto
-
-
-Parameters
-----------
-
-
-access_logs (dictionary)
-    An associative array of access logs configuration settings (see examples).
-
-aws_access_key (string)
-    AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
-    If profile is set this parameter is ignored.
-    Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
-    aliases: ec2_access_key, access_key
-
-aws_ca_bundle (path)
-    The location of a CA Bundle to use when validating SSL certificates.
-    Only used for boto3 based modules.
-    Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
-
-aws_config (dictionary)
-    A dictionary to modify the botocore configuration.
-
-aws_secret_key (string)
-    AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
-    If profile is set this parameter is ignored.
-    Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
-    aliases: ec2_secret_key, secret_key
-
-connection_draining_timeout (integer)
-    Wait a specified timeout allowing connections to drain before terminating an instance.
-
-cross_az_load_balancing (boolean; choices: no, yes)
-    Distribute load across all configured Availability Zones.
-    Defaults to false.
-
-debug_botocore_endpoint_logs (boolean; choices: no (default), yes)
-    Use a botocore.endpoint logger to parse the unique (rather than total) "resource:action" API calls made during a task, outputting the set to the resource_actions key in the task results. Use the aws_resource_action callback to output the total list made during a playbook. The ANSIBLE_DEBUG_BOTOCORE_LOGS environment variable may also be used.
-
-ec2_url (string)
-    Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
-    aliases: aws_endpoint_url, endpoint_url
-
-health_check (dictionary)
-    An associative array of health check configuration settings (see examples).
-
-idle_timeout (integer)
-    ELB connections from clients and to servers are timed out after this amount of time.
-
-instance_ids (list / elements=string)
-    List of instance ids to attach to this ELB.
-
-listeners (list / elements=dictionary)
-    List of ports/protocols for this ELB to listen on (see examples).
-
-name (string / required)
-    The name of the ELB.
-
-profile (string)
-    Uses a boto profile. Only works with boto >= 2.24.0.
-    Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
-    aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
-    aliases: aws_profile
-
-purge_instance_ids (boolean; choices: no (default), yes)
-    Purge existing instance ids on ELB that are not found in instance_ids.
-
-purge_listeners (boolean; choices: no, yes (default))
-    Purge existing listeners on ELB that are not found in listeners.
-
-purge_subnets (boolean; choices: no (default), yes)
-    Purge existing subnet on ELB that are not found in subnets.
-
-purge_zones (boolean; choices: no (default), yes)
-    Purge existing availability zones on ELB that are not found in zones.
-
-scheme (string)
-    The scheme to use when creating the ELB. For a private VPC-visible ELB use internal.
-    If you choose to update your scheme with a different value the ELB will be destroyed and recreated. To update scheme you must use the option wait.
-
-security_group_ids (list / elements=string)
-    A list of security groups to apply to the ELB.
-
-security_group_names (list / elements=string)
-    A list of security group names to apply to the ELB.
-
-security_token (string)
-    AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
-    If profile is set this parameter is ignored.
-    Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
-    aliases: aws_security_token, access_token
-
-state (string / required; choices: absent, present)
-    Create or destroy the ELB.
-
-stickiness (dictionary)
-    An associative array of stickiness policy settings. Policy will be applied to all listeners (see examples).
-
-subnets (list / elements=string)
-    A list of VPC subnets to use when creating ELB. Zones should be empty if using this.
-
-tags (dictionary)
-    An associative array of tags. To delete all tags, supply an empty dict ({}).
-
-validate_certs (boolean; choices: no, yes (default))
-    When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
-
-wait (boolean; choices: no (default), yes)
-    When specified, Ansible will check the status of the load balancer to ensure it has been successfully removed from AWS.
-
-wait_timeout (integer; default: 60)
-    Used in conjunction with wait. Number of seconds to wait for the ELB to be terminated.
-    A maximum of 600 seconds (10 minutes) is allowed.
-
-zones (list / elements=string)
-    List of availability zones to enable on this ELB.
-
-Notes
------
-
-.. note::
- - If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
-
-
-
-Examples
---------
-
-.. code-block:: yaml
-
- # Note: None of these examples set aws_access_key, aws_secret_key, or region.
- # It is assumed that their matching environment variables are set.
-
- # Basic provisioning example (non-VPC)
-
- - amazon.aws.ec2_elb_lb:
- name: "test-please-delete"
- state: present
- zones:
- - us-east-1a
- - us-east-1d
- listeners:
- - protocol: http # options are http, https, ssl, tcp
- load_balancer_port: 80
- instance_port: 80
- proxy_protocol: True
- - protocol: https
- load_balancer_port: 443
- instance_protocol: http # optional, defaults to value of protocol setting
- instance_port: 80
- # ssl certificate required for https or ssl
- ssl_certificate_id: "arn:aws:iam::123456789012:server-certificate/company/servercerts/ProdServerCert"
-
- # Internal ELB example
-
- - amazon.aws.ec2_elb_lb:
- name: "test-vpc"
- scheme: internal
- state: present
- instance_ids:
- - i-abcd1234
- purge_instance_ids: true
- subnets:
- - subnet-abcd1234
- - subnet-1a2b3c4d
- listeners:
- - protocol: http # options are http, https, ssl, tcp
- load_balancer_port: 80
- instance_port: 80
-
- # Configure a health check and the access logs
- - amazon.aws.ec2_elb_lb:
- name: "test-please-delete"
- state: present
- zones:
- - us-east-1d
- listeners:
- - protocol: http
- load_balancer_port: 80
- instance_port: 80
- health_check:
- ping_protocol: http # options are http, https, ssl, tcp
- ping_port: 80
- ping_path: "/index.html" # not required for tcp or ssl
- response_timeout: 5 # seconds
- interval: 30 # seconds
- unhealthy_threshold: 2
- healthy_threshold: 10
- access_logs:
- interval: 5 # minutes (defaults to 60)
- s3_location: "my-bucket" # This value is required if access_logs is set
- s3_prefix: "logs"
-
- # Ensure ELB is gone
- - amazon.aws.ec2_elb_lb:
- name: "test-please-delete"
- state: absent
-
- # Ensure ELB is gone and wait for check (for default timeout)
- - amazon.aws.ec2_elb_lb:
- name: "test-please-delete"
- state: absent
- wait: yes
-
- # Ensure ELB is gone and wait for check with timeout value
- - amazon.aws.ec2_elb_lb:
- name: "test-please-delete"
- state: absent
- wait: yes
- wait_timeout: 600
-
- # Normally, this module will purge any listeners that exist on the ELB
- # but aren't specified in the listeners parameter. If purge_listeners is
- # false it leaves them alone
- - amazon.aws.ec2_elb_lb:
- name: "test-please-delete"
- state: present
- zones:
- - us-east-1a
- - us-east-1d
- listeners:
- - protocol: http
- load_balancer_port: 80
- instance_port: 80
- purge_listeners: no
-
- # Normally, this module will leave availability zones that are enabled
- # on the ELB alone. If purge_zones is true, then any extraneous zones
- # will be removed
- - amazon.aws.ec2_elb_lb:
- name: "test-please-delete"
- state: present
- zones:
- - us-east-1a
- - us-east-1d
- listeners:
- - protocol: http
- load_balancer_port: 80
- instance_port: 80
- purge_zones: yes
-
- # Creates an ELB and assigns a list of subnets to it.
- - amazon.aws.ec2_elb_lb:
- state: present
- name: 'New ELB'
- security_group_ids: 'sg-123456, sg-67890'
- region: us-west-2
- subnets: 'subnet-123456,subnet-67890'
- purge_subnets: yes
- listeners:
- - protocol: http
- load_balancer_port: 80
- instance_port: 80
-
- # Create an ELB with connection draining, increased idle timeout and cross availability
- # zone load balancing
- - amazon.aws.ec2_elb_lb:
- name: "New ELB"
- state: present
- connection_draining_timeout: 60
- idle_timeout: 300
- cross_az_load_balancing: "yes"
- region: us-east-1
- zones:
- - us-east-1a
- - us-east-1d
- listeners:
- - protocol: http
- load_balancer_port: 80
- instance_port: 80
-
- # Create an ELB with load balancer stickiness enabled
- - amazon.aws.ec2_elb_lb:
- name: "New ELB"
- state: present
- region: us-east-1
- zones:
- - us-east-1a
- - us-east-1d
- listeners:
- - protocol: http
- load_balancer_port: 80
- instance_port: 80
- stickiness:
- type: loadbalancer
- enabled: yes
- expiration: 300
-
- # Create an ELB with application stickiness enabled
- - amazon.aws.ec2_elb_lb:
- name: "New ELB"
- state: present
- region: us-east-1
- zones:
- - us-east-1a
- - us-east-1d
- listeners:
- - protocol: http
- load_balancer_port: 80
- instance_port: 80
- stickiness:
- type: application
- enabled: yes
- cookie: SESSIONID
-
- # Create an ELB and add tags
- - amazon.aws.ec2_elb_lb:
- name: "New ELB"
- state: present
- region: us-east-1
- zones:
- - us-east-1a
- - us-east-1d
- listeners:
- - protocol: http
- load_balancer_port: 80
- instance_port: 80
- tags:
- Name: "New ELB"
- stack: "production"
- client: "Bob"
-
- # Delete all tags from an ELB
- - amazon.aws.ec2_elb_lb:
- name: "New ELB"
- state: present
- region: us-east-1
- zones:
- - us-east-1a
- - us-east-1d
- listeners:
- - protocol: http
- load_balancer_port: 80
- instance_port: 80
- tags: {}
-
-
-
-
-Status
-------
-
-
-Authors
-~~~~~~~
-
-- Jim Dalton (@jsdalton)
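The entire ec2_elb_lb document above is removed because, per this release's changelog, the module was renamed to ``elb_classic_lb``. A minimal migration sketch, assuming the parameters shown in the removed examples carry over unchanged:

```yaml
# Before (amazon.aws 1.x / ansible 4.x):
- amazon.aws.ec2_elb_lb:
    name: "test-please-delete"
    state: present
    zones:
      - us-east-1a
    listeners:
      - protocol: http
        load_balancer_port: 80
        instance_port: 80

# After: only the module name changes.
- amazon.aws.elb_classic_lb:
    name: "test-please-delete"
    state: present
    zones:
      - us-east-1a
    listeners:
      - protocol: http
        load_balancer_port: 80
        instance_port: 80
```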
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_eni_info_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_eni_info_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_eni_info_module.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_eni_info_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -26,9 +26,9 @@
------------
The below requirements are needed on the host that executes this module.
-- boto
-- boto3
-- python >= 2.6
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
Parameters
@@ -54,7 +54,7 @@
-
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_access_key, access_key
@@ -73,7 +73,7 @@
The location of a CA Bundle to use when validating SSL certificates.
-
Only used for boto3 based modules.
+
Not used by boto 2 based modules.
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
@@ -106,7 +106,7 @@
-
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_secret_key, secret_key
@@ -143,7 +143,7 @@
-
Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
aliases: aws_endpoint_url, endpoint_url
@@ -192,7 +192,6 @@
-
Uses a boto profile. Only works with boto >= 2.24.0.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
aliases: aws_profile
@@ -226,7 +225,7 @@
-
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
If profile is set this parameter is ignored.
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: aws_security_token, access_token
@@ -248,7 +247,7 @@
-
When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
@@ -260,8 +259,9 @@
.. note::
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
+ - When no credentials are explicitly provided, the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
@@ -478,7 +478,8 @@
string
-
added in 1.3.0
+
added in 1.3.0
+
When a Name tag has been set
The Name tag of the ENI, often displayed in the AWS UIs as Name
@@ -684,7 +685,8 @@
dictionary
-
added in 1.3.0
+
added in 1.3.0
+
always
Dictionary of tags added to the ENI
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_eni_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_eni_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_eni_module.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_eni_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -25,8 +25,9 @@
------------
The below requirements are needed on the host that executes this module.
-- python >= 2.6
-- boto
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
Parameters
@@ -90,7 +91,7 @@
-
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_access_key, access_key
@@ -109,7 +110,7 @@
The location of a CA Bundle to use when validating SSL certificates.
-
Only used for boto3 based modules.
+
Not used by boto 2 based modules.
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
@@ -142,7 +143,7 @@
-
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_secret_key, secret_key
@@ -229,7 +230,7 @@
-
Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
aliases: aws_endpoint_url, endpoint_url
@@ -328,7 +329,6 @@
-
Uses a boto profile. Only works with boto >= 2.24.0.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
aliases: aws_profile
@@ -451,7 +451,7 @@
-
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
If profile is set this parameter is ignored.
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: aws_security_token, access_token
@@ -544,7 +544,7 @@
-
When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
@@ -557,8 +557,9 @@
.. note::
- This module identifies an ENI based on either the *eni_id*, a combination of *private_ip_address* and *subnet_id*, or a combination of *instance_id* and *device_id*. Any of these options will let you specify a particular ENI.
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
+ - When no credentials are explicitly provided, the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
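The ec2_eni notes in this hunk allow three ways of identifying an existing ENI. A hedged sketch of the first two (the resource IDs are placeholders; only option names taken from the note are used, and the *instance_id*/*device_id* combination mentioned there works the same way):

```yaml
# Identify the ENI directly by its ID...
- amazon.aws.ec2_eni:
    eni_id: eni-0123456789abcdef0
    state: present

# ...or by a private IP address within a known subnet.
- amazon.aws.ec2_eni:
    private_ip_address: 10.0.0.10
    subnet_id: subnet-0123456789abcdef0
    state: present
```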
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_group_info_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_group_info_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_group_info_module.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_group_info_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -26,9 +26,9 @@
------------
The below requirements are needed on the host that executes this module.
-- boto
-- boto3
-- python >= 2.6
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
Parameters
@@ -54,7 +54,7 @@
-
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_access_key, access_key
@@ -73,7 +73,7 @@
The location of a CA Bundle to use when validating SSL certificates.
-
Only used for boto3 based modules.
+
Not used by boto 2 based modules.
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
@@ -106,7 +106,7 @@
-
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_secret_key, secret_key
@@ -143,7 +143,7 @@
-
Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
aliases: aws_endpoint_url, endpoint_url
@@ -175,7 +175,6 @@
-
Uses a boto profile. Only works with boto >= 2.24.0.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
aliases: aws_profile
@@ -209,7 +208,7 @@
-
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
If profile is set this parameter is ignored.
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: aws_security_token, access_token
@@ -231,7 +230,7 @@
-
When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
@@ -244,8 +243,9 @@
.. note::
- By default, the module will return all security groups. To limit results use the appropriate filters.
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
+ - When no credentials are explicitly provided, the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
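The ec2_group_info notes above say all security groups are returned unless filters are applied. A minimal sketch (the filter key follows the EC2 DescribeSecurityGroups API naming; the group name is a placeholder):

```yaml
# Return every security group in the region (the default behaviour).
- amazon.aws.ec2_group_info:

# Narrow the result set with a filter.
- amazon.aws.ec2_group_info:
    filters:
      group-name: my-security-group
```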
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_group_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_group_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_group_module.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_group_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -25,9 +25,9 @@
------------
The below requirements are needed on the host that executes this module.
-- boto
-- boto3
-- python >= 2.6
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
Parameters
@@ -53,7 +53,7 @@
-
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_access_key, access_key
@@ -72,7 +72,7 @@
The location of a CA Bundle to use when validating SSL certificates.
-
Only used for boto3 based modules.
+
Not used by boto 2 based modules.
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
@@ -105,7 +105,7 @@
-
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_secret_key, secret_key
@@ -157,7 +157,7 @@
-
Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
aliases: aws_endpoint_url, endpoint_url
@@ -206,7 +206,6 @@
-
Uses a boto profile. Only works with boto >= 2.24.0.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
aliases: aws_profile
@@ -350,7 +349,9 @@
-
The start of the range of ports that traffic is coming from. A value of -1 indicates all ports.
+
The start of the range of ports that traffic is coming from.
+
 A value can be between 0 and 65535.
+
A value of -1 indicates all ports (only supported when proto=icmp).
@@ -466,7 +467,9 @@
-
The end of the range of ports that traffic is coming from. A value of -1 indicates all ports.
+
The end of the range of ports that traffic is coming from.
+
 A value can be between 0 and 65535.
+
A value of -1 indicates all ports (only supported when proto=icmp).
@@ -533,7 +536,9 @@
-
The start of the range of ports that traffic is going to. A value of -1 indicates all ports.
+
The start of the range of ports that traffic is going to.
+
 A value can be between 0 and 65535.
+
A value of -1 indicates all ports (only supported when proto=icmp).
@@ -649,7 +654,9 @@
-
The end of the range of ports that traffic is going to. A value of -1 indicates all ports.
+
The end of the range of ports that traffic is going to.
+
 A value can be between 0 and 65535.
+
A value of -1 indicates all ports (only supported when proto=icmp).
@@ -665,7 +672,7 @@
-
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
If profile is set this parameter is ignored.
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: aws_security_token, access_token
@@ -722,7 +729,7 @@
-
When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
@@ -751,8 +758,9 @@
- If a rule declares a group_name and that group doesn't exist, it will be automatically created. In that case, group_desc should be provided as well. The module will refuse to create a depended-on group without a description.
- Preview diff mode support is added in version 2.7.
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
@@ -818,7 +826,7 @@
# the containing group name may be specified here
group_name: example
- proto: all
- # in the 'proto' attribute, if you specify -1, all, or a protocol number other than tcp, udp, icmp, or 58 (ICMPv6),
+ # in the 'proto' attribute, if you specify -1 (only supported when I(proto=icmp)), all, or a protocol number other than tcp, udp, icmp, or 58 (ICMPv6),
# traffic on all ports is allowed, regardless of any ports you specify
from_port: 10050 # this value is ignored
to_port: 10050 # this value is ignored
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_instance_info_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_instance_info_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_instance_info_module.rst 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_instance_info_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,1886 @@
+.. _amazon.aws.ec2_instance_info_module:
+
+
+****************************
+amazon.aws.ec2_instance_info
+****************************
+
+**Gather information about ec2 instances in AWS**
+
+
+Version added: 1.0.0
+
+.. contents::
+ :local:
+ :depth: 1
+
+
+Synopsis
+--------
+- Gather information about ec2 instances in AWS
+- This module was called ``ec2_instance_facts`` before Ansible 2.9. The usage did not change.
+
+
+
+Requirements
+------------
+The below requirements are needed on the host that executes this module.
+
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
+
+
+Parameters
+----------
+
+.. raw:: html
+
+
+
+
Parameter
+
Choices/Defaults
+
Comments
+
+
+
+
+ aws_access_key
+
+
+ string
+
+
+
+
+
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: ec2_access_key, access_key
+
+
+
+
+
+ aws_ca_bundle
+
+
+ path
+
+
+
+
+
+
The location of a CA Bundle to use when validating SSL certificates.
+
Not used by boto 2 based modules.
+
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
+
+
+
+
+
+ aws_config
+
+
+ dictionary
+
+
+
+
+
+
A dictionary to modify the botocore configuration.
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: ec2_secret_key, secret_key
+
+
+
+
+
+ debug_botocore_endpoint_logs
+
+
+ boolean
+
+
+
+
Choices:
+
no ←
+
yes
+
+
+
+
 Use a botocore.endpoint logger to parse the unique (rather than total) "resource:action" API calls made during a task, outputting the set to the resource_actions key in the task results. Use the aws_resource_action callback to output the total list made during a playbook. The ANSIBLE_DEBUG_BOTOCORE_LOGS environment variable may also be used.
+
+
+
+
+
+ ec2_url
+
+
+ string
+
+
+
+
+
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
If you specify one or more instance IDs, only instances that have the specified IDs are returned.
+
+
+
+
+
+ minimum_uptime
+
+
+ integer
+
+
+
+
+
+
 Minimum running uptime in minutes of instances. For example, if uptime is 60, return all instances that have run for more than 60 minutes.
+
aliases: uptime
+
+
+
+
+
+ profile
+
+
+ string
+
+
+
+
+
+
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
+
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: aws_security_token, access_token
+
+
+
+
+
+ validate_certs
+
+
+ boolean
+
+
+
+
Choices:
+
no
+
yes ←
+
+
+
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
+
+
+
+
+
+
+Notes
+-----
+
+.. note::
+ - If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
+
+
+
+Examples
+--------
+
+.. code-block:: yaml
+
+ # Note: These examples do not set authentication details, see the AWS Guide for details.
+
+ - name: Gather information about all instances
+ amazon.aws.ec2_instance_info:
+
+ - name: Gather information about all instances in AZ ap-southeast-2a
+ amazon.aws.ec2_instance_info:
+ filters:
+ availability-zone: ap-southeast-2a
+
+ - name: Gather information about a particular instance using ID
+ amazon.aws.ec2_instance_info:
+ instance_ids:
+ - i-12345678
+
+ - name: Gather information about any instance with a tag key Name and value Example
+ amazon.aws.ec2_instance_info:
+ filters:
+ "tag:Name": Example
+
+ - name: Gather information about any instance in states "shutting-down", "stopping", "stopped"
+ amazon.aws.ec2_instance_info:
+ filters:
+ instance-state-name: [ "shutting-down", "stopping", "stopped" ]
+
+ - name: Gather information about any instance with Name beginning with RHEL and an uptime of at least 60 minutes
+ amazon.aws.ec2_instance_info:
+ region: "{{ ec2_region }}"
+ uptime: 60
+ filters:
+ "tag:Name": "RHEL-*"
+ instance-state-name: [ "running"]
+ register: ec2_node_info
+
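+ # A hedged sketch (not from the upstream docs): consuming the result
+ # registered as ec2_node_info in the previous task to list instance IDs.
+ - name: Show the IDs of the matching instances
+   ansible.builtin.debug:
+     msg: "{{ ec2_node_info.instances | map(attribute='instance_id') | list }}"
+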
+
+
+Return Values
+-------------
+Common return values are documented `here `_, the following are the fields unique to this module:
+
+.. raw:: html
+
+
+
+
Key
+
Returned
+
Description
+
+
+
+
+ instances
+
+
+ complex
+
+
+
always
+
+
a list of ec2 instances
+
+
+
+
+
+
+
+ ami_launch_index
+
+
+ integer
+
+
+
always
+
+
The AMI launch index, which can be used to find this instance in the launch group.
+
+
+
+
+
+
+
+ architecture
+
+
+ string
+
+
+
always
+
+
The architecture of the image
+
+
Sample:
+
x86_64
+
+
+
+
+
+
+ block_device_mappings
+
+
+ complex
+
+
+
always
+
+
Any block device mapping entries for the instance.
+
+
+
+
+
+
+
+
+ device_name
+
+
+ string
+
+
+
always
+
+
The device name exposed to the instance (for example, /dev/sdh or xvdh).
+
+
Sample:
+
/dev/sdh
+
+
+
+
+
+
+
+ ebs
+
+
+ complex
+
+
+
always
+
+
Parameters used to automatically set up EBS volumes when the instance is launched.
+
+
+
+
+
+
+
+
+
+ attach_time
+
+
+ string
+
+
+
always
+
+
The time stamp when the attachment initiated.
+
+
Sample:
+
2017-03-23T22:51:24+00:00
+
+
+
+
+
+
+
+
+ delete_on_termination
+
+
+ boolean
+
+
+
always
+
+
Indicates whether the volume is deleted on instance termination.
+
+
Sample:
+
True
+
+
+
+
+
+
+
+
+ status
+
+
+ string
+
+
+
always
+
+
The attachment state.
+
+
Sample:
+
attached
+
+
+
+
+
+
+
+
+ volume_id
+
+
+ string
+
+
+
always
+
+
The ID of the EBS volume
+
+
Sample:
+
vol-12345678
+
+
+
+
+
+
+
+
+ client_token
+
+
+ string
+
+
+
always
+
+
The idempotency token you provided when you launched the instance, if applicable.
+
+
Sample:
+
mytoken
+
+
+
+
+
+
+ cpu_options
+
+
+ complex
+
+
+
always
+
+
The CPU options set for the instance.
+
+
+
+
+
+
+
+
+ core_count
+
+
+ integer
+
+
+
always
+
+
The number of CPU cores for the instance.
+
+
Sample:
+
1
+
+
+
+
+
+
+
+ threads_per_core
+
+
+ integer
+
+
+
always
+
+
 The number of threads per CPU core. On supported instances, a value of 1 means Intel Hyper-Threading Technology is disabled.
+
+
Sample:
+
1
+
+
+
+
+
+
+
+ ebs_optimized
+
+
+ boolean
+
+
+
always
+
+
Indicates whether the instance is optimized for EBS I/O.
+
+
+
+
+
+
+
+ hypervisor
+
+
+ string
+
+
+
always
+
+
The hypervisor type of the instance.
+
+
Sample:
+
xen
+
+
+
+
+
+
+ iam_instance_profile
+
+
+ complex
+
+
+
always
+
+
The IAM instance profile associated with the instance, if applicable.
+
+
+
+
+
+
+
+
+ arn
+
+
+ string
+
+
+
always
+
+
The Amazon Resource Name (ARN) of the instance profile.
Select the number of threads per core to enable. Disable or Enable Intel HT.
+
+
+
+
+
+
+ debug_botocore_endpoint_logs
+
+
+ boolean
+
+
+
+
Choices:
+
no ←
+
yes
+
+
+
+
 Use a botocore.endpoint logger to parse the unique (rather than total) "resource:action" API calls made during a task, outputting the set to the resource_actions key in the task results. Use the aws_resource_action callback to output the total list made during a playbook. The ANSIBLE_DEBUG_BOTOCORE_LOGS environment variable may also be used.
+
+
+
+
+
+ detailed_monitoring
+
+
+ boolean
+
+
+
+
Choices:
+
no
+
yes
+
+
+
+
Whether to allow detailed cloudwatch metrics to be collected, enabling more detailed alerting.
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
By default, instances are filtered for counting by their "Name" tag, base AMI, state (running, by default), and subnet ID. Any queryable filter can be used. Good candidates are specific tags, SSH keys, or security groups.
+
+
+
+
+
+ image
+
+
+ dictionary
+
+
+
+
+
+
 An image to use for the instance. The amazon.aws.ec2_ami_info module may be used to retrieve images. One of image or image_id is required when instance is not already present.
+
+
+
+
+
+
+ id
+
+
+ string
+
+
+
+
+
+
The AMI ID.
+
+
+
+
+
+
+ kernel
+
+
+ -
+
+
+
+
+
+
a string AKI to override the AMI kernel.
+
+
+
+
+
+
+ ramdisk
+
+
+ string
+
+
+
+
+
+
Overrides the AMI's default ramdisk ID.
+
+
+
+
+
+
+ image_id
+
+
+ string
+
+
+
+
+
+
 AMI ID to use for the instance. One of image or image_id is required when instance is not already present.
+
This is an alias for image.id.
+
+
+
+
+
+ instance_ids
+
+
+ list
+ / elements=string
+
+
+
+
+
+
If you specify one or more instance IDs, only instances that have the specified IDs are returned.
+
+
+
+
+
+ instance_initiated_shutdown_behavior
+
+
+ string
+
+
+
+
Choices:
+
stop
+
terminate
+
+
+
+
Whether to stop or terminate an instance upon shutdown.
+
+
+
+
+
+ instance_role
+
+
+ string
+
+
+
+
+
+
 The ARN or name of an EC2-enabled instance role to be used. If a name is not provided in ARN format then the ListInstanceProfiles permission must also be granted (https://docs.aws.amazon.com/IAM/latest/APIReference/API_ListInstanceProfiles.html). If no full ARN is provided, the role with a matching name will be used from the active AWS account.
The two suboptions http_endpoint and http_tokens are supported.
+
+
+
+
+
+
+ http_endpoint
+
+
+ string
+
+
+
+
Choices:
+
enabled ←
+
disabled
+
+
+
+
Enables or disables the HTTP metadata endpoint on instances.
+
 If set to disabled, the metadata of the instance will not be accessible.
+
+
+
+
+
+
+ http_tokens
+
+
+ string
+
+
+
+
Choices:
+
optional ←
+
required
+
+
+
+
Set the state of token usage for instance metadata requests.
+
If the state is optional (v1 and v2), instance metadata can be retrieved with or without a signed token header on request.
+
If the state is required (v2), a signed token header must be sent with any instance metadata retrieval requests.
+
+
+
+
+
+
+ name
+
+
+ string
+
+
+
+
+
+
The Name tag for the instance.
+
+
+
+
+
+ network
+
+
+ dictionary
+
+
+
+
+
+
Either a dictionary containing the key 'interfaces' corresponding to a list of network interface IDs or containing specifications for a single network interface.
+
Use the amazon.aws.ec2_eni module to create ENIs with special settings.
+
+
+
+
+
+
+ assign_public_ip
+
+
+ boolean
+
+
+
+
Choices:
+
no
+
yes
+
+
+
+
when true assigns a public IP address to the interface
+
+
+
+
+
+
+ delete_on_termination
+
+
+ boolean
+
+
+
+
Choices:
+
no
+
yes
+
+
+
+
Delete the interface when the instance it is attached to is terminated.
+
+
+
+
+
+
+ description
+
+
+ string
+
+
+
+
+
+
a description for the network interface
+
+
+
+
+
+
+ device_index
+
+
+ integer
+
+
+
+
+
+
The index of the interface to modify
+
+
+
+
+
+
+ groups
+
+
+ list
+
+
+
+
+
+
a list of security group IDs to attach to the interface
+
+
+
+
+
+
+ interfaces
+
+
+ list
+
+
+
+
+
+
a list of ENI IDs (strings) or a list of objects containing the key id.
+
+
+
+
+
+
+ ipv6_addresses
+
+
+ list
+
+
+
+
+
+
a list of IPv6 addresses to assign to the network interface
+
+
+
+
+
+
+ private_ip_address
+
+
+ string
+
+
+
+
+
+
an IPv4 address to assign to the interface
+
+
+
+
+
+
+ private_ip_addresses
+
+
+ list
+
+
+
+
+
+
a list of IPv4 addresses to assign to the network interface
+
+
+
+
+
+
+ source_dest_check
+
+
+ boolean
+
+
+
+
Choices:
+
no
+
yes
+
+
+
+
controls whether source/destination checking is enabled on the interface
+
+
+
+
+
+
+ subnet_id
+
+
+ string
+
+
+
+
+
+
the subnet to connect the network interface to
+
+
+
+
+
+
+ placement_group
+
+
+ string
+
+
+
+
+
+
The placement group that needs to be assigned to the instance
+
+
+
+
+
+ profile
+
+
+ string
+
+
+
+
+
+
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
+
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
+
aliases: aws_profile
+
+
+
+
+
+ purge_tags
+
+
+ boolean
+
+
+
+
Choices:
+
no ←
+
yes
+
+
+
+
Delete any tags not specified in the task that are on the instance. This means you have to specify all the desired tags on each task affecting an instance.
A security group ID or name. Mutually exclusive with security_groups.
+
+
+
+
+
+ security_groups
+
+
+ list
+ / elements=string
+
+
+
+
+
+
A list of security group IDs or names (strings). Mutually exclusive with security_group.
+
+
+
+
+
+ security_token
+
+
+ string
+
+
+
+
+
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: aws_security_token, access_token
+
+
+
+
+
+ state
+
+
+ string
+
+
+
+
Choices:
+
present ←
+
terminated
+
running
+
started
+
stopped
+
restarted
+
rebooted
+
absent
+
+
+
+
Goal state for the instances.
+
state=present: ensures instances exist, but does not guarantee any state (e.g. running). Newly-launched instances will be run by EC2.
+
state=running: state=present + ensures the instances are running
+
state=started: state=running + waits for EC2 status checks to report OK if wait=true
+
state=stopped: ensures an existing instance is stopped.
+
state=rebooted: convenience alias for state=stopped immediately followed by state=running
+
state=restarted: convenience alias for state=stopped immediately followed by state=started
+
state=terminated: ensures an existing instance is terminated.
+
state=absent: alias for state=terminated
+
+
+
+
+
+ tags
+
+
+ dictionary
+
+
+
+
+
+
A hash/dictionary of tags to add to the new instance or to add/remove from an existing one.
+
+
+
+
+
+ tenancy
+
+
+ string
+
+
+
+
Choices:
+
dedicated
+
default
+
+
+
+
What type of tenancy to allow an instance to use. Default is shared tenancy. Dedicated tenancy will incur additional charges.
+
+
+
+
+
+ termination_protection
+
+
+ boolean
+
+
+
+
Choices:
+
no
+
yes
+
+
+
+
Whether to enable termination protection. This module will not terminate an instance with termination protection active, it must be turned off first.
+
+
+
+
+
+ tower_callback
+
+
+ dictionary
+
+
+
+
+
+
Preconfigured user-data to enable an instance to perform a Tower callback (Linux only).
+
Mutually exclusive with user_data.
+
For Windows instances, to enable remote access via Ansible set tower_callback.windows to true, and optionally set an admin password.
+
If using 'windows' and 'set_password', callback to Tower will not be performed but the instance will be ready to receive winrm connections from Ansible.
+
+
+
+
+
+
+ host_config_key
+
+
+ string
+
+
+
+
+
+
Host configuration secret key generated by the Tower job template.
+
+
+
+
+
+
+ job_template_id
+
+
+ string
+
+
+
+
+
+
Either the integer ID of the Tower Job Template, or the name (name supported only for Tower 3.2+).
+
+
+
+
+
+
+ tower_address
+
+
+ string
+
+
+
+
+
+
IP address or DNS name of Tower server. Must be accessible via this address from the VPC that this instance will be launched in.
+
+
+
+
+
+
+ user_data
+
+
+ string
+
+
+
+
+
+
Opaque blob of data which is made available to the ec2 instance
+
+
+
+
+
+ validate_certs
+
+
+ boolean
+
+
+
+
Choices:
+
no
+
yes ←
+
+
+
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
+
+
+
+
+
+ volumes
+
+
+ list
+ / elements=dictionary
+
+
+
+
+
+
 A list of block device mappings. By default this will always use the AMI root device, so the volumes option is primarily for adding more storage.
+
A mapping contains the (optional) keys device_name, virtual_name, ebs.volume_type, ebs.volume_size, ebs.kms_key_id, ebs.iops, and ebs.delete_on_termination.
+
 Setting the ebs.throughput value requires botocore>=1.19.27.
 The subnet ID in which to launch the instance (VPC). If none is provided, amazon.aws.ec2_instance will choose the default zone of the default VPC.
+
aliases: subnet_id
+
+
+
+
+
+ wait
+
+
+ boolean
+
+
+
+
Choices:
+
no
+
yes ←
+
+
+
+
Whether or not to wait for the desired state (use wait_timeout to customize this).
+
+
+
+
+
+ wait_timeout
+
+
+ integer
+
+
+
+ Default:
600
+
+
+
How long to wait (in seconds) for the instance to finish booting/terminating.
+
+
+
+
+
+
+Notes
+-----
+
+.. note::
+ - If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
+
+
+
+Examples
+--------
+
+.. code-block:: yaml
+
+ # Note: These examples do not set authentication details, see the AWS Guide for details.
+
+ - name: Terminate every running instance in a region. Use with EXTREME caution.
+ amazon.aws.ec2_instance:
+ state: absent
+ filters:
+ instance-state-name: running
+
+ - name: restart a particular instance by its ID
+ amazon.aws.ec2_instance:
+ state: restarted
+ instance_ids:
+ - i-12345678
+
+ - name: start an instance with a public IP address
+ amazon.aws.ec2_instance:
+ name: "public-compute-instance"
+ key_name: "prod-ssh-key"
+ vpc_subnet_id: subnet-5ca1ab1e
+ instance_type: c5.large
+ security_group: default
+ network:
+ assign_public_ip: true
+ image_id: ami-123456
+ tags:
+ Environment: Testing
+
+ - name: start an instance and Add EBS
+ amazon.aws.ec2_instance:
+ name: "public-withebs-instance"
+ vpc_subnet_id: subnet-5ca1ab1e
+ instance_type: t2.micro
+ key_name: "prod-ssh-key"
+ security_group: default
+ volumes:
+ - device_name: /dev/sda1
+ ebs:
+ volume_size: 16
+ delete_on_termination: true
+
+ - name: start an instance with a cpu_options
+ amazon.aws.ec2_instance:
+ name: "public-cpuoption-instance"
+ vpc_subnet_id: subnet-5ca1ab1e
+ tags:
+ Environment: Testing
+ instance_type: c4.large
+ volumes:
+ - device_name: /dev/sda1
+ ebs:
+ delete_on_termination: true
+ cpu_options:
+ core_count: 1
+ threads_per_core: 1
+
+ - name: start an instance and have it begin a Tower callback on boot
+ amazon.aws.ec2_instance:
+ name: "tower-callback-test"
+ key_name: "prod-ssh-key"
+ vpc_subnet_id: subnet-5ca1ab1e
+ security_group: default
+ tower_callback:
+ # IP or hostname of tower server
+ tower_address: 1.2.3.4
+ job_template_id: 876
+ host_config_key: '[secret config key goes here]'
+ network:
+ assign_public_ip: true
+ image_id: ami-123456
+ cpu_credit_specification: unlimited
+ tags:
+ SomeThing: "A value"
+
+ - name: start an instance with ENI (An existing ENI ID is required)
+ amazon.aws.ec2_instance:
+ name: "public-eni-instance"
+ key_name: "prod-ssh-key"
+ vpc_subnet_id: subnet-5ca1ab1e
+ network:
+ interfaces:
+ - id: "eni-12345"
+ tags:
+ Env: "eni_on"
+ volumes:
+ - device_name: /dev/sda1
+ ebs:
+ delete_on_termination: true
+ instance_type: t2.micro
+ image_id: ami-123456
+
+ - name: add second ENI interface
+ amazon.aws.ec2_instance:
+ name: "public-eni-instance"
+ network:
+ interfaces:
+ - id: "eni-12345"
+ - id: "eni-67890"
+ image_id: ami-123456
+ tags:
+ Env: "eni_on"
+ instance_type: t2.micro
+
+ - name: start an instance with metadata options
+ amazon.aws.ec2_instance:
+ name: "public-metadataoptions-instance"
+ vpc_subnet_id: subnet-5ca1ab1e
+ instance_type: t3.small
+ image_id: ami-123456
+ tags:
+ Environment: Testing
+ metadata_options:
+ http_endpoint: enabled
+ http_tokens: optional
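+
+ # A hedged sketch (not from the upstream docs): the same metadata_options
+ # pattern with IMDSv2 enforced by requiring session tokens.
+ - name: start an instance that requires IMDSv2 tokens
+   amazon.aws.ec2_instance:
+     name: "imdsv2-instance"
+     vpc_subnet_id: subnet-5ca1ab1e
+     instance_type: t3.small
+     image_id: ami-123456
+     metadata_options:
+       http_endpoint: enabled
+       http_tokens: required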
+
+
+
+Return Values
+-------------
+Common return values are documented `here `_, the following are the fields unique to this module:
+
+.. raw:: html
+
+
+
+
Key
+
Returned
+
Description
+
+
+
+
+ instances
+
+
+ complex
+
+
+
when wait == true
+
+
a list of ec2 instances
+
+
+
+
+
+
+
+ ami_launch_index
+
+
+ integer
+
+
+
always
+
+
The AMI launch index, which can be used to find this instance in the launch group.
+
+
+
+
+
+
+
+ architecture
+
+
+ string
+
+
+
always
+
+
The architecture of the image
+
+
Sample:
+
x86_64
+
+
+
+
+
+
+ block_device_mappings
+
+
+ complex
+
+
+
always
+
+
Any block device mapping entries for the instance.
+
+
+
+
+
+
+
+
+ device_name
+
+
+ string
+
+
+
always
+
+
The device name exposed to the instance (for example, /dev/sdh or xvdh).
+
+
Sample:
+
/dev/sdh
+
+
+
+
+
+
+
+ ebs
+
+
+ complex
+
+
+
always
+
+
Parameters used to automatically set up EBS volumes when the instance is launched.
+
+
+
+
+
+
+
+
+
+ attach_time
+
+
+ string
+
+
+
always
+
+
The time stamp when the attachment initiated.
+
+
Sample:
+
2017-03-23T22:51:24+00:00
+
+
+
+
+
+
+
+
+ delete_on_termination
+
+
+ boolean
+
+
+
always
+
+
Indicates whether the volume is deleted on instance termination.
+
+
Sample:
+
True
+
+
+
+
+
+
+
+
+ status
+
+
+ string
+
+
+
always
+
+
The attachment state.
+
+
Sample:
+
attached
+
+
+
+
+
+
+
+
+ volume_id
+
+
+ string
+
+
+
always
+
+
The ID of the EBS volume
+
+
Sample:
+
vol-12345678
+
+
+
+
+
+
+
+
+ client_token
+
+
+ string
+
+
+
always
+
+
The idempotency token you provided when you launched the instance, if applicable.
+
+
Sample:
+
mytoken
+
+
+
+
+
+
+ ebs_optimized
+
+
+ boolean
+
+
+
always
+
+
Indicates whether the instance is optimized for EBS I/O.
+
+
+
+
+
+
+
+ hypervisor
+
+
+ string
+
+
+
always
+
+
The hypervisor type of the instance.
+
+
Sample:
+
xen
+
+
+
+
+
+
+ iam_instance_profile
+
+
+ complex
+
+
+
always
+
+
The IAM instance profile associated with the instance, if applicable.
+
+
+
+
+
+
+
+
+ arn
+
+
+ string
+
+
+
always
+
+
The Amazon Resource Name (ARN) of the instance profile.
The name of the key pair, if this instance was launched with an associated key pair.
+
+
Sample:
+
my-key
+
+
+
+
+
+
+ launch_time
+
+
+ string
+
+
+
always
+
+
The time the instance was launched.
+
+
Sample:
+
2017-03-23T22:51:24+00:00
+
+
+
+
+
+
+ monitoring
+
+
+ complex
+
+
+
always
+
+
The monitoring for the instance.
+
+
+
+
+
+
+
+
+ state
+
+
+ string
+
+
+
always
+
+
Indicates whether detailed monitoring is enabled. Otherwise, basic monitoring is enabled.
+
+
Sample:
+
disabled
+
+
+
+
+
+
+
+ network.source_dest_check
+
+
+ boolean
+
+
+
always
+
+
Indicates whether source/destination checking is enabled.
+
+
Sample:
+
True
+
+
+
+
+
+
+ network_interfaces
+
+
+ complex
+
+
+
always
+
+
One or more network interfaces for the instance.
+
+
+
+
+
+
+
+
+ association
+
+
+ complex
+
+
+
always
+
+
The association information for an Elastic IPv4 associated with the network interface.
+
+
+
+
+
+
+
+
+
+ ip_owner_id
+
+
+ string
+
+
+
always
+
+
The ID of the owner of the Elastic IP address.
+
+
Sample:
+
amazon
+
+
+
+
+
+
+
+
+ public_dns_name
+
+
+ string
+
+
+
always
+
+
The public DNS name.
+
+
+
+
+
+
+
+
+
+ public_ip
+
+
+ string
+
+
+
always
+
+
The public IP address or Elastic IP address bound to the network interface.
+
+
Sample:
+
1.2.3.4
+
+
+
+
+
+
+
+
+ attachment
+
+
+ complex
+
+
+
always
+
+
The network interface attachment.
+
+
+
+
+
+
+
+
+
+ attach_time
+
+
+ string
+
+
+
always
+
+
The time stamp when the attachment initiated.
+
+
Sample:
+
2017-03-23T22:51:24+00:00
+
+
+
+
+
+
+
+
+ attachment_id
+
+
+ string
+
+
+
always
+
+
The ID of the network interface attachment.
+
+
Sample:
+
eni-attach-3aff3f
+
+
+
+
+
+
+
+
+ delete_on_termination
+
+
+ boolean
+
+
+
always
+
+
Indicates whether the network interface is deleted when the instance is terminated.
+
+
Sample:
+
True
+
+
+
+
+
+
+
+
+ device_index
+
+
+ integer
+
+
+
always
+
+
The index of the device on the instance for the network interface attachment.
+
+
+
+
+
+
+
+
+
+ status
+
+
+ string
+
+
+
always
+
+
The attachment state.
+
+
Sample:
+
attached
+
+
+
+
+
+
+
+
+ description
+
+
+ string
+
+
+
always
+
+
The description.
+
+
Sample:
+
My interface
+
+
+
+
+
+
+
+ groups
+
+
+ list
+ / elements=dictionary
+
+
+
always
+
+
One or more security groups.
+
+
+
+
+
+
+
+
+
+ group_id
+
+
+ string
+
+
+
always
+
+
The ID of the security group.
+
+
Sample:
+
sg-abcdef12
+
+
+
+
+
+
+
+
+ group_name
+
+
+ string
+
+
+
always
+
+
The name of the security group.
+
+
Sample:
+
mygroup
+
+
+
+
+
+
+
+
+ ipv6_addresses
+
+
+ list
+ / elements=dictionary
+
+
+
always
+
+
One or more IPv6 addresses associated with the network interface.
+
+
+
+
+
+
+
+
+
+ ipv6_address
+
+
+ string
+
+
+
always
+
+
The IPv6 address.
+
+
Sample:
+
2001:0db8:85a3:0000:0000:8a2e:0370:7334
+
+
+
+
+
+
+
+
+ mac_address
+
+
+ string
+
+
+
always
+
+
The MAC address.
+
+
Sample:
+
00:11:22:33:44:55
+
+
+
+
+
+
+
+ network_interface_id
+
+
+ string
+
+
+
always
+
+
The ID of the network interface.
+
+
Sample:
+
eni-01234567
+
+
+
+
+
+
+
+ owner_id
+
+
+ string
+
+
+
always
+
+
The AWS account ID of the owner of the network interface.
+
+
Sample:
+
01234567890
+
+
+
+
+
+
+
+ private_ip_address
+
+
+ string
+
+
+
always
+
+
The IPv4 address of the network interface within the subnet.
+
+
Sample:
+
10.0.0.1
+
+
+
+
+
+
+
+ private_ip_addresses
+
+
+ list
+ / elements=dictionary
+
+
+
always
+
+
The private IPv4 addresses associated with the network interface.
+
+
+
+
+
+
+
+
+
+ association
+
+
+ complex
+
+
+
always
+
+
The association information for an Elastic IP address (IPv4) associated with the network interface.
+
+
+
+
+
+
+
+
+
+
+ ip_owner_id
+
+
+ string
+
+
+
always
+
+
The ID of the owner of the Elastic IP address.
+
+
Sample:
+
amazon
+
+
+
+
+
+
+
+
+
+ public_dns_name
+
+
+ string
+
+
+
always
+
+
The public DNS name.
+
+
+
+
+
+
+
+
+
+
+ public_ip
+
+
+ string
+
+
+
always
+
+
The public IP address or Elastic IP address bound to the network interface.
+
+
Sample:
+
1.2.3.4
+
+
+
+
+
+
+
+
+
+ primary
+
+
+ boolean
+
+
+
always
+
+
Indicates whether this IPv4 address is the primary private IP address of the network interface.
+
+
Sample:
+
True
+
+
+
+
+
+
+
+
+ private_ip_address
+
+
+ string
+
+
+
always
+
+
The private IPv4 address of the network interface.
+
+
Sample:
+
10.0.0.1
+
+
+
+
+
+
+
+
+ source_dest_check
+
+
+ boolean
+
+
+
always
+
+
Indicates whether source/destination checking is enabled.
+
+
Sample:
+
True
+
+
+
+
+
+
+
+ status
+
+
+ string
+
+
+
always
+
+
The status of the network interface.
+
+
Sample:
+
in-use
+
+
+
+
+
+
+
+ subnet_id
+
+
+ string
+
+
+
always
+
+
The ID of the subnet for the network interface.
+
+
Sample:
+
subnet-0123456
+
+
+
+
+
+
+
+ vpc_id
+
+
+ string
+
+
+
always
+
+
The ID of the VPC for the network interface.
+
+
Sample:
+
vpc-0123456
+
+
+
+
+
+
+
+ placement
+
+
+ complex
+
+
+
always
+
+
The location where the instance launched, if applicable.
+
+
+
+
+
+
+
+
+ availability_zone
+
+
+ string
+
+
+
always
+
+
The Availability Zone of the instance.
+
+
Sample:
+
ap-southeast-2a
+
+
+
+
+
+
+
+ group_name
+
+
+ string
+
+
+
always
+
+
The name of the placement group the instance is in (for cluster compute instances).
+
+
+
+
+
+
+
+
+ tenancy
+
+
+ string
+
+
+
always
+
+
The tenancy of the instance (if the instance is running in a VPC).
+
+
Sample:
+
default
+
+
+
+
+
+
+
+ private_dns_name
+
+
+ string
+
+
+
always
+
+
The private DNS name.
+
+
Sample:
+
ip-10-0-0-1.ap-southeast-2.compute.internal
+
+
+
+
+
+
+ private_ip_address
+
+
+ string
+
+
+
always
+
+
The IPv4 address of the network interface within the subnet.
+
+
Sample:
+
10.0.0.1
+
+
+
+
+
+
+ product_codes
+
+
+ list
+ / elements=dictionary
+
+
+
always
+
+
One or more product codes.
+
+
+
+
+
+
+
+
+ product_code_id
+
+
+ string
+
+
+
always
+
+
The product code.
+
+
Sample:
+
aw0evgkw8ef3n2498gndfgasdfsd5cce
+
+
+
+
+
+
+
+ product_code_type
+
+
+ string
+
+
+
always
+
+
The type of product code.
+
+
Sample:
+
marketplace
+
+
+
+
+
+
+
+ public_dns_name
+
+
+ string
+
+
+
always
+
+
The public DNS name assigned to the instance.
+
+
+
+
+
+
+
+ public_ip_address
+
+
+ string
+
+
+
always
+
+
The public IPv4 address assigned to the instance.
+
+
Sample:
+
52.0.0.1
+
+
+
+
+
+
+ root_device_name
+
+
+ string
+
+
+
always
+
+
The device name of the root device.
+
+
Sample:
+
/dev/sda1
+
+
+
+
+
+
+ root_device_type
+
+
+ string
+
+
+
always
+
+
The type of root device used by the AMI.
+
+
Sample:
+
ebs
+
+
+
+
+
+
+ security_groups
+
+
+ list
+ / elements=dictionary
+
+
+
always
+
+
One or more security groups for the instance.
+
+
+
+
+
+
+
+
+ group_id
+
+
+ string
+
+
+
always
+
+
The ID of the security group.
+
+
Sample:
+
sg-0123456
+
+
+
+
+
+
+
+ group_name
+
+
+ string
+
+
+
always
+
+
The name of the security group.
+
+
Sample:
+
my-security-group
+
+
+
+
+
+
+
+ state
+
+
+ complex
+
+
+
always
+
+
The current state of the instance.
+
+
+
+
+
+
+
+
+ code
+
+
+ integer
+
+
+
always
+
+
The low byte represents the state.
+
+
Sample:
+
16
+
+
+
+
+
+
+
+ name
+
+
+ string
+
+
+
always
+
+
The name of the state.
+
+
Sample:
+
running
+
+
+
+
+
+
+
+ state_transition_reason
+
+
+ string
+
+
+
always
+
+
The reason for the most recent state transition.
+
+
+
+
+
+
+
+ subnet_id
+
+
+ string
+
+
+
always
+
+
The ID of the subnet in which the instance is running.
+
+
Sample:
+
subnet-00abcdef
+
+
+
+
+
+
+ tags
+
+
+ dictionary
+
+
+
always
+
+
Any tags assigned to the instance.
+
+
+
+
+
+
+
+ virtualization_type
+
+
+ string
+
+
+
always
+
+
The type of virtualization of the AMI.
+
+
Sample:
+
hvm
+
+
+
+
+
+
+ vpc_id
+
+
+ string
+
+
+
always
+
+
The ID of the VPC the instance is in.
+
+
Sample:
+
vpc-0011223344
+
+
+
+
+
+
+
+Status
+------
+
+
+Authors
+~~~~~~~
+
+- Ryan Scott Brown (@ryansb)
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_key_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_key_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_key_module.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_key_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -25,9 +25,9 @@
------------
The below requirements are needed on the host that executes this module.
-- boto
-- boto3
-- python >= 2.6
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
Parameters
@@ -53,7 +53,7 @@
-
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_access_key, access_key
@@ -72,7 +72,7 @@
The location of a CA Bundle to use when validating SSL certificates.
-
Only used for boto3 based modules.
+
Not used by boto 2 based modules.
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
@@ -105,7 +105,7 @@
-
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_secret_key, secret_key
@@ -142,7 +142,7 @@
-
Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
aliases: aws_endpoint_url, endpoint_url
@@ -208,7 +208,6 @@
-
Uses a boto profile. Only works with boto >= 2.24.0.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
aliases: aws_profile
@@ -217,6 +216,26 @@
+ purge_tags
+
+
+ boolean
+
+
added in 2.1.0
+
+
+
Choices:
+
no ←
+
yes
+
+
+
+
Delete any tags not specified in tags.
+
+
+
+
+
region
@@ -242,7 +261,7 @@
-
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
If profile is set this parameter is ignored.
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: aws_security_token, access_token
@@ -270,6 +289,22 @@
+ tags
+
+
+ dictionary
+
+
added in 2.1.0
+
+
+
+
+
A dictionary of tags to set on the key pair.
+
+
+
+
+
validate_certs
@@ -283,7 +318,7 @@
-
When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
@@ -329,8 +364,9 @@
.. note::
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
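
A task combining the ``tags`` and ``purge_tags`` options documented above (both added in 2.1.0) might be sketched as follows; the key pair name and tag values are illustrative placeholders, not from the module docs:

```yaml
# Ensure a key pair exists and carries exactly the tags listed below.
# purge_tags defaults to "no"; setting it removes any tags not in "tags".
- name: Create an EC2 key pair with managed tags
  amazon.aws.ec2_key:
    name: example-key                # hypothetical key pair name
    tags:
      Environment: testing
      Owner: ops-team
    purge_tags: true
```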
@@ -436,6 +472,24 @@
string
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_module.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -14,6 +14,13 @@
:local:
:depth: 1
+DEPRECATED
+----------
+:Removed in collection release after
+:Why: The ec2 module is based upon a deprecated version of the AWS SDK.
+:Alternative: Use :ref:`amazon.aws.ec2_instance <ansible_collections.amazon.aws.ec2_instance_module>`.
+
+
Synopsis
--------
@@ -27,8 +34,11 @@
------------
The below requirements are needed on the host that executes this module.
-- python >= 2.6
- boto
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
+- python >= 2.6
+- python >= 3.6
Parameters
@@ -73,7 +83,7 @@
-
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_access_key, access_key
@@ -92,7 +102,7 @@
The location of a CA Bundle to use when validating SSL certificates.
-
Only used for boto3 based modules.
+
Not used by boto 2 based modules.
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
@@ -125,7 +135,7 @@
-
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_secret_key, secret_key
@@ -212,7 +222,7 @@
-
Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
aliases: aws_endpoint_url, endpoint_url
@@ -491,7 +501,6 @@
-
Uses a boto profile. Only works with boto >= 2.24.0.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
aliases: aws_profile
@@ -540,7 +549,7 @@
-
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
If profile is set this parameter is ignored.
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: aws_security_token, access_token
@@ -651,7 +660,7 @@
-
Create, terminate, start, stop or restart instances. The state 'restarted' was added in Ansible 2.2.
+
Create, terminate, start, stop or restart instances.
When state=absent, instance_ids is required.
When state=running, state=stopped or state=restarted then either instance_ids or instance_tags is required.
@@ -728,7 +737,7 @@
-
When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
@@ -966,8 +975,9 @@
.. note::
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
@@ -1292,11 +1302,773 @@
+Return Values
+-------------
+Common return values are documented `here `_, the following are the fields unique to this module:
+
+.. raw:: html
+
+
+
+
Key
+
Returned
+
Description
+
+
+
+
+ changed
+
+
+ boolean
+
+
+
always
+
+
If the EC2 instance has changed.
+
+
Sample:
+
True
+
+
+
+
+
+ instances
+
+
+ list
+
+
+
always
+
+
The instances.
+
+
+
+
+
+
+
+ ami_launch_index
+
+
+ integer
+
+
+
always
+
+
The AMI launch index, which can be used to find this instance in the launch group.
+
+
+
+
+
+
+
+ architecture
+
+
+ string
+
+
+
always
+
+
The architecture of the image.
+
+
Sample:
+
x86_64
+
+
+
+
+
+
+ block_device_mapping
+
+
+ dictionary
+
+
+
always
+
+
Any block device mapping entries for the instance.
The tenancy of the instance (if the instance is running in a VPC).
+
+
Sample:
+
default
+
+
+
+
+
+
+ virtualization_type
+
+
+ string
+
+
+
always
+
+
The virtualization type of the instance.
+
+
Sample:
+
hvm
+
+
+
+
+
+
+ vpc_id
+
+
+ string
+
+
+
always
+
+
The ID of the VPC in which the instance is running.
+
+
Sample:
+
vpc-0b6879b6ca2e9be2b
+
+
+
+
+
+
Status
------
+- This module will be removed in version 4.0.0. *[deprecated]*
+- For more information see `DEPRECATED`_.
+
+
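
Since the documented alternative to the deprecated ``ec2`` module is ``amazon.aws.ec2_instance``, a minimal migration might look like the following sketch; the AMI ID, key name, and instance name are placeholders:

```yaml
- name: Launch an instance with the replacement module
  amazon.aws.ec2_instance:
    name: example-instance                  # hypothetical instance name
    image_id: ami-0123456789abcdef0         # placeholder AMI ID
    instance_type: t3.micro
    key_name: example-key                   # placeholder key pair
    state: running
```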
Authors
~~~~~~~
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_snapshot_info_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_snapshot_info_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_snapshot_info_module.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_snapshot_info_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -26,9 +26,9 @@
------------
The below requirements are needed on the host that executes this module.
-- boto
-- boto3
-- python >= 2.6
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
Parameters
@@ -54,7 +54,7 @@
-
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_access_key, access_key
@@ -73,7 +73,7 @@
The location of a CA Bundle to use when validating SSL certificates.
-
Only used for boto3 based modules.
+
Not used by boto 2 based modules.
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
@@ -106,7 +106,7 @@
-
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_secret_key, secret_key
@@ -143,7 +143,7 @@
-
Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
aliases: aws_endpoint_url, endpoint_url
@@ -229,7 +229,6 @@
-
Uses a boto profile. Only works with boto >= 2.24.0.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
aliases: aws_profile
@@ -280,7 +279,7 @@
-
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
If profile is set this parameter is ignored.
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: aws_security_token, access_token
@@ -319,7 +318,7 @@
-
When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
@@ -332,8 +331,9 @@
.. note::
- By default, the module will return all snapshots, including public ones. To limit results to snapshots owned by the account use the filter 'owner-id'.
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
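
The note above about restricting results with the ``owner-id`` filter can be expressed as a task sketch; the account ID is a placeholder:

```yaml
- name: List only snapshots owned by this account
  amazon.aws.ec2_snapshot_info:
    filters:
      owner-id: "123456789012"   # placeholder AWS account ID
  register: snapshot_info
```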
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_snapshot_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_snapshot_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_snapshot_module.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_snapshot_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -25,8 +25,9 @@
------------
The below requirements are needed on the host that executes this module.
-- python >= 2.6
-- boto
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
Parameters
@@ -52,7 +53,7 @@
-
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_access_key, access_key
@@ -71,7 +72,7 @@
The location of a CA Bundle to use when validating SSL certificates.
-
Only used for boto3 based modules.
+
Not used by boto 2 based modules.
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
@@ -104,7 +105,7 @@
-
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_secret_key, secret_key
@@ -171,7 +172,7 @@
-
Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
aliases: aws_endpoint_url, endpoint_url
@@ -218,7 +219,6 @@
-
Uses a boto profile. Only works with boto >= 2.24.0.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
aliases: aws_profile
@@ -252,7 +252,7 @@
-
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
If profile is set this parameter is ignored.
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: aws_security_token, access_token
@@ -286,6 +286,7 @@
A dictionary of tags to add to the snapshot.
+
If the volume has a Name tag this will be automatically added to the snapshot.
@@ -323,7 +324,7 @@
-
When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
@@ -370,11 +371,10 @@
- Default:
0
+ Default:
600
How long before wait gives up, in seconds.
-
Specify 0 to wait forever.
@@ -386,8 +386,9 @@
.. note::
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
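
The diff above changes the ``wait_timeout`` default from 0 to 600 seconds; a task that pins the previous wait-forever behaviour explicitly might be sketched as follows (the volume ID is a placeholder):

```yaml
- name: Snapshot a volume and wait indefinitely for completion
  amazon.aws.ec2_snapshot:
    volume_id: vol-0123456789abcdef0   # placeholder volume ID
    description: nightly backup
    wait: yes
    wait_timeout: 0   # per the docs, 0 means wait forever
```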
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_spot_instance_info_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_spot_instance_info_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_spot_instance_info_module.rst 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_spot_instance_info_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,337 @@
+.. _amazon.aws.ec2_spot_instance_info_module:
+
+
+*********************************
+amazon.aws.ec2_spot_instance_info
+*********************************
+
+**Gather information about ec2 spot instance requests**
+
+
+Version added: 2.0.0
+
+.. contents::
+ :local:
+ :depth: 1
+
+
+Synopsis
+--------
+- Describes the specified Spot Instance requests.
+
+
+
+Requirements
+------------
+The below requirements are needed on the host that executes this module.
+
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
+
+
+Parameters
+----------
+
+.. raw:: html
+
+
+
+
Parameter
+
Choices/Defaults
+
Comments
+
+
+
+
+ aws_access_key
+
+
+ string
+
+
+
+
+
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: ec2_access_key, access_key
+
+
+
+
+
+ aws_ca_bundle
+
+
+ path
+
+
+
+
+
+
The location of a CA Bundle to use when validating SSL certificates.
+
Not used by boto 2 based modules.
+
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
+
+
+
+
+
+ aws_config
+
+
+ dictionary
+
+
+
+
+
+
A dictionary to modify the botocore configuration.
+
+
+
+
+
+ aws_secret_key
+
+
+ string
+
+
+
+
+
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: ec2_secret_key, secret_key
+
+
+
+
+
+ debug_botocore_endpoint_logs
+
+
+ boolean
+
+
+
+
Choices:
+
no ←
+
yes
+
+
+
+
Use a botocore.endpoint logger to parse the unique (rather than total) "resource:action" API calls made during a task, outputting the set to the resource_actions key in the task results. Use the aws_resource_action callback to output the total list made during a playbook. The ANSIBLE_DEBUG_BOTOCORE_LOGS environment variable may also be used.
+
+
+
+
+
+ ec2_url
+
+
+ string
+
+
+
+
+
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
aliases: aws_endpoint_url, endpoint_url
+
+
+
+
+
+ filters
+
+
+ dictionary
+
+
+
+ Default:
{}
+
+
+
A dict of filters to apply. Each dict item consists of a filter key and a filter value.
+
+
+
+
+
+ profile
+
+
+ string
+
+
+
+
+
+
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
+
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
+
aliases: aws_profile
+
+
+
+
+
+ security_token
+
+
+ string
+
+
+
+
+
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: aws_security_token, access_token
+
+
+
+
+
+ spot_instance_request_ids
+
+
+ list
+ / elements=string
+
+
+
+
+
+
One or more Spot Instance request IDs.
+
+
+
+
+
+ validate_certs
+
+
+ boolean
+
+
+
+
Choices:
+
no
+
yes ←
+
+
+
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
+
+
+
+
+
+
+Notes
+-----
+
+.. note::
+ - If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
+
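+
+As a short, hedged illustration of the precedence notes above (an editorial addition, not part of the generated module documentation), credentials can also be supplied through the task environment instead of module parameters; the ``my_access_key``/``my_secret_key`` variables are assumptions:
+
+.. code-block:: yaml
+
+    - name: describe Spot Instance requests with environment-supplied credentials
+      amazon.aws.ec2_spot_instance_info:
+        spot_instance_request_ids:
+          - sir-12345678
+      environment:
+        AWS_ACCESS_KEY_ID: "{{ my_access_key }}"      # assumed, e.g. vault-backed variable
+        AWS_SECRET_ACCESS_KEY: "{{ my_secret_key }}"  # assumed, e.g. vault-backed variable
+        AWS_REGION: us-east-1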
+
+
+Examples
+--------
+
+.. code-block:: yaml
+
+ # Note: These examples do not set authentication details, see the AWS Guide for details.
+
+ - name: describe the Spot Instance requests based on request IDs
+ amazon.aws.ec2_spot_instance_info:
+ spot_instance_request_ids:
+ - sir-12345678
+
+ - name: describe the Spot Instance requests and filter results based on instance type
+ amazon.aws.ec2_spot_instance_info:
+ spot_instance_request_ids:
+ - sir-12345678
+ - sir-13579246
+ - sir-87654321
+ filters:
+ launch.instance-type: t3.medium
+
+ - name: describe the Spot requests filtered using multiple filters
+ amazon.aws.ec2_spot_instance_info:
+ filters:
+ state: active
+ launch.block-device-mapping.device-name: /dev/sdb
+
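+
+   # A hedged sketch (an editorial addition, not from the generated docs):
+   # register the module output and inspect it through the documented
+   # spot_request return key.
+   - name: describe active Spot Instance requests and inspect the result
+     amazon.aws.ec2_spot_instance_info:
+       filters:
+         state: active
+     register: spot_info
+
+   - name: show the gathered spot requests
+     ansible.builtin.debug:
+       var: spot_info.spot_request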
+
+
+Return Values
+-------------
+Common return values are documented `here <https://docs.ansible.com/ansible/latest/reference_appendices/common_return_values.html#common-return-values>`_; the following are the fields unique to this module:
+
+.. raw:: html
+
+
+
+
Key
+
Returned
+
Description
+
+
+
+
+ spot_request
+
+
+ dictionary
+
+
+
when success
+
+
The gathered information about specified spot instance requests.
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_spot_instance_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_spot_instance_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_spot_instance_module.rst 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_spot_instance_module.rst 2021-11-12 18:13:53.000000000 +0000
+
+Parameters
+----------
+
+
+ aws_secret_key
+
+
+ string
+
+
+
+
+
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: ec2_secret_key, secret_key
+
+
+
+
+
+ client_token
+
+
+ string
+
+
+
+
+
+
The idempotency token you provided when you launched the instance, if applicable.
+
+
+
+
+
+ count
+
+
+ integer
+
+
+
+ Default:
1
+
+
+
Number of instances to launch.
+
+
+
+
+
+ debug_botocore_endpoint_logs
+
+
+ boolean
+
+
+
+
Choices:
+
no ←
+
yes
+
+
+
+
Use a botocore.endpoint logger to parse the unique (rather than total) "resource:action" API calls made during a task, outputting the set to the resource_actions key in the task results. Use the aws_resource_action callback to output the total list made during a playbook. The ANSIBLE_DEBUG_BOTOCORE_LOGS environment variable may also be used.
+
+
+
+
+
+ ec2_url
+
+
+ string
+
+
+
+
+
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
+
+
+
+
+ key_name
+
+
+ string
+
+
+
+
+
+
The SSH key must already exist in AWS in order to use this argument.
+
Keys can be created / deleted using the amazon.aws.ec2_key module.
+
+
+
+
+
+
+ monitoring
+
+
+ dictionary
+
+
+
+
+
+
Indicates whether basic or detailed monitoring is enabled for the instance.
+
+
+
+
+
+
+
+ enabled
+
+
+ boolean
+
+
+
+
Choices:
+
no ←
+
yes
+
+
+
+
Indicates whether detailed monitoring is enabled. Otherwise, basic monitoring is enabled.
+
+
+
+
+
+
+
+ network_interfaces
+
+
+ list
+ / elements=dictionary
+
+
+
+
+
+
One or more network interfaces. If you specify a network interface, you must specify subnet IDs and security group IDs using the network interface.
+
+
+
+
+
+
+
+ associate_carrier_ip_address
+
+
+ boolean
+
+
+
+
Choices:
+
no
+
yes
+
+
+
+
Indicates whether to assign a carrier IP address to the network interface.
+
+
+
+
+
+
+
+ associate_public_ip_address
+
+
+ boolean
+
+
+
+
Choices:
+
no
+
yes
+
+
+
+
Indicates whether to assign a public IPv4 address to an instance you launch in a VPC.
+
+
+
+
+
+
+
+ delete_on_termination
+
+
+ boolean
+
+
+
+
Choices:
+
no
+
yes
+
+
+
+
If set to true, the interface is deleted when the instance is terminated. You can specify true only if creating a new network interface when launching an instance.
+
+
+
+
+
+
+
+ description
+
+
+ string
+
+
+
+
+
+
The description of the network interface. Applies only if creating a network interface when launching an instance.
+
+
+
+
+
+
+
+ device_index
+
+
+ integer
+
+
+
+
+
+
The position of the network interface in the attachment order. A primary network interface has a device index of 0.
+
If you specify a network interface when launching an instance, you must specify the device index.
+
+
+
+
+
+
+
+ groups
+
+
+ list
+ / elements=string
+
+
+
+
+
+
The IDs of the security groups for the network interface. Applies only if creating a network interface when launching an instance.
+
+
+
+
+
+
+
+ interface_type
+
+
+ string
+
+
+
+
Choices:
+
interface
+
efa
+
+
+
+
The type of network interface.
+
+
+
+
+
+
+
+ ipv4_prefix_count
+
+
+ integer
+
+
+
+
+
+
The number of IPv4 delegated prefixes to be automatically assigned to the network interface.
+
+
+
+
+
+
+
+ ipv4_prefixes
+
+
+ list
+ / elements=dictionary
+
+
+
+
+
+
One or more IPv4 delegated prefixes to be assigned to the network interface.
+
+
+
+
+
+
+
+ ipv6_address_count
+
+
+ integer
+
+
+
+
+
+
The number of IPv6 addresses to assign to the network interface.
+
+
+
+
+
+
+
+ ipv6_addresses
+
+
+ list
+ / elements=dictionary
+
+
+
+
+
+
One or more IPv6 addresses to assign to the network interface.
+
+
+
+
+
+
+
+
+ ipv6address
+
+
+ string
+
+
+
+
+
+
The IPv6 address.
+
+
+
+
+
+
+
+
+ ipv6_prefix_count
+
+
+ integer
+
+
+
+
+
+
The number of IPv6 delegated prefixes to be automatically assigned to the network interface.
+
+
+
+
+
+
+
+ ipv6_prefixes
+
+
+ list
+ / elements=dictionary
+
+
+
+
+
+
One or more IPv6 delegated prefixes to be assigned to the network interface.
+
+
+
+
+
+
+
+ network_card_index
+
+
+ integer
+
+
+
+
+
+
The index of the network card.
+
+
+
+
+
+
+
+ network_interface_id
+
+
+ string
+
+
+
+
+
+
The ID of the network interface.
+
+
+
+
+
+
+
+ private_ip_address
+
+
+ string
+
+
+
+
+
+
The private IPv4 address of the network interface.
+
+
+
+
+
+
+
+ private_ip_addresses
+
+
+ list
+ / elements=dictionary
+
+
+
+
+
+
One or more private IPv4 addresses to assign to the network interface.
+
+
+
+
+
+
+
+ secondary_private_ip_address_count
+
+
+ integer
+
+
+
+
+
+
The number of secondary private IPv4 addresses.
+
+
+
+
+
+
+
+ subnet_id
+
+
+ string
+
+
+
+
+
+
The ID of the subnet associated with the network interface.
+
+
+
+
+
+
+
+ placement
+
+
+ dictionary
+
+
+
+
+
+
The placement information for the instance.
+
+
+
+
+
+
+
+ availability_zone
+
+
+ string
+
+
+
+
+
+
The Availability Zone.
+
+
+
+
+
+
+
+ group_name
+
+
+ string
+
+
+
+
+
+
The name of the placement group.
+
+
+
+
+
+
+
+ tenancy
+
+
+ string
+
+
+
+
Choices:
+
default ←
+
dedicated
+
host
+
+
+
+
The tenancy of the host.
+
+
+
+
+
+
+
+ ramdisk_id
+
+
+ string
+
+
+
+
+
+
The ID of the RAM disk.
+
+
+
+
+
+
+ security_group_ids
+
+
+ list
+ / elements=string
+
+
+
+
+
+
Security group id (or list of ids) to use with the instance.
+
+
+
+
+
+
+ security_groups
+
+
+ list
+ / elements=string
+
+
+
+
+
+
Security group name (or list of group names) to use with the instance.
+
Only supported with EC2 Classic. To launch in a VPC, use group_id
+
+
+
+
+
+
+ subnet_id
+
+
+ string
+
+
+
+
+
+
The ID of the subnet in which to launch the instance.
+
+
+
+
+
+
+ user_data
+
+
+ string
+
+
+
+
+
+
The base64-encoded user data for the instance. User data is limited to 16 KB.
+
+
+
+
+
+
+ profile
+
+
+ string
+
+
+
+
+
+
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
+
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
+
+
+
+
+
+ security_token
+
+
+ string
+
+
+
+
+
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: aws_security_token, access_token
+
+
+
+
+
+ spot_instance_request_ids
+
+
+ list
+ / elements=string
+
+
+
+ Default:
[]
+
+
+
List of strings with IDs of spot requests to be cancelled.
+
+
+
+
+
+ spot_price
+
+
+ string
+
+
+
+
+
+
Maximum spot price to bid. If not set, a regular on-demand instance is requested.
+
A spot request is made with this maximum bid. When it is filled, the instance is started.
+
+
+
+
+
+ spot_type
+
+
+ string
+
+
+
+
Choices:
+
one-time ←
+
persistent
+
+
+
+
The type of spot request.
+
After being interrupted, a persistent spot instance will be started once there is capacity to fill the request again.
+
+
+
+
+
+ state
+
+
+ string
+
+
+
+
Choices:
+
absent
+
present ←
+
+
+
+
Whether the spot request should be created or removed.
+
When state=present, launch_specification is required.
+
When state=absent, spot_instance_request_ids is required.
+
+
+
+
+
+ tags
+
+
+ dictionary
+
+
+
+
+
+
A dictionary of key-value pairs for tagging the Spot Instance request on creation.
+
+
+
+
+
+ validate_certs
+
+
+ boolean
+
+
+
+
Choices:
+
no
+
yes ←
+
+
+
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
+
+
+
+
+
+ zone_group
+
+
+ string
+
+
+
+
+
+
Name for logical grouping of spot requests.
+
All spot instances in the request are launched in the same availability zone.
+
+
+
+
+
+
+Notes
+-----
+
+.. note::
+ - If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
+
+
+
+Examples
+--------
+
+.. code-block:: yaml
+
+ # Note: These examples do not set authentication details, see the AWS Guide for details.
+
+ - name: Simple Spot Request Creation
+ amazon.aws.ec2_spot_instance:
+ launch_specification:
+ image_id: ami-123456789
+ key_name: my-keypair
+ instance_type: t2.medium
+
+ - name: Spot Request Creation with more options
+ amazon.aws.ec2_spot_instance:
+ launch_specification:
+ image_id: ami-123456789
+ key_name: my-keypair
+ instance_type: t2.medium
+ subnet_id: subnet-12345678
+ block_device_mappings:
+ - device_name: /dev/sdb
+ ebs:
+ delete_on_termination: True
+ volume_type: gp3
+ volume_size: 5
+ - device_name: /dev/sdc
+ ebs:
+ delete_on_termination: True
+ volume_type: io2
+ volume_size: 30
+ network_interfaces:
+ - associate_public_ip_address: False
+ delete_on_termination: True
+ device_index: 0
+ placement:
+ availability_zone: us-west-2a
+ monitoring:
+ enabled: False
+ spot_price: 0.002
+ tags:
+ Environment: Testing
+
+ - name: Spot Request Termination
+ amazon.aws.ec2_spot_instance:
+ spot_instance_request_ids: ['sir-12345678', 'sir-abcdefgh']
+ state: absent
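+
+   # A hedged sketch (an editorial addition, not from the generated docs):
+   # register the creation result so the request can be inspected or cancelled
+   # later; the exact shape of the returned data is an assumption, so inspect
+   # it with debug first.
+   - name: Spot Request Creation, registering the result
+     amazon.aws.ec2_spot_instance:
+       launch_specification:
+         image_id: ami-123456789
+         key_name: my-keypair
+         instance_type: t2.medium
+     register: spot_result
+
+   - name: show what the module returned
+     ansible.builtin.debug:
+       var: spot_result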
+
+
+
+Return Values
+-------------
+Common return values are documented `here <https://docs.ansible.com/ansible/latest/reference_appendices/common_return_values.html#common-return-values>`_; the following are the fields unique to this module:
+
+.. raw:: html
+
+
+
+
Key
+
Returned
+
Description
+
+
+
+
+ cancelled_spot_request
+
+
+ string
+
+
+
always
+
+
The spot instance request details that have been cancelled.
+
+
Sample:
+
Spot requests with IDs: sir-1234abcd have been cancelled
+
+
+Status
+------
+
+
+Authors
+~~~~~~~
+
+- Sri Rachana Achyuthuni (@srirachanaachyuthuni)
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_tag_info_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_tag_info_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_tag_info_module.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_tag_info_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -27,10 +27,9 @@
------------
The below requirements are needed on the host that executes this module.
-- boto
-- boto3
-- botocore
-- python >= 2.6
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
Parameters
@@ -56,7 +55,7 @@
-
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_access_key, access_key
@@ -75,7 +74,7 @@
The location of a CA Bundle to use when validating SSL certificates.
-
Only used for boto3 based modules.
+
Not used by boto 2 based modules.
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
@@ -108,7 +107,7 @@
-
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_secret_key, secret_key
@@ -145,7 +144,7 @@
-
Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
aliases: aws_endpoint_url, endpoint_url
@@ -161,7 +160,6 @@
-
Uses a boto profile. Only works with boto >= 2.24.0.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
aliases: aws_profile
@@ -211,7 +209,7 @@
-
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
If profile is set this parameter is ignored.
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: aws_security_token, access_token
@@ -233,7 +231,7 @@
-
When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
@@ -245,8 +243,9 @@
.. note::
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_tag_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_tag_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_tag_module.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_tag_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -27,10 +27,9 @@
------------
The below requirements are needed on the host that executes this module.
-- boto
-- boto3
-- botocore
-- python >= 2.6
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
Parameters
@@ -56,7 +55,7 @@
-
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_access_key, access_key
@@ -75,7 +74,7 @@
The location of a CA Bundle to use when validating SSL certificates.
-
Only used for boto3 based modules.
+
Not used by boto 2 based modules.
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
@@ -108,7 +107,7 @@
-
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_secret_key, secret_key
@@ -145,7 +144,7 @@
-
Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
aliases: aws_endpoint_url, endpoint_url
@@ -161,7 +160,6 @@
-
Uses a boto profile. Only works with boto >= 2.24.0.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
aliases: aws_profile
@@ -231,7 +229,7 @@
-
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
If profile is set this parameter is ignored.
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: aws_security_token, access_token
@@ -291,7 +289,7 @@
-
When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
@@ -303,8 +301,9 @@
.. note::
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vol_info_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vol_info_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vol_info_module.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vol_info_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -26,9 +26,9 @@
------------
The below requirements are needed on the host that executes this module.
-- boto
-- boto3
-- python >= 2.6
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
Parameters
@@ -54,7 +54,7 @@
-
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_access_key, access_key
@@ -73,7 +73,7 @@
The location of a CA Bundle to use when validating SSL certificates.
-
Only used for boto3 based modules.
+
Not used by boto 2 based modules.
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
@@ -106,7 +106,7 @@
-
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_secret_key, secret_key
@@ -143,7 +143,7 @@
-
Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
aliases: aws_endpoint_url, endpoint_url
@@ -175,7 +175,6 @@
-
Uses a boto profile. Only works with boto >= 2.24.0.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
aliases: aws_profile
@@ -209,7 +208,7 @@
-
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
If profile is set this parameter is ignored.
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: aws_security_token, access_token
@@ -231,7 +230,7 @@
-
When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
@@ -243,8 +242,9 @@
.. note::
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
@@ -273,6 +273,15 @@
filters:
attachment.status: attached
+ # Gather information about all volumes related to an EC2 Instance
+ # register information to `volumes` variable
+ # Replaces functionality of `amazon.aws.ec2_vol` - `state: list`
+ - name: get volume(s) info from EC2 Instance
+ amazon.aws.ec2_vol_info:
+ filters:
+ attachment.instance-id: "i-000111222333"
+ register: volumes
+
Return Values
@@ -310,15 +319,17 @@
attachment_set
- dictionary
+ list
+ / elements=dictionary
Information about the volume attachments.
+
This was changed in version 2.0.0 from a dictionary to a list of dictionaries.
+
+
+ throughput
+
+
+ integer
+
+
+
The throughput that the volume supports, in MiB/s.
+
+
Sample:
+
131
+
+
+
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vol_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vol_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vol_module.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vol_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -26,9 +26,9 @@
------------
The below requirements are needed on the host that executes this module.
-- boto
-- boto3>=1.16.33
-- python >= 2.6
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
Parameters
@@ -54,7 +54,7 @@
-
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_access_key, access_key
@@ -73,7 +73,7 @@
The location of a CA Bundle to use when validating SSL certificates.
-
Only used for boto3 based modules.
+
Not used by boto 2 based modules.
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
@@ -106,7 +106,7 @@
-
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_secret_key, secret_key
@@ -177,7 +177,7 @@
-
Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
aliases: aws_endpoint_url, endpoint_url
@@ -277,7 +277,29 @@
-
The volume won't be modify unless this key is true.
+
The volume won't be modified unless this key is true.
+
+
+
+
+
+ multi_attach
+
+
+ boolean
+
+
added in 2.0.0
+
+
+
Choices:
+
no
+
yes
+
+
+
+
If set to yes, Multi-Attach will be enabled when creating the volume.
+
When you create a new volume, Multi-Attach is disabled by default.
+
This parameter is supported with io1 and io2 volumes only.
@@ -307,7 +329,6 @@
-
Uses a boto profile. Only works with boto >= 2.24.0.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
aliases: aws_profile
@@ -361,7 +382,7 @@
-
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
If profile is set this parameter is ignored.
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: aws_security_token, access_token
@@ -435,6 +456,7 @@
Volume throughput in MB/s.
This parameter is only valid for gp3 volumes.
Valid range is from 125 to 1000.
+
Requires at least botocore version 1.19.27.
@@ -453,7 +475,7 @@
-
When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
@@ -520,8 +542,9 @@
.. note::
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
@@ -566,7 +589,6 @@
# Example: Launch an instance and then add a volume if not already attached
# * Volume will be created with the given name if not already created.
# * Nothing will happen if the volume is already attached.
- # * Requires Ansible 2.0
- amazon.aws.ec2:
keypair: "{{ keypair }}"
@@ -608,6 +630,14 @@
volume_type: gp2
device_name: /dev/xvdf
+ # Create new volume with multi-attach enabled
+ - amazon.aws.ec2_vol:
+ zone: XXXXXX
+ multi_attach: true
+ volume_size: 4
+ volume_type: io1
+ iops: 102
+
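+ # Sketch: create a gp3 volume with provisioned throughput (the throughput
+ # option is gp3-only and requires botocore >= 1.19.27; values are illustrative)
+ - amazon.aws.ec2_vol:
+ zone: XXXXXX
+ volume_size: 10
+ volume_type: gp3
+ throughput: 250
+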
# Attach an existing volume to instance. The volume will be deleted upon instance termination.
- amazon.aws.ec2_vol:
instance: XXXXXX
@@ -660,7 +690,7 @@
a dictionary containing detailed attributes of the volume
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_dhcp_option_info_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_dhcp_option_info_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_dhcp_option_info_module.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_dhcp_option_info_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -26,9 +26,9 @@
------------
The below requirements are needed on the host that executes this module.
-- boto
-- boto3
-- python >= 2.6
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
Parameters
@@ -54,7 +54,7 @@
-
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_access_key, access_key
@@ -73,7 +73,7 @@
The location of a CA Bundle to use when validating SSL certificates.
-
Only used for boto3 based modules.
+
Not used by boto 2 based modules.
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
@@ -106,7 +106,7 @@
-
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_secret_key, secret_key
@@ -180,7 +180,7 @@
-
Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
aliases: aws_endpoint_url, endpoint_url
@@ -211,7 +211,6 @@
-
Uses a boto profile. Only works with boto >= 2.24.0.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
aliases: aws_profile
@@ -245,7 +244,7 @@
-
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
If profile is set this parameter is ignored.
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: aws_security_token, access_token
@@ -267,7 +266,7 @@
-
When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
@@ -279,8 +278,9 @@
.. note::
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
@@ -322,12 +322,12 @@
-
Key
+
Key
Returned
Description
-
+
changed
@@ -342,20 +342,200 @@
+
+
+ dhcp_config
+
+
+ list
+
+
+
always
+
+
The boto2-style DHCP options created, associated or found. Provided for consistency with ec2_vpc_dhcp_option's `new_config`.
+
+
+
+
+
+
+
+ domain-name
+
+
+ list
+
+
+
when available
+
+
The domain name for hosts in the DHCP option sets
+
+
Sample:
+
['my.example.com']
+
+
+
+
+
+
+ domain-name-servers
+
+
+ list
+
+
+
when available
+
+
The IP addresses of up to four domain name servers, or AmazonProvidedDNS.
+
+
Sample:
+
['10.0.0.1', '10.0.1.1']
+
+
+
+
+
+
+ netbios-name-servers
+
+
+ list
+
+
+
when available
+
+
The IP addresses of up to four NetBIOS name servers.
+
+
Sample:
+
['10.0.0.1', '10.0.1.1']
+
+
+
+
+
+
+ netbios-node-type
+
+
+ string
+
+
+
when available
+
+
The NetBIOS node type (1, 2, 4, or 8).
+
+
Sample:
+
2
+
+
+
+
+ ntp-servers
+
+
+ list
+
+
+
when available
+
+
The IP addresses of up to four Network Time Protocol (NTP) servers.
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_dhcp_option_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_dhcp_option_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_dhcp_option_module.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_dhcp_option_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -25,8 +25,9 @@
------------
The below requirements are needed on the host that executes this module.
-- boto
-- python >= 2.6
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
Parameters
@@ -52,7 +53,7 @@
-
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_access_key, access_key
@@ -71,7 +72,7 @@
The location of a CA Bundle to use when validating SSL certificates.
-
Only used for boto3 based modules.
+
Not used by boto 2 based modules.
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
@@ -104,7 +105,7 @@
-
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_secret_key, secret_key
@@ -206,7 +207,7 @@
-
Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
aliases: aws_endpoint_url, endpoint_url
@@ -288,7 +289,6 @@
-
Uses a boto profile. Only works with boto >= 2.24.0.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
aliases: aws_profile
@@ -297,6 +297,26 @@
+ purge_tags
+
+
+ boolean
+
+
added in 2.0.0
+
+
+
Choices:
+
no
+
yes ←
+
+
+
+
Remove tags not listed in tags.
+
+
+
+
+
region
@@ -322,7 +342,7 @@
-
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
If profile is set this parameter is ignored.
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: aws_security_token, access_token
@@ -379,7 +399,7 @@
-
When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
@@ -406,8 +426,9 @@
.. note::
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
@@ -486,12 +507,12 @@
-
Key
+
Key
Returned
Description
-
+
changed
@@ -506,9 +527,79 @@
+
+
+ dhcp_config
+
+
+ dictionary
+
+
+
when available
+
+
The boto2-style DHCP options created, associated or found
+
+
+
+
+
- dhcp_options_id
+ domain-name
+
+
+ list
+
+
+
when available
+
+
The domain name for hosts in the DHCP option sets
+
+
Sample:
+
['my.example.com']
+
+
+
+
+
+
+ domain-name-servers
+
+
+ list
+
+
+
when available
+
+
The IP addresses of up to four domain name servers, or AmazonProvidedDNS.
+
+
Sample:
+
['10.0.0.1', '10.0.1.1']
+
+
+
+
+
+
+ netbios-name-servers
+
+
+ list
+
+
+
when available
+
+
The IP addresses of up to four NetBIOS name servers.
+
+
Sample:
+
['10.0.0.1', '10.0.1.1']
+
+
+
+
+
+
+ netbios-node-type
string
@@ -516,25 +607,132 @@
when available
-
The aws resource id of the primary DCHP options set created, found or removed
+
The NetBIOS node type (1, 2, 4, or 8).
+
Sample:
+
2
+
- new_options
+ ntp-servers
+
+
+ list
+
+
+
when available
+
+
The IP addresses of up to four Network Time Protocol (NTP) servers.
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: ec2_secret_key, secret_key
+
+
+
+
+
+ debug_botocore_endpoint_logs
+
+
+ boolean
+
+
+
+
Choices:
+
no ←
+
yes
+
+
+
+
Use a botocore.endpoint logger to parse the unique (rather than total) "resource:action" API calls made during a task, outputting the set to the resource_actions key in the task results. Use the aws_resource_action callback to output the total list made during a playbook. The ANSIBLE_DEBUG_BOTOCORE_LOGS environment variable may also be used.
+
+
+
+
+
+ ec2_url
+
+
+ string
+
+
+
+
+
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
+
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
+
aliases: aws_profile
+
+
+
+
+
+ query
+
+
+ string
+
+
+
+
Choices:
+
services
+
endpoints
+
+
+
+
Defaults to endpoints.
+
Specifies the query action to take.
+
query=endpoints returns information about AWS VPC endpoints.
+
Retrieving information about services using query=services has been deprecated in favour of the amazon.aws.ec2_vpc_endpoint_service_info module.
+
The query option has been deprecated and will be removed after 2022-12-01.
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: aws_security_token, access_token
+
+
+
+
+
+ validate_certs
+
+
+ boolean
+
+
+
+
Choices:
+
no
+
yes ←
+
+
+
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
+
+
+
+
+
+ vpc_endpoint_ids
+
+
+ list
+ / elements=string
+
+
+
+
+
+
The IDs of specific endpoints to retrieve the details of.
+
+
+
+
+
+
+Notes
+-----
+
+.. note::
+ - If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
+
+
+
+Examples
+--------
+
+.. code-block:: yaml
+
+ # Simple example of listing all supported AWS services for VPC endpoints
+ - name: List supported AWS endpoint services
+ amazon.aws.ec2_vpc_endpoint_info:
+ query: services
+ region: ap-southeast-2
+ register: supported_endpoint_services
+
+ - name: Get all endpoints in ap-southeast-2 region
+ amazon.aws.ec2_vpc_endpoint_info:
+ query: endpoints
+ region: ap-southeast-2
+ register: existing_endpoints
+
+ - name: Get all endpoints with specific filters
+ amazon.aws.ec2_vpc_endpoint_info:
+ query: endpoints
+ region: ap-southeast-2
+ filters:
+ vpc-id:
+ - vpc-12345678
+ - vpc-87654321
+ vpc-endpoint-state:
+ - available
+ - pending
+ register: existing_endpoints
+
+ - name: Get details on specific endpoint
+ amazon.aws.ec2_vpc_endpoint_info:
+ query: endpoints
+ region: ap-southeast-2
+ vpc_endpoint_ids:
+ - vpce-12345678
+ register: endpoint_details
+
+
+
+Return Values
+-------------
+Common return values are documented `here `_, the following are the fields unique to this module:
+
+.. raw:: html
+
+
A list of endpoints that match the query. Each endpoint has the keys creation_timestamp, policy_document, route_table_ids, service_name, state, vpc_endpoint_id, vpc_id.
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: ec2_secret_key, secret_key
+
+
+
+
+
+ client_token
+
+
+ string
+
+
+
+
+
+
Optional client token to ensure idempotency
+
+
+
+
+
+ debug_botocore_endpoint_logs
+
+
+ boolean
+
+
+
+
Choices:
+
no ←
+
yes
+
+
+
+
Use a botocore.endpoint logger to parse the unique (rather than total) "resource:action" API calls made during a task, outputting the set to the resource_actions key in the task results. Use the aws_resource_action callback to output the total list made during a playbook. The ANSIBLE_DEBUG_BOTOCORE_LOGS environment variable may also be used.
+
+
+
+
+
+ ec2_url
+
+
+ string
+
+
+
+
+
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
Option when creating an endpoint. If not provided AWS will utilise a default policy which provides full access to the service.
+
This option has been deprecated and will be removed after 2022-12-01. To maintain the existing functionality, please use the policy option and a file lookup.
+
aliases: policy_path
+
+
+
+
+
+ profile
+
+
+ string
+
+
+
+
+
+
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
+
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
+
aliases: aws_profile
+
+
+
+
+
+ purge_tags
+
+
+ boolean
+
+
added in 1.5.0
+
+
+
Choices:
+
no ←
+
yes
+
+
+
+
Delete any tags not specified in the task that are on the endpoint. This means you have to specify all the desired tags on each task affecting the endpoint.
List of one or more route table ids to attach to the endpoint. A route is added to the route table with the destination of the endpoint if provided.
+
+
+
+
+
+ security_token
+
+
+ string
+
+
+
+
+
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: aws_security_token, access_token
+
+
+
+
+
+ service
+
+
+ string
+
+
+
+
+
+
An AWS supported vpc endpoint service. Use the amazon.aws.ec2_vpc_endpoint_info module to describe the supported endpoint services.
+
Required when creating an endpoint.
+
+
+
+
+
+ state
+
+
+ string
+
+
+
+
Choices:
+
present ←
+
absent
+
+
+
+
present to ensure the resource is created.
+
absent to remove the resource.
+
+
+
+
+
+ tags
+
+
+ dictionary
+
+
added in 1.5.0
+
+
+
+
+
A dict of tags to apply to the endpoint.
+
To remove all tags set tags={} and purge_tags=true.
+
+
+
+
+
+ validate_certs
+
+
+ boolean
+
+
+
+
Choices:
+
no
+
yes ←
+
+
+
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
+
+
+
+
+
+ vpc_endpoint_id
+
+
+ string
+
+
+
+
+
+
The ID of the VPC endpoint to remove from the AWS account.
+
+
+
+
+
+ vpc_endpoint_security_groups
+
+
+ list
+ / elements=string
+
+
added in 2.1.0
+
+
+
+
+
The list of security groups to attach to the endpoint.
+
Requires vpc_endpoint_type=GatewayLoadBalancer or vpc_endpoint_type=Interface.
+
+
+
+
+
+ vpc_endpoint_subnets
+
+
+ list
+ / elements=string
+
+
added in 2.1.0
+
+
+
+
+
The list of subnets to attach to the endpoint.
+
Requires vpc_endpoint_type=GatewayLoadBalancer or vpc_endpoint_type=Interface.
+
+
+
+
+
+ vpc_endpoint_type
+
+
+ string
+
+
added in 1.5.0
+
+
+
Choices:
+
Interface
+
Gateway ←
+
GatewayLoadBalancer
+
+
+
+
The type of endpoint.
+
+
+
+
+
+ vpc_id
+
+
+ string
+
+
+
+
+
+
Required when creating a VPC endpoint.
+
+
+
+
+
+ wait
+
+
+ boolean
+
+
+
+
Choices:
+
no ←
+
yes
+
+
+
+
When specified, will wait for the available status for state=present. Unfortunately this is ignored for delete actions due to a difference in behaviour from AWS.
+
+
+
+
+
+ wait_timeout
+
+
+ integer
+
+
+
+ Default:
320
+
+
+
Used in conjunction with wait. Number of seconds to wait for status. Unfortunately this is ignored for delete actions due to a difference in behaviour from AWS.
+
+
+
+
+
+
+Notes
+-----
+
+.. note::
+ - If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
+
+
+
+Examples
+--------
+
+.. code-block:: yaml
+
+ # Note: These examples do not set authentication details, see the AWS Guide for details.
+
+ - name: Create new vpc endpoint with a json template for policy
+ amazon.aws.ec2_vpc_endpoint:
+ state: present
+ region: ap-southeast-2
+ vpc_id: vpc-12345678
+ service: com.amazonaws.ap-southeast-2.s3
+ policy: "{{ lookup('template', 'endpoint_policy.json.j2') }}"
+ route_table_ids:
+ - rtb-12345678
+ - rtb-87654321
+ register: new_vpc_endpoint
+
+ - name: Create new vpc endpoint with the default policy
+ amazon.aws.ec2_vpc_endpoint:
+ state: present
+ region: ap-southeast-2
+ vpc_id: vpc-12345678
+ service: com.amazonaws.ap-southeast-2.s3
+ route_table_ids:
+ - rtb-12345678
+ - rtb-87654321
+ register: new_vpc_endpoint
+
+ - name: Create new vpc endpoint with json file
+ amazon.aws.ec2_vpc_endpoint:
+ state: present
+ region: ap-southeast-2
+ vpc_id: vpc-12345678
+ service: com.amazonaws.ap-southeast-2.s3
+ policy_file: "{{ role_path }}/files/endpoint_policy.json"
+ route_table_ids:
+ - rtb-12345678
+ - rtb-87654321
+ register: new_vpc_endpoint
+
+ - name: Delete newly created vpc endpoint
+ amazon.aws.ec2_vpc_endpoint:
+ state: absent
+ vpc_endpoint_id: "{{ new_vpc_endpoint.result['VpcEndpointId'] }}"
+ region: ap-southeast-2
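+
+ # Sketch: an Interface-type endpoint attached to subnets and security groups
+ # (vpc_endpoint_type, vpc_endpoint_subnets and vpc_endpoint_security_groups,
+ # supported from 2.1.0); all IDs and the service name are placeholders
+ - name: Create new vpc endpoint of Interface type
+ amazon.aws.ec2_vpc_endpoint:
+ state: present
+ region: ap-southeast-2
+ vpc_id: vpc-12345678
+ service: com.amazonaws.ap-southeast-2.s3
+ vpc_endpoint_type: Interface
+ vpc_endpoint_subnets:
+ - subnet-12345678
+ vpc_endpoint_security_groups:
+ - sg-12345678
+ register: new_interface_endpoint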
+
+
+
+Return Values
+-------------
+Common return values are documented `here `_, the following are the fields unique to this module:
+
+.. raw:: html
+
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: ec2_secret_key, secret_key
+
+
+
+
+
+ debug_botocore_endpoint_logs
+
+
+ boolean
+
+
+
+
Choices:
+
no ←
+
yes
+
+
+
+
Use a botocore.endpoint logger to parse the unique (rather than total) "resource:action" API calls made during a task, outputting the set to the resource_actions key in the task results. Use the aws_resource_action callback to output the total list made during a playbook. The ANSIBLE_DEBUG_BOTOCORE_LOGS environment variable may also be used.
+
+
+
+
+
+ ec2_url
+
+
+ string
+
+
+
+
+
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
+
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: aws_security_token, access_token
+
+
+
+
+
+ service_names
+
+
+ list
+ / elements=string
+
+
+
+
+
+
A list of service names which can be used to narrow the search results.
+
+
+
+
+
+ validate_certs
+
+
+ boolean
+
+
+
+
Choices:
+
no
+
yes ←
+
+
+
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
+
+
+
+
+
+
+Notes
+-----
+
+.. note::
+ - If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
 + - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
+
+
+
+Examples
+--------
+
+.. code-block:: yaml
+
+ # Simple example of listing all supported AWS services for VPC endpoints
+ - name: List supported AWS endpoint services
+ amazon.aws.ec2_vpc_endpoint_service_info:
+ region: ap-southeast-2
+ register: supported_endpoint_services
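+
+ # A minimal sketch (the service name below is illustrative for this region)
+ # showing the service_names option to narrow the results, and reading the
+ # service_details return value from the registered result.
+ - name: Check whether the S3 endpoint service is supported
+   amazon.aws.ec2_vpc_endpoint_service_info:
+     region: ap-southeast-2
+     service_names:
+       - com.amazonaws.ap-southeast-2.s3
+   register: s3_endpoint_service
+
+ - name: Show the detailed service information
+   ansible.builtin.debug:
+     msg: "{{ s3_endpoint_service.service_details }}"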
+
+
+
+Return Values
+-------------
+Common return values are documented `here `_, the following are the fields unique to this module:
+
+.. raw:: html
+
+
+
+
Key
+
Returned
+
Description
+
+
+
+
+ service_details
+
+
+ complex
+
+
+
success
+
+
Detailed information about the AWS VPC endpoint services.
+
+
+
+
+
+
+
+ acceptance_required
+
+
+ boolean
+
+
+
success
+
+
Whether VPC endpoint connection requests to the service must be accepted by the service owner.
+
+
+
+
+
+
+
+ availability_zones
+
+
+ list
+
+
+
success
+
+
The Availability Zones in which the service is available.
+
+
+
+
+
+
+
+ base_endpoint_dns_names
+
+
+ list
+
+
+
success
+
+
The DNS names for the service.
+
+
+
+
+
+
+
+ manages_vpc_endpoints
+
+
+ boolean
+
+
+
success
+
+
Whether the service manages its VPC endpoints.
+
+
+
+
+
+
+
+ owner
+
+
+ string
+
+
+
success
+
+
The AWS account ID of the service owner.
+
+
+
+
+
+
+
+ private_dns_name
+
+
+ string
+
+
+
success
+
+
The private DNS name for the service.
+
+
+
+
+
+
+
+ private_dns_name_verification_state
+
+
+ string
+
+
+
success
+
+
The verification state of the VPC endpoint service.
+
Consumers of an endpoint service cannot use the private name when the state is not verified.
+
+
+
+
+
+
+
+ private_dns_names
+
+
+ list
+
+
+
success
+
+
The private DNS names assigned to the VPC endpoint service.
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: ec2_secret_key, secret_key
+
+
+
+
+
+ convert_tags
+
+
+ boolean
+
+
added in 1.3.0
+
+
+
Choices:
+
no
+
yes
+
+
+
+
Convert tags from boto3 format (list of dictionaries) to the standard dictionary format.
+
This currently defaults to False. The default will be changed to True after 2022-06-22.
+
+
+
+
+
+ debug_botocore_endpoint_logs
+
+
+ boolean
+
+
+
+
Choices:
+
no ←
+
yes
+
+
+
+
 Use a botocore.endpoint logger to parse the unique (rather than total) "resource:action" API calls made during a task, outputting the set to the resource_actions key in the task results. Use the aws_resource_action callback to output the total list made during a playbook. The ANSIBLE_DEBUG_BOTOCORE_LOGS environment variable may also be used.
+
+
+
+
+
+ ec2_url
+
+
+ string
+
+
+
+
+
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
 Get details of specific Internet Gateway IDs. Provide this value as a list.
+
+
+
+
+
+ profile
+
+
+ string
+
+
+
+
+
+
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
+
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: aws_security_token, access_token
+
+
+
+
+
+ validate_certs
+
+
+ boolean
+
+
+
+
Choices:
+
no
+
yes ←
+
+
+
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
+
+
+
+
+
+
+Notes
+-----
+
+.. note::
+ - If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
 + - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
+
+
+
+Examples
+--------
+
+.. code-block:: yaml
+
 + # Note: These examples do not set authentication details, see the AWS Guide for details.
+
+ - name: Gather information about all Internet Gateways for an account or profile
+ amazon.aws.ec2_vpc_igw_info:
+ region: ap-southeast-2
+ profile: production
+ register: igw_info
+
+ - name: Gather information about a filtered list of Internet Gateways
+ amazon.aws.ec2_vpc_igw_info:
+ region: ap-southeast-2
+ profile: production
+ filters:
+ "tag:Name": "igw-123"
+ register: igw_info
+
+ - name: Gather information about a specific internet gateway by InternetGatewayId
+ amazon.aws.ec2_vpc_igw_info:
+ region: ap-southeast-2
+ profile: production
+ internet_gateway_ids: igw-c1231234
+ register: igw_info
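+
+ # A sketch of consuming the registered result: internet_gateways is the
+ # documented return value; the map filter usage here is illustrative.
+ - name: Report the IDs of the discovered Internet Gateways
+   ansible.builtin.debug:
+     msg: "{{ igw_info.internet_gateways | map(attribute='internet_gateway_id') | list }}"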
+
+
+
+Return Values
+-------------
+Common return values are documented `here `_, the following are the fields unique to this module:
+
+.. raw:: html
+
+
+
+
Key
+
Returned
+
Description
+
+
+
+
+ changed
+
+
+ boolean
+
+
+
always
+
+
True if listing the internet gateways succeeds.
+
+
Sample:
+
false
+
+
+
+
+
+ internet_gateways
+
+
+ complex
+
+
+
always
+
+
The internet gateways for the account.
+
+
+
+
+
+
+
+ attachments
+
+
+ complex
+
+
+
state=present
+
+
Any VPCs attached to the internet gateway
+
+
+
+
+
+
+
+
+ state
+
+
+ string
+
+
+
state=present
+
+
The current state of the attachment
+
+
Sample:
+
available
+
+
+
+
+
+
+
+ vpc_id
+
+
+ string
+
+
+
state=present
+
+
The ID of the VPC.
+
+
Sample:
+
vpc-02123b67
+
+
+
+
+
+
+
+ internet_gateway_id
+
+
+ string
+
+
+
state=present
+
+
The ID of the internet gateway
+
+
Sample:
+
igw-2123634d
+
+
+
+
+
+
+ tags
+
+
+ dictionary
+
+
+
state=present
+
+
Any tags assigned to the internet gateway
+
+
Sample:
+
{'tags': {'Ansible': 'Test'}}
+
+
+
+
+
+
+
+Status
+------
+
+
+Authors
+~~~~~~~
+
+- Nick Aslanidis (@naslanidis)
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_igw_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_igw_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_igw_module.rst 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_igw_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,429 @@
+.. _amazon.aws.ec2_vpc_igw_module:
+
+
+**********************
+amazon.aws.ec2_vpc_igw
+**********************
+
+**Manage an AWS VPC Internet gateway**
+
+
+Version added: 1.0.0
+
+.. contents::
+ :local:
+ :depth: 1
+
+
+Synopsis
+--------
+- Manage an AWS VPC Internet gateway
+
+
+
+Requirements
+------------
+The below requirements are needed on the host that executes this module.
+
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
+
+
+Parameters
+----------
+
+.. raw:: html
+
+
+
+
Parameter
+
Choices/Defaults
+
Comments
+
+
+
+
+ aws_access_key
+
+
+ string
+
+
+
+
+
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: ec2_access_key, access_key
+
+
+
+
+
+ aws_ca_bundle
+
+
+ path
+
+
+
+
+
+
The location of a CA Bundle to use when validating SSL certificates.
+
Not used by boto 2 based modules.
+
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
+
+
+
+
+
+ aws_config
+
+
+ dictionary
+
+
+
+
+
+
A dictionary to modify the botocore configuration.
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: ec2_secret_key, secret_key
+
+
+
+
+
+ debug_botocore_endpoint_logs
+
+
+ boolean
+
+
+
+
Choices:
+
no ←
+
yes
+
+
+
+
 Use a botocore.endpoint logger to parse the unique (rather than total) "resource:action" API calls made during a task, outputting the set to the resource_actions key in the task results. Use the aws_resource_action callback to output the total list made during a playbook. The ANSIBLE_DEBUG_BOTOCORE_LOGS environment variable may also be used.
+
+
+
+
+
+ ec2_url
+
+
+ string
+
+
+
+
+
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
aliases: aws_endpoint_url, endpoint_url
+
+
+
+
+
+ profile
+
+
+ string
+
+
+
+
+
+
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
+
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: aws_security_token, access_token
+
+
+
+
+
+ state
+
+
+ string
+
+
+
+
Choices:
+
present ←
+
absent
+
+
+
+
Create or terminate the IGW
+
+
+
+
+
+ tags
+
+
+ dictionary
+
+
+
+
+
+
A dict of tags to apply to the internet gateway.
+
To remove all tags set tags={} and purge_tags=true.
+
aliases: resource_tags
+
+
+
+
+
+ validate_certs
+
+
+ boolean
+
+
+
+
Choices:
+
no
+
yes ←
+
+
+
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
+
+
+
+
+
+ vpc_id
+
+
+ string
+ / required
+
+
+
+
+
+
The VPC ID for the VPC in which to manage the Internet Gateway.
+
+
+
+
+
+
+Notes
+-----
+
+.. note::
+ - If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
 + - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
+
+
+
+Examples
+--------
+
+.. code-block:: yaml
+
+ # Note: These examples do not set authentication details, see the AWS Guide for details.
+
+ # Ensure that the VPC has an Internet Gateway.
 + # The Internet Gateway ID can be accessed via {{ igw.gateway_id }} for use in setting up NATs etc.
+ - name: Create Internet gateway
+ amazon.aws.ec2_vpc_igw:
+ vpc_id: vpc-abcdefgh
+ state: present
+ register: igw
+
+ - name: Create Internet gateway with tags
+ amazon.aws.ec2_vpc_igw:
+ vpc_id: vpc-abcdefgh
+ state: present
+ tags:
+ Tag1: tag1
+ Tag2: tag2
+ register: igw
+
+ - name: Delete Internet gateway
+ amazon.aws.ec2_vpc_igw:
+ state: absent
+ vpc_id: vpc-abcdefgh
+ register: vpc_igw_delete
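+
+ # Illustrative follow-up only: community.aws.ec2_vpc_route_table is assumed
+ # to be available and is not part of this module's documentation. It shows
+ # how the registered gateway_id can be consumed when setting up routing.
+ - name: Route outbound traffic through the new Internet gateway (sketch)
+   community.aws.ec2_vpc_route_table:
+     vpc_id: vpc-abcdefgh
+     routes:
+       - dest: 0.0.0.0/0
+         gateway_id: "{{ igw.gateway_id }}"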
+
+
+
+Return Values
+-------------
+Common return values are documented `here `_, the following are the fields unique to this module:
+
+.. raw:: html
+
+
+
+
Key
+
Returned
+
Description
+
+
+
+
+ changed
+
+
+ boolean
+
+
+
always
+
+
If any changes have been made to the Internet Gateway.
+
+
Sample:
+
{'changed': False}
+
+
+
+
+
+ gateway_id
+
+
+ string
+
+
+
state=present
+
+
The unique identifier for the Internet Gateway.
+
+
Sample:
+
{'gateway_id': 'igw-XXXXXXXX'}
+
+
+
+
+
+ tags
+
+
+ dictionary
+
+
+
state=present
+
+
 The tags associated with the Internet Gateway.
+
+
Sample:
+
{'tags': {'Ansible': 'Test'}}
+
+
+
+
+
+ vpc_id
+
+
+ string
+
+
+
state=present
+
+
The VPC ID associated with the Internet Gateway.
+
+
Sample:
+
{'vpc_id': 'vpc-XXXXXXXX'}
+
+
+
+
+
+
+Status
+------
+
+
+Authors
+~~~~~~~
+
+- Robert Estelle (@erydo)
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_nat_gateway_info_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_nat_gateway_info_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_nat_gateway_info_module.rst 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_nat_gateway_info_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,580 @@
+.. _amazon.aws.ec2_vpc_nat_gateway_info_module:
+
+
+***********************************
+amazon.aws.ec2_vpc_nat_gateway_info
+***********************************
+
+**Retrieves AWS VPC Managed Nat Gateway details using AWS methods.**
+
+
+Version added: 1.0.0
+
+.. contents::
+ :local:
+ :depth: 1
+
+
+Synopsis
+--------
+- Gets various details related to AWS VPC Managed Nat Gateways
+- This module was called ``ec2_vpc_nat_gateway_facts`` before Ansible 2.9. The usage did not change.
+
+
+
+Requirements
+------------
+The below requirements are needed on the host that executes this module.
+
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
+
+
+Parameters
+----------
+
+.. raw:: html
+
+
+
+
Parameter
+
Choices/Defaults
+
Comments
+
+
+
+
+ aws_access_key
+
+
+ string
+
+
+
+
+
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: ec2_access_key, access_key
+
+
+
+
+
+ aws_ca_bundle
+
+
+ path
+
+
+
+
+
+
The location of a CA Bundle to use when validating SSL certificates.
+
Not used by boto 2 based modules.
+
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
+
+
+
+
+
+ aws_config
+
+
+ dictionary
+
+
+
+
+
+
A dictionary to modify the botocore configuration.
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: ec2_secret_key, secret_key
+
+
+
+
+
+ debug_botocore_endpoint_logs
+
+
+ boolean
+
+
+
+
Choices:
+
no ←
+
yes
+
+
+
+
 Use a botocore.endpoint logger to parse the unique (rather than total) "resource:action" API calls made during a task, outputting the set to the resource_actions key in the task results. Use the aws_resource_action callback to output the total list made during a playbook. The ANSIBLE_DEBUG_BOTOCORE_LOGS environment variable may also be used.
+
+
+
+
+
+ ec2_url
+
+
+ string
+
+
+
+
+
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
List of specific nat gateway IDs to fetch details for.
+
+
+
+
+
+ profile
+
+
+ string
+
+
+
+
+
+
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
+
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: aws_security_token, access_token
+
+
+
+
+
+ validate_certs
+
+
+ boolean
+
+
+
+
Choices:
+
no
+
yes ←
+
+
+
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
+
+
+
+
+
+
+Notes
+-----
+
+.. note::
+ - If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
 + - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
+
+
+
+Examples
+--------
+
+.. code-block:: yaml
+
+ # Simple example of listing all nat gateways
+ - name: List all managed nat gateways in ap-southeast-2
+ amazon.aws.ec2_vpc_nat_gateway_info:
+ region: ap-southeast-2
+ register: all_ngws
+
+ - name: Debugging the result
+ ansible.builtin.debug:
+ msg: "{{ all_ngws.result }}"
+
+ - name: Get details on specific nat gateways
+ amazon.aws.ec2_vpc_nat_gateway_info:
+ nat_gateway_ids:
+ - nat-1234567891234567
+ - nat-7654321987654321
+ region: ap-southeast-2
+ register: specific_ngws
+
+ - name: Get all nat gateways with specific filters
+ amazon.aws.ec2_vpc_nat_gateway_info:
+ region: ap-southeast-2
+ filters:
+ state: ['pending']
+ register: pending_ngws
+
+ - name: Get nat gateways with specific filter
+ amazon.aws.ec2_vpc_nat_gateway_info:
+ region: ap-southeast-2
+ filters:
+ subnet-id: subnet-12345678
+ state: ['available']
+ register: existing_nat_gateways
+
+
+
+Return Values
+-------------
+Common return values are documented `here `_, the following are the fields unique to this module:
+
+.. raw:: html
+
+
+
+
Key
+
Returned
+
Description
+
+
+
+
+ changed
+
+
+ boolean
+
+
+
always
+
+
 True if listing the NAT gateways succeeds.
+
+
+
+
+
+
+ result
+
+
+ list
+
+
+
 success
+
+
The result of the describe, converted to ansible snake case style.
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: ec2_secret_key, secret_key
+
+
+
+
+
+ client_token
+
+
+ string
+
+
+
+
+
+
 Optional unique token to be used during create to ensure idempotency. When specifying this option, ensure you specify the eip_address parameter as well, otherwise any subsequent runs will fail.
+
+
+
+
+
+ debug_botocore_endpoint_logs
+
+
+ boolean
+
+
+
+
Choices:
+
no ←
+
yes
+
+
+
+
 Use a botocore.endpoint logger to parse the unique (rather than total) "resource:action" API calls made during a task, outputting the set to the resource_actions key in the task results. Use the aws_resource_action callback to output the total list made during a playbook. The ANSIBLE_DEBUG_BOTOCORE_LOGS environment variable may also be used.
+
+
+
+
+
+ ec2_url
+
+
+ string
+
+
+
+
+
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
aliases: aws_endpoint_url, endpoint_url
+
+
+
+
+
+ eip_address
+
+
+ string
+
+
+
+
+
+
The elastic IP address of the EIP you want attached to this NAT Gateway. If this is not passed and the allocation_id is not passed, an EIP is generated for this NAT Gateway.
+
+
+
+
+
+ if_exist_do_not_create
+
+
+ boolean
+
+
+
+
Choices:
+
no ←
+
yes
+
+
+
+
 If a NAT Gateway already exists in the subnet_id, then do not create a new one.
+
+
+
+
+
+ nat_gateway_id
+
+
+ string
+
+
+
+
+
+
 The ID AWS dynamically allocates to the NAT Gateway on creation. This is required when state=absent.
+
+
+
+
+
+ profile
+
+
+ string
+
+
+
+
+
+
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
+
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
 You should use this with the wait option, since you cannot release an address while a delete operation is happening.
+
+
+
+
+
+ security_token
+
+
+ string
+
+
+
+
+
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: aws_security_token, access_token
+
+
+
+
+
+ state
+
+
+ string
+
+
+
+
Choices:
+
present ←
+
absent
+
+
+
+
Ensure NAT Gateway is present or absent.
+
+
+
+
+
+ subnet_id
+
+
+ string
+
+
+
+
+
+
The id of the subnet to create the NAT Gateway in. This is required with the present option.
+
+
+
+
+
+ tags
+
+
+ dictionary
+
+
added in 1.4.0
+
+
+
+
+
A dict of tags to apply to the NAT gateway.
+
To remove all tags set tags={} and purge_tags=true.
+
aliases: resource_tags
+
+
+
+
+
+ validate_certs
+
+
+ boolean
+
+
+
+
Choices:
+
no
+
yes ←
+
+
+
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
+
+
+
+
+
+ wait
+
+
+ boolean
+
+
+
+
Choices:
+
no ←
+
yes
+
+
+
+
Wait for operation to complete before returning.
+
+
+
+
+
+ wait_timeout
+
+
+ integer
+
+
+
+ Default:
320
+
+
+
How many seconds to wait for an operation to complete before timing out.
+
+
+
+
+
+
+Notes
+-----
+
+.. note::
+ - If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
 + - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
+
+
+
+Examples
+--------
+
+.. code-block:: yaml
+
+ # Note: These examples do not set authentication details, see the AWS Guide for details.
+
+ - name: Create new nat gateway with client token.
+ amazon.aws.ec2_vpc_nat_gateway:
+ state: present
+ subnet_id: subnet-12345678
+ eip_address: 52.1.1.1
+ region: ap-southeast-2
+ client_token: abcd-12345678
+ register: new_nat_gateway
+
+ - name: Create new nat gateway using an allocation-id.
+ amazon.aws.ec2_vpc_nat_gateway:
+ state: present
+ subnet_id: subnet-12345678
+ allocation_id: eipalloc-12345678
+ region: ap-southeast-2
+ register: new_nat_gateway
+
+ - name: Create new nat gateway, using an EIP address and wait for available status.
+ amazon.aws.ec2_vpc_nat_gateway:
+ state: present
+ subnet_id: subnet-12345678
+ eip_address: 52.1.1.1
+ wait: true
+ region: ap-southeast-2
+ register: new_nat_gateway
+
+ - name: Create new nat gateway and allocate new EIP.
+ amazon.aws.ec2_vpc_nat_gateway:
+ state: present
+ subnet_id: subnet-12345678
+ wait: true
+ region: ap-southeast-2
+ register: new_nat_gateway
+
+ - name: Create new nat gateway and allocate new EIP if a nat gateway does not yet exist in the subnet.
+ amazon.aws.ec2_vpc_nat_gateway:
+ state: present
+ subnet_id: subnet-12345678
+ wait: true
+ region: ap-southeast-2
+ if_exist_do_not_create: true
+ register: new_nat_gateway
+
+ - name: Delete nat gateway using discovered nat gateways from facts module.
+ amazon.aws.ec2_vpc_nat_gateway:
+ state: absent
+ region: ap-southeast-2
+ wait: true
+ nat_gateway_id: "{{ item.NatGatewayId }}"
+ release_eip: true
+ register: delete_nat_gateway_result
+ loop: "{{ gateways_to_remove.result }}"
+
+ - name: Delete nat gateway and wait for deleted status.
+ amazon.aws.ec2_vpc_nat_gateway:
+ state: absent
+ nat_gateway_id: nat-12345678
+ wait: true
+ wait_timeout: 500
+ region: ap-southeast-2
+
+ - name: Delete nat gateway and release EIP.
+ amazon.aws.ec2_vpc_nat_gateway:
+ state: absent
+ nat_gateway_id: nat-12345678
+ release_eip: true
+ wait: yes
+ wait_timeout: 300
+ region: ap-southeast-2
+
+ - name: Create new nat gateway using allocation-id and tags.
+ amazon.aws.ec2_vpc_nat_gateway:
+ state: present
+ subnet_id: subnet-12345678
+ allocation_id: eipalloc-12345678
+ region: ap-southeast-2
+ tags:
+ Tag1: tag1
+ Tag2: tag2
+ register: new_nat_gateway
+
+ - name: Update tags without purge
+ amazon.aws.ec2_vpc_nat_gateway:
+ subnet_id: subnet-12345678
+ allocation_id: eipalloc-12345678
+ region: ap-southeast-2
+ purge_tags: no
+ tags:
+ Tag3: tag3
+ wait: yes
+ register: update_tags_nat_gateway
+
+
+
+Return Values
+-------------
+Common return values are documented `here `_, the following are the fields unique to this module:
+
+.. raw:: html
+
+
+
+
Key
+
Returned
+
Description
+
+
+
+
+ create_time
+
+
+ string
+
+
+
In all cases.
+
+
The ISO 8601 date time format in UTC.
+
+
Sample:
+
 2016-03-05T05:19:20.282000+00:00
+
+
+
+
+
+ nat_gateway_addresses
+
+
+ string
+
+
+
In all cases.
+
+
List of dictionaries containing the public_ip, network_interface_id, private_ip, and allocation_id.
+
+
+Status
+------
+
+
+Authors
+~~~~~~~
+
+- Allen Sanabria (@linuxdynasty)
+- Jon Hadfield (@jonhadfield)
+- Karen Cheng (@Etherdaemon)
+- Alina Buzachis (@alinabuzachis)
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_net_info_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_net_info_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_net_info_module.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_net_info_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -26,10 +26,9 @@
------------
The below requirements are needed on the host that executes this module.
-- boto
-- boto3
-- botocore
-- python >= 2.6
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
Parameters
@@ -55,7 +54,7 @@
-
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_access_key, access_key
@@ -74,7 +73,7 @@
The location of a CA Bundle to use when validating SSL certificates.
-
Only used for boto3 based modules.
+
Not used by boto 2 based modules.
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
@@ -107,7 +106,7 @@
-
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_secret_key, secret_key
@@ -144,7 +143,7 @@
-
Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
aliases: aws_endpoint_url, endpoint_url
@@ -175,7 +174,6 @@
-
Uses a boto profile. Only works with boto >= 2.24.0.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
aliases: aws_profile
@@ -209,7 +207,7 @@
-
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
If profile is set this parameter is ignored.
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: aws_security_token, access_token
@@ -231,7 +229,7 @@
-
When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
@@ -259,8 +257,9 @@
.. note::
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
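To make the precedence note above concrete, here is a minimal sketch of supplying credentials through environment variables (the key values are placeholders, not real credentials):

```shell
# Credentials and region picked up by the AWS modules when no module
# parameters or profile are set; AWS_ACCESS_KEY_ID takes precedence
# over the legacy AWS_ACCESS_KEY / EC2_ACCESS_KEY spellings.
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"              # placeholder
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENGexample"  # placeholder
export AWS_REGION="us-east-1"

echo "region=${AWS_REGION}"   # prints: region=us-east-1
```

The same variables can equally be set per-play or per-task with Ansible's ``environment`` keyword rather than in the login shell.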
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_net_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_net_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_net_module.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_net_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -25,10 +25,9 @@
------------
The below requirements are needed on the host that executes this module.
-- boto
-- boto3
-- botocore
-- python >= 2.6
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
Parameters
@@ -54,7 +53,7 @@
-
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_access_key, access_key
@@ -73,7 +72,7 @@
The location of a CA Bundle to use when validating SSL certificates.
-
Only used for boto3 based modules.
+
Not used by boto 2 based modules.
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
@@ -106,7 +105,7 @@
-
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_secret_key, secret_key
@@ -213,7 +212,7 @@
-
Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
aliases: aws_endpoint_url, endpoint_url
@@ -283,7 +282,6 @@
-
Uses a boto profile. Only works with boto >= 2.24.0.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
aliases: aws_profile
@@ -336,7 +334,7 @@
-
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
If profile is set this parameter is ignored.
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: aws_security_token, access_token
@@ -412,7 +410,7 @@
-
When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
@@ -424,8 +422,9 @@
.. note::
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_route_table_info_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_route_table_info_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_route_table_info_module.rst 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_route_table_info_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,764 @@
+.. _amazon.aws.ec2_vpc_route_table_info_module:
+
+
+***********************************
+amazon.aws.ec2_vpc_route_table_info
+***********************************
+
+**Gather information about ec2 VPC route tables in AWS**
+
+
+Version added: 1.0.0
+
+.. contents::
+ :local:
+ :depth: 1
+
+
+Synopsis
+--------
+- Gather information about ec2 VPC route tables in AWS
+- This module was called ``ec2_vpc_route_table_facts`` before Ansible 2.9. The usage did not change.
+
+
+
+Requirements
+------------
+The below requirements are needed on the host that executes this module.
+
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
+
+
+Parameters
+----------
+
+Parameters:
+
+- aws_access_key (string): AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used. If profile is set this parameter is ignored. Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01. Aliases: ec2_access_key, access_key.
+- aws_ca_bundle (path): The location of a CA Bundle to use when validating SSL certificates. Not used by boto 2 based modules. Note: the CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
+- aws_config (dictionary): A dictionary to modify the botocore configuration.
+- aws_secret_key (string): AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used. If profile is set this parameter is ignored. Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01. Aliases: ec2_secret_key, secret_key.
+- debug_botocore_endpoint_logs (boolean; choices: no (default), yes): Use a botocore.endpoint logger to parse the unique (rather than total) "resource:action" API calls made during a task, outputting the set to the resource_actions key in the task results. Use the aws_resource_action callback to output the total list made during a playbook. The ANSIBLE_DEBUG_BOTOCORE_LOGS environment variable may also be used.
+- ec2_url (string): URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+- profile (string): Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated. aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
+- security_token (string): AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used. If profile is set this parameter is ignored. Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01. Aliases: aws_security_token, access_token.
+- validate_certs (boolean; choices: no, yes (default)): When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
+
+Notes
+-----
+
+.. note::
+ - If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
+
+
+
+Examples
+--------
+
+.. code-block:: yaml
+
+ # Note: These examples do not set authentication details, see the AWS Guide for details.
+
+ - name: Gather information about all VPC route tables
+ amazon.aws.ec2_vpc_route_table_info:
+
+ - name: Gather information about a particular VPC route table using route table ID
+ amazon.aws.ec2_vpc_route_table_info:
+ filters:
+ route-table-id: rtb-00112233
+
+ - name: Gather information about any VPC route table with a tag key Name and value Example
+ amazon.aws.ec2_vpc_route_table_info:
+ filters:
+ "tag:Name": Example
+
+ - name: Gather information about any VPC route table within VPC with ID vpc-abcdef00
+ amazon.aws.ec2_vpc_route_table_info:
+ filters:
+ vpc-id: vpc-abcdef00
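
The gathered facts can be registered and inspected like any other module result. A minimal sketch; the ``route_tables`` return key follows the usual pattern of ``*_info`` modules and should be checked against the module's documented return values:

```yaml
# Register the result and report each route table ID.
# Assumes the module's return key is `route_tables`.
- name: Gather information about all VPC route tables
  amazon.aws.ec2_vpc_route_table_info:
  register: rt_info

- name: Show the ID of every route table found
  ansible.builtin.debug:
    msg: "{{ item.id }}"
  loop: "{{ rt_info.route_tables }}"
```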
+
+
+
+Return Values
+-------------
+Common return values are documented `here `_; the following are the fields unique to this module:
+
+Parameters of the companion amazon.aws.ec2_vpc_route_table module:
+
+- aws_secret_key (string): AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used. If profile is set this parameter is ignored. Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01. Aliases: ec2_secret_key, secret_key.
+- debug_botocore_endpoint_logs (boolean; choices: no (default), yes): Use a botocore.endpoint logger to parse the unique (rather than total) "resource:action" API calls made during a task, outputting the set to the resource_actions key in the task results. Use the aws_resource_action callback to output the total list made during a playbook. The ANSIBLE_DEBUG_BOTOCORE_LOGS environment variable may also be used.
+- ec2_url (string): URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used. Aliases: aws_endpoint_url, endpoint_url.
+- lookup (string; choices: tag (default), id): Look up route table by either tags or by route table ID. Non-unique tag lookup will fail. If no tags are specified then no lookup for an existing route table is performed and a new route table will be created. To change tags of a route table you must look up by id.
+- profile (string): Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated. aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01. Aliases: aws_profile.
+- propagating_vgw_ids (list, elements=string): Enable route propagation from virtual gateways specified by ID.
+- purge_routes (boolean; choices: no, yes (default)): Purge existing routes that are not found in routes.
+- purge_subnets (boolean; choices: no, yes; default: true): Purge existing subnets that are not found in subnets. Ignored unless the subnets option is supplied.
+- purge_tags (boolean; choices: no (default), yes): Purge existing tags that are not found in route table.
+- routes (list): List of routes in the route table. Routes are specified as dicts containing the keys 'dest' and one of 'gateway_id', 'instance_id', 'network_interface_id', or 'vpc_peering_connection_id'. If 'gateway_id' is specified, you can refer to the VPC's IGW by using the value 'igw'. Routes are required for present states.
+- security_token (string): AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used. If profile is set this parameter is ignored. Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01. Aliases: aws_security_token, access_token.
+- state (string; choices: present (default), absent): Create or destroy the VPC route table.
+- subnets (list, elements=string): An array of subnets to add to this route table. Subnets may be specified by either subnet ID, Name tag, or by a CIDR such as '10.0.0.0/24'.
+- tags (dictionary): A dictionary of resource tags of the form: { tag1: value1, tag2: value2 }. Tags are used to uniquely identify route tables within a VPC when the route_table_id is not supplied. Aliases: resource_tags.
+- validate_certs (boolean; choices: no, yes (default)): When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
+- vpc_id (string): VPC ID of the VPC in which to create the route table. Required when state=present or lookup=tag.
+
+Notes
+-----
+
+.. note::
+ - If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
+
+
+
+Examples
+--------
+
+.. code-block:: yaml
+
+ # Note: These examples do not set authentication details, see the AWS Guide for details.
+
+ # Basic creation example:
+ - name: Set up public subnet route table
+ amazon.aws.ec2_vpc_route_table:
+ vpc_id: vpc-1245678
+ region: us-west-1
+ tags:
+ Name: Public
+ subnets:
+ - "{{ jumpbox_subnet.subnet.id }}"
+ - "{{ frontend_subnet.subnet.id }}"
+ - "{{ vpn_subnet.subnet_id }}"
+ routes:
+ - dest: 0.0.0.0/0
+ gateway_id: "{{ igw.gateway_id }}"
+ register: public_route_table
+
+ - name: Set up NAT-protected route table
+ amazon.aws.ec2_vpc_route_table:
+ vpc_id: vpc-1245678
+ region: us-west-1
+ tags:
+ Name: Internal
+ subnets:
+ - "{{ application_subnet.subnet.id }}"
+ - 'Database Subnet'
+ - '10.0.0.0/8'
+ routes:
+ - dest: 0.0.0.0/0
+ instance_id: "{{ nat.instance_id }}"
+ register: nat_route_table
+
+ - name: delete route table
+ amazon.aws.ec2_vpc_route_table:
+ vpc_id: vpc-1245678
+ region: us-west-1
+ route_table_id: "{{ route_table.id }}"
+ lookup: id
+ state: absent
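
The purge and propagation options described above can be combined in a single task. A sketch building on the registered ``public_route_table`` result from the first example; the virtual gateway ID is a placeholder:

```yaml
# Keep only the routes listed here (purge_routes defaults to true),
# leave unlisted subnet associations alone, and propagate routes from
# a virtual gateway. vgw-0123456789abcdef0 is a placeholder ID.
- name: Update route table without touching subnet associations
  amazon.aws.ec2_vpc_route_table:
    vpc_id: vpc-1245678
    region: us-west-1
    lookup: id
    route_table_id: "{{ public_route_table.route_table.id }}"
    purge_subnets: false
    propagating_vgw_ids:
      - vgw-0123456789abcdef0
    routes:
      - dest: 0.0.0.0/0
        gateway_id: igw
```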
+
+
+
+Return Values
+-------------
+Common return values are documented `here `_; the following are the fields unique to this module:
+
+- route_table (complex, returned always): Route Table result.
+
+  - associations (complex, always): List of subnets associated with the route table.
+
+    - main (boolean, always): Whether this is the main route table.
+    - route_table_association_id (string, always): ID of association between route table and subnet. Sample: rtbassoc-ab47cfc3
+    - route_table_id (string, always): ID of the route table. Sample: rtb-bf779ed7
+    - subnet_id (string, always): ID of the subnet. Sample: subnet-82055af9
+
+  - id (string, always): ID of the route table (same as route_table_id for backwards compatibility). Sample: rtb-bf779ed7
+  - propagating_vgws (list, always): List of Virtual Private Gateways propagating routes.
+  - route_table_id (string, always): ID of the route table. Sample: rtb-bf779ed7
+  - routes (complex, always): List of routes in the route table.
+
+    - destination_cidr_block (string, always): CIDR block of destination. Sample: 10.228.228.0/22
+    - gateway_id (string, when gateway is local or internet gateway): ID of the gateway. Sample: local
+    - instance_id (string, when the route is via an EC2 instance): ID of a NAT instance. Sample: i-abcd123456789
+    - instance_owner_id (string, when the route is via an EC2 instance): AWS account owning the NAT instance. Sample: 123456789012
+    - nat_gateway_id (string, when the route is via a NAT gateway): ID of the NAT gateway. Sample: local
+    - origin (string, always): Mechanism through which the route is in the table. Sample: CreateRouteTable
+    - state (string, always): State of the route. Sample: active
+
+  - tags (dictionary, always): Tags applied to the route table. Sample: {'Name': 'Public route table', 'Public': 'true'}
+  - vpc_id (string, always): ID for the VPC in which the route lives. Sample: vpc-6e2d2407
+
+
+Status
+------
+
+
+Authors
+~~~~~~~
+
+- Robert Estelle (@erydo)
+- Rob White (@wimnat)
+- Will Thames (@willthames)
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_subnet_info_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_subnet_info_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_subnet_info_module.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_subnet_info_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -26,10 +26,9 @@
------------
The below requirements are needed on the host that executes this module.
-- boto
-- boto3
-- botocore
-- python >= 2.6
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
Parameters
@@ -55,7 +54,7 @@
-
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_access_key, access_key
@@ -74,7 +73,7 @@
The location of a CA Bundle to use when validating SSL certificates.
-
Only used for boto3 based modules.
+
Not used by boto 2 based modules.
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
@@ -107,7 +106,7 @@
-
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_secret_key, secret_key
@@ -144,7 +143,7 @@
-
Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
aliases: aws_endpoint_url, endpoint_url
@@ -175,7 +174,6 @@
-
Uses a boto profile. Only works with boto >= 2.24.0.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
aliases: aws_profile
@@ -209,7 +207,7 @@
-
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
If profile is set this parameter is ignored.
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: aws_security_token, access_token
@@ -248,7 +246,7 @@
-
When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
@@ -260,8 +258,9 @@
.. note::
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_subnet_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_subnet_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_subnet_module.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.ec2_vpc_subnet_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -25,9 +25,9 @@
------------
The below requirements are needed on the host that executes this module.
-- boto
-- boto3
-- python >= 2.6
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
Parameters
@@ -72,7 +72,7 @@
-
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_access_key, access_key
@@ -91,7 +91,7 @@
The location of a CA Bundle to use when validating SSL certificates.
-
Only used for boto3 based modules.
+
Not used by boto 2 based modules.
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
@@ -124,7 +124,7 @@
-
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_secret_key, secret_key
@@ -192,7 +192,7 @@
-
Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
aliases: aws_endpoint_url, endpoint_url
@@ -243,7 +243,6 @@
-
Uses a boto profile. Only works with boto >= 2.24.0.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
aliases: aws_profile
@@ -296,7 +295,7 @@
-
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
If profile is set this parameter is ignored.
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: aws_security_token, access_token
@@ -353,7 +352,7 @@
-
When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
@@ -416,8 +415,9 @@
.. note::
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.elb_classic_lb_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.elb_classic_lb_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.elb_classic_lb_module.rst 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.elb_classic_lb_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,1896 @@
+.. _amazon.aws.elb_classic_lb_module:
+
+
+*************************
+amazon.aws.elb_classic_lb
+*************************
+
+**creates, updates or destroys an Amazon ELB.**
+
+
+Version added: 1.0.0
+
+.. contents::
+ :local:
+ :depth: 1
+
+
+Synopsis
+--------
+- Creates, updates or destroys an Amazon Elastic Load Balancer (ELB).
+- This module was renamed from ``amazon.aws.ec2_elb_lb`` to :ref:`amazon.aws.elb_classic_lb ` in version 2.1.0 of the amazon.aws collection.
+
+
+
+Requirements
+------------
+The below requirements are needed on the host that executes this module.
+
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
+
+
+Parameters
+----------
+
+.. raw:: html
+
+
+
+
Parameter
+
Choices/Defaults
+
Comments
+
+
+
+
+ access_logs
+
+
+ dictionary
+
+
+
+
+
+
A dictionary of access logs configuration settings (see examples).
+
+
+
+
+
+
+ enabled
+
+
+ boolean
+
+
+
+
Choices:
+
no
+
yes ←
+
+
+
+
When set to True will configure delivery of access logs to an S3 bucket.
+
When set to False will disable delivery of access logs.
+
+
+
+
+
+
+ interval
+
+
+ integer
+
+
+
+
Choices:
+
5
+
60 ←
+
+
+
+
The interval for publishing the access logs to S3.
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: ec2_secret_key, secret_key
+
+
+
+
+
+ connection_draining_timeout
+
+
+ integer
+
+
+
+
+
+
Wait a specified timeout allowing connections to drain before terminating an instance.
+
Set to 0 to disable connection draining.
+
+
+
+
+
+ cross_az_load_balancing
+
+
+ boolean
+
+
+
+
Choices:
+
no
+
yes
+
+
+
+
Distribute load across all configured Availability Zones.
+
Defaults to false.
+
+
+
+
+
+ debug_botocore_endpoint_logs
+
+
+ boolean
+
+
+
+
Choices:
+
no ←
+
yes
+
+
+
+
Use a botocore.endpoint logger to parse the unique (rather than total) "resource:action" API calls made during a task, outputting the set to the resource_actions key in the task results. Use the aws_resource_action callback to output the total list made during a playbook. The ANSIBLE_DEBUG_BOTOCORE_LOGS environment variable may also be used.
+
+
+
+
+
+ ec2_url
+
+
+ string
+
+
+
+
+
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
aliases: aws_endpoint_url, endpoint_url
+
+
+
+
+
+ health_check
+
+
+ dictionary
+
+
+
+
+
+
A dictionary of health check configuration settings (see examples).
+
+
+
+
+
+
+ healthy_threshold
+
+
+ integer
+ / required
+
+
+
+
+
+
The number of consecutive health checks successes required before moving the instance to the Healthy state.
+
+
+
+
+
+
+ interval
+
+
+ integer
+ / required
+
+
+
+
+
+
The approximate interval, in seconds, between health checks of an individual instance.
+
+
+
+
+
+
+ ping_path
+
+
+ string
+
+
+
+
+
+
The URI path which the ELB health check will query when performing a health check.
+
Required when ping_protocol=HTTP or ping_protocol=HTTPS.
+
+
+
+
+
+
+ ping_port
+
+
+ integer
+ / required
+
+
+
+
+
+
The TCP port to which the ELB will connect when performing a health check.
+
+
+
+
+
+
+ ping_protocol
+
+
+ string
+ / required
+
+
+
+
+
+
The protocol which the ELB health check will use when performing a health check.
+
Valid values are 'HTTP', 'HTTPS', 'TCP' and 'SSL'.
+
+
+
+
+
+
+ timeout
+
+
+ integer
+ / required
+
+
+
+
+
+
The amount of time, in seconds, after which no response means a failed health check.
+
aliases: response_timeout
+
+
+
+
+
+
+ unhealthy_threshold
+
+
+ integer
+ / required
+
+
+
+
+
+
The number of consecutive health check failures required before moving the instance to the Unhealthy state.
+
+
+
+
+
+
+ idle_timeout
+
+
+ integer
+
+
+
+
+
+
ELB connections from clients and to servers are timed out after this amount of time.
+
+
+
+
+
+ instance_ids
+
+
+ list
+ / elements=string
+
+
+
+
+
+
List of instance ids to attach to this ELB.
+
+
+
+
+
+ listeners
+
+
+ list
+ / elements=dictionary
+
+
+
+
+
+
List of ports/protocols for this ELB to listen on (see examples).
+
Required when state=present and the ELB doesn't exist.
+
+
+
+
+
+
+ instance_port
+
+
+ integer
+ / required
+
+
+
+
+
+
The port on which the instance is listening.
+
+
+
+
+
+
+ instance_protocol
+
+
+ string
+
+
+
+
+
+
The protocol to use for routing traffic to instances.
+
Valid values are HTTP, HTTPS, TCP, or SSL.
+
+
+
+
+
+
+ load_balancer_port
+
+
+ integer
+ / required
+
+
+
+
+
+
The port on which the load balancer will listen.
+
+
+
+
+
+
+ protocol
+
+
+ string
+ / required
+
+
+
+
+
+
The transport protocol to use for routing.
+
Valid values are HTTP, HTTPS, TCP, or SSL.
+
+
+
+
+
+
+ proxy_protocol
+
+
+ boolean
+
+
+
+
Choices:
+
no
+
yes
+
+
+
+
Enable proxy protocol for the listener.
+
Beware, ELB controls for the proxy protocol are based on the instance_port. If you have multiple listeners talking to the same instance_port, this will affect all of them.
+
+
+
+
+
+
+ ssl_certificate_id
+
+
+ string
+
+
+
+
+
+
The Amazon Resource Name (ARN) of the SSL certificate.
+
+
+
+
+
+
+ name
+
+
+ string
+ / required
+
+
+
+
+
+
The name of the ELB.
+
The name of an ELB must be less than 32 characters and unique per-region per-account.
+
+
+
+
+
+ profile
+
+
+ string
+
+
+
+
+
+
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
+
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
+
aliases: aws_profile
+
+
+
+
+
+ purge_instance_ids
+
+
+ boolean
+
+
+
+
Choices:
+
no ←
+
yes
+
+
+
+
Purge existing instance ids on ELB that are not found in instance_ids.
+
+
+
+
+
+ purge_listeners
+
+
+ boolean
+
+
+
+
Choices:
+
no
+
yes ←
+
+
+
+
Purge existing listeners on ELB that are not found in listeners.
+
+
+
+
+
+ purge_subnets
+
+
+ boolean
+
+
+
+
Choices:
+
no ←
+
yes
+
+
+
+
Purge existing subnets on the ELB that are not found in subnets.
+
Because it is not permitted to add multiple subnets from the same availability zone, subnets to be purged will be removed before new subnets are added. This may cause a brief outage if you try to replace all subnets at once.
+
+
+
+
+
+ purge_tags
+
+
+ boolean
+
+
added in 2.1.0
+
+
+
Choices:
+
no
+
yes ←
+
+
+
+
Whether to remove existing tags that aren't passed in the tags parameter.
+
+
+
+
+
+ purge_zones
+
+
+ boolean
+
+
+
+
Choices:
+
no ←
+
yes
+
+
+
+
Purge existing availability zones on ELB that are not found in zones.
If you choose to update your scheme with a different value the ELB will be destroyed and a new ELB created.
+
Defaults to scheme=internet-facing.
+
+
+
+
+
+ security_group_ids
+
+
+ list
+ / elements=string
+
+
+
+
+
+
A list of security groups to apply to the ELB.
+
+
+
+
+
+ security_group_names
+
+
+ list
+ / elements=string
+
+
+
+
+
+
A list of security group names to apply to the ELB.
+
+
+
+
+
+ security_token
+
+
+ string
+
+
+
+
+
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
If profile is set this parameter is ignored.
+
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
+
aliases: aws_security_token, access_token
+
+
+
+
+
+ state
+
+
+ string
+ / required
+
+
+
+
Choices:
+
absent
+
present
+
+
+
+
Create or destroy the ELB.
+
+
+
+
+
+ stickiness
+
+
+ dictionary
+
+
+
+
+
+
A dictionary of stickiness policy settings.
+
Policy will be applied to all listeners (see examples).
+
+
+
+
+
+
+ cookie
+
+
+ string
+
+
+
+
+
+
The name of the application cookie used for stickiness.
+
Required if enabled=true and type=application.
+
Ignored if enabled=false.
+
+
+
+
+
+
+ enabled
+
+
+ boolean
+
+
+
+
Choices:
+
no
+
yes ←
+
+
+
+
When enabled=false session stickiness will be disabled for all listeners.
+
+
+
+
+
+
+ expiration
+
+
+ integer
+
+
+
+
+
+
The time period, in seconds, after which the cookie should be considered stale.
+
If this parameter is not specified, the stickiness session lasts for the duration of the browser session.
+
Ignored if enabled=false.
+
+
+
+
+
+
+ type
+
+
+ string
+
+
+
+
Choices:
+
application
+
loadbalancer
+
+
+
+
The type of stickiness policy to apply.
+
Required if enabled=true.
+
Ignored if enabled=false.
+
+
+
+
+
+
+ subnets
+
+
+ list
+ / elements=string
+
+
+
+
+
+
A list of VPC subnets to use when creating the ELB.
+
Mutually exclusive with zones.
+
+
+
+
+
+ tags
+
+
+ dictionary
+
+
+
+
+
+
A dictionary of tags to apply to the ELB.
+
To delete all tags supply an empty dict ({}) and set purge_tags=true.
+
+
+
+
+
+ validate_certs
+
+
+ boolean
+
+
+
+
Choices:
+
no
+
yes ←
+
+
+
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
+
+
+
+
+
+ wait
+
+
+ boolean
+
+
+
+
Choices:
+
no ←
+
yes
+
+
+
+
When creating, deleting, or adding instances to an ELB, if wait=true Ansible will wait for both the load balancer and related network interfaces to finish creating/deleting.
+
Support for waiting when adding instances was added in release 2.1.0.
+
+
+
+
+
+ wait_timeout
+
+
+ integer
+
+
+
+ Default:
180
+
+
+
Used in conjunction with wait. Number of seconds to wait for the ELB to be terminated.
+
A maximum of 600 seconds (10 minutes) is allowed.
+
+
+
+
+
+ zones
+
+
+ list
+ / elements=string
+
+
+
+
+
+
List of availability zones to enable on this ELB.
+
Mutually exclusive with subnets.
+
+
+
+
+
+
+Notes
+-----
+
+.. note::
+ - The ec2_elb fact currently set by this module has been deprecated and will no longer be set after release 4.0.0 of the collection.
+ - If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence: ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
+
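The environment-variable fallback order in the note above can be sketched as a small helper; ``first_env`` is an illustrative name for this guide, not part of the collection, and the simulated environment is a placeholder:

```python
import os


def first_env(*names, default=None):
    """Return the value of the first environment variable in *names* that is set."""
    for name in names:
        value = os.environ.get(name)
        if value is not None:
            return value
    return default


# Simulate a shell where only the legacy EC2_REGION variable is exported.
os.environ.pop("AWS_REGION", None)
os.environ["EC2_REGION"] = "us-east-1"

# AWS_REGION takes precedence over EC2_REGION when both are set.
region = first_env("AWS_REGION", "EC2_REGION")
print(region)  # -> us-east-1
```

The same pattern applies to each pair or triple in the precedence list (access key, secret key, security token, endpoint URL).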
+
+
+Examples
+--------
+
+.. code-block:: yaml
+
+ # Note: None of these examples set aws_access_key, aws_secret_key, or region.
+ # It is assumed that their matching environment variables are set.
+
+ # Basic provisioning example (non-VPC)
+
+ - amazon.aws.elb_classic_lb:
+ name: "test-please-delete"
+ state: present
+ zones:
+ - us-east-1a
+ - us-east-1d
+ listeners:
+ - protocol: http # options are http, https, ssl, tcp
+ load_balancer_port: 80
+ instance_port: 80
+ proxy_protocol: True
+ - protocol: https
+ load_balancer_port: 443
+ instance_protocol: http # optional, defaults to value of protocol setting
+ instance_port: 80
+ # ssl certificate required for https or ssl
+ ssl_certificate_id: "arn:aws:iam::123456789012:server-certificate/company/servercerts/ProdServerCert"
+
+ # Internal ELB example
+
+ - amazon.aws.elb_classic_lb:
+ name: "test-vpc"
+ scheme: internal
+ state: present
+ instance_ids:
+ - i-abcd1234
+ purge_instance_ids: true
+ subnets:
+ - subnet-abcd1234
+ - subnet-1a2b3c4d
+ listeners:
+ - protocol: http # options are http, https, ssl, tcp
+ load_balancer_port: 80
+ instance_port: 80
+
+ # Configure a health check and the access logs
+ - amazon.aws.elb_classic_lb:
+ name: "test-please-delete"
+ state: present
+ zones:
+ - us-east-1d
+ listeners:
+ - protocol: http
+ load_balancer_port: 80
+ instance_port: 80
+ health_check:
+ ping_protocol: http # options are http, https, ssl, tcp
+ ping_port: 80
+ ping_path: "/index.html" # not required for tcp or ssl
+ response_timeout: 5 # seconds
+ interval: 30 # seconds
+ unhealthy_threshold: 2
+ healthy_threshold: 10
+ access_logs:
+ interval: 5 # minutes (defaults to 60)
+ s3_location: "my-bucket" # This value is required if access_logs is set
+ s3_prefix: "logs"
+
+ # Ensure ELB is gone
+ - amazon.aws.elb_classic_lb:
+ name: "test-please-delete"
+ state: absent
+
+ # Ensure ELB is gone and wait for check (for default timeout)
+ - amazon.aws.elb_classic_lb:
+ name: "test-please-delete"
+ state: absent
+ wait: yes
+
+ # Ensure ELB is gone and wait for check with timeout value
+ - amazon.aws.elb_classic_lb:
+ name: "test-please-delete"
+ state: absent
+ wait: yes
+ wait_timeout: 600
+
+ # Normally, this module will purge any listeners that exist on the ELB
+ # but aren't specified in the listeners parameter. If purge_listeners is
+ # false it leaves them alone
+ - amazon.aws.elb_classic_lb:
+ name: "test-please-delete"
+ state: present
+ zones:
+ - us-east-1a
+ - us-east-1d
+ listeners:
+ - protocol: http
+ load_balancer_port: 80
+ instance_port: 80
+ purge_listeners: no
+
+ # Normally, this module will leave availability zones that are enabled
+ # on the ELB alone. If purge_zones is true, then any extraneous zones
+ # will be removed
+ - amazon.aws.elb_classic_lb:
+ name: "test-please-delete"
+ state: present
+ zones:
+ - us-east-1a
+ - us-east-1d
+ listeners:
+ - protocol: http
+ load_balancer_port: 80
+ instance_port: 80
+ purge_zones: yes
+
+ # Creates a ELB and assigns a list of subnets to it.
+ - amazon.aws.elb_classic_lb:
+ state: present
+ name: 'New ELB'
+ security_group_ids: 'sg-123456, sg-67890'
+ subnets: 'subnet-123456,subnet-67890'
+ purge_subnets: yes
+ listeners:
+ - protocol: http
+ load_balancer_port: 80
+ instance_port: 80
+
+ # Create an ELB with connection draining, increased idle timeout and cross availability
+ # zone load balancing
+ - amazon.aws.elb_classic_lb:
+ name: "New ELB"
+ state: present
+ connection_draining_timeout: 60
+ idle_timeout: 300
+ cross_az_load_balancing: "yes"
+ zones:
+ - us-east-1a
+ - us-east-1d
+ listeners:
+ - protocol: http
+ load_balancer_port: 80
+ instance_port: 80
+
+ # Create an ELB with load balancer stickiness enabled
+ - amazon.aws.elb_classic_lb:
+ name: "New ELB"
+ state: present
+ zones:
+ - us-east-1a
+ - us-east-1d
+ listeners:
+ - protocol: http
+ load_balancer_port: 80
+ instance_port: 80
+ stickiness:
+ type: loadbalancer
+ enabled: yes
+ expiration: 300
+
+ # Create an ELB with application stickiness enabled
+ - amazon.aws.elb_classic_lb:
+ name: "New ELB"
+ state: present
+ zones:
+ - us-east-1a
+ - us-east-1d
+ listeners:
+ - protocol: http
+ load_balancer_port: 80
+ instance_port: 80
+ stickiness:
+ type: application
+ enabled: yes
+ cookie: SESSIONID
+
+ # Create an ELB and add tags
+ - amazon.aws.elb_classic_lb:
+ name: "New ELB"
+ state: present
+ zones:
+ - us-east-1a
+ - us-east-1d
+ listeners:
+ - protocol: http
+ load_balancer_port: 80
+ instance_port: 80
+ tags:
+ Name: "New ELB"
+ stack: "production"
+ client: "Bob"
+
+ # Delete all tags from an ELB
+ - amazon.aws.elb_classic_lb:
+ name: "New ELB"
+ state: present
+ zones:
+ - us-east-1a
+ - us-east-1d
+ listeners:
+ - protocol: http
+ load_balancer_port: 80
+ instance_port: 80
+ tags: {}
+
+
+
+Return Values
+-------------
Common return values are documented `here `_; the following are the fields unique to this module:
+
+.. raw:: html
+
+
+
+
Key
+
Returned
+
Description
+
+
+
+
+ elb
+
+
+ dictionary
+
+
+
always
+
+
Load Balancer attributes
+
+
+
+
+
+
+
+ app_cookie_policy
+
+
+ string
+
+
+
when state is not 'absent'
+
+
The name of the policy used to control if the ELB is using an application cookie stickiness policy.
+
+
Sample:
+
ec2-elb-lb-AppCookieStickinessPolicyType
+
+
+
+
+
+
+ backends
+
+
+ string
+
+
+
when state is not 'absent'
+
+
A description of the backend policy applied to the ELB (instance-port:policy-name).
+
+
Sample:
+
8181:ProxyProtocol-policy
+
+
+
+
+
+
+ connection_draining_timeout
+
+
+ integer
+
+
+
when state is not 'absent'
+
+
The maximum time, in seconds, to keep the existing connections open before deregistering the instances.
+
+
Sample:
+
25
+
+
+
+
+
+
+ cross_az_load_balancing
+
+
+ string
+
+
+
when state is not 'absent'
+
+
Either 'yes' if cross-AZ load balancing is enabled, or 'no' if cross-AZ load balancing is disabled.
A dictionary describing the health check used for the ELB.
+
+
+
+
+
+
+
+
+ healthy_threshold
+
+
+ integer
+
+
+
+
+
The number of consecutive successful health checks before marking an instance as healthy.
+
+
Sample:
+
2
+
+
+
+
+
+
+
+ interval
+
+
+ integer
+
+
+
+
+
The time, in seconds, between each health check.
+
+
Sample:
+
10
+
+
+
+
+
+
+
+ target
+
+
+ string
+
+
+
+
+
The Protocol, Port, and for HTTP(S) health checks the path tested by the health check.
+
+
Sample:
+
TCP:22
+
+
+
+
+
+
+
+ timeout
+
+
+ integer
+
+
+
+
+
The time, in seconds, after which an in-progress health check is considered failed due to a timeout.
+
+
Sample:
+
5
+
+
+
+
+
+
+
+ unhealthy_threshold
+
+
+ integer
+
+
+
+
+
The number of consecutive failed health checks before marking an instance as unhealthy.
+
+
Sample:
+
2
+
+
+
+
+
+
+
+ hosted_zone_id
+
+
+ string
+
+
+
when state is not 'absent'
+
+
The ID of the Amazon Route 53 hosted zone for the load balancer.
+
+
Sample:
+
Z35SXDOTRQ7X7K
+
+
+
+
+
+
+ hosted_zone_name
+
+
+ string
+
+
+
when state is not 'absent'
+
+
The DNS name of the load balancer when using a custom hostname.
+
+
Sample:
+
ansible-module.example
+
+
+
+
+
+
+ idle_timeout
+
+
+ integer
+
+
+
when state is not 'absent'
+
+
The length of time before an idle connection is dropped by the ELB.
+
+
Sample:
+
50
+
+
+
+
+
+
+ in_service_count
+
+
+ integer
+
+
+
when state is not 'absent'
+
+
The number of instances attached to the ELB in an in-service state.
+
+
Sample:
+
1
+
+
+
+
+
+
+ instance_health
+
+
+ list
+ / elements=dictionary
+
+
+
when state is not 'absent'
+
+
A list of dictionaries describing the health of each instance attached to the ELB.
+
+
+
+
+
+
+
+
+ description
+
+
+ string
+
+
+
when state is not 'absent'
+
+
A human readable description of why the instance is not in service.
+
+
Sample:
+
N/A
+
+
+
+
+
+
+
+ instance_id
+
+
+ string
+
+
+
when state is not 'absent'
+
+
The ID of the instance.
+
+
Sample:
+
i-03dcc8953a03d6435
+
+
+
+
+
+
+
+ reason_code
+
+
+ string
+
+
+
when state is not 'absent'
+
+
A code describing why the instance is not in service.
+
+
Sample:
+
N/A
+
+
+
+
+
+
+
+ state
+
+
+ string
+
+
+
when state is not 'absent'
+
+
The current service state of the instance.
+
+
Sample:
+
InService
+
+
+
+
+
+
+
+ instances
+
+
+ list
+ / elements=string
+
+
+
when state is not 'absent'
+
+
A list of the IDs of instances attached to the ELB.
+
+
Sample:
+
['i-03dcc8953a03d6435']
+
+
+
+
+
+
+ lb_cookie_policy
+
+
+ string
+
+
+
when state is not 'absent'
+
+
The name of the policy used to control if the ELB is using a cookie stickiness policy.
+
+
Sample:
+
ec2-elb-lb-LBCookieStickinessPolicyType
+
+
+
+
+
+
+ listeners
+
+
+ list
+ / elements=list
+
+
+
when state is not 'absent'
+
+
A list of lists describing the listeners attached to the ELB.
+
The nested list contains the listener port, the instance port, the listener protocol, the instance protocol, and where appropriate the ID of the SSL certificate for the port.
The number of instances attached to the ELB in an unknown state.
+
+
+
+
+
+
+
+ zones
+
+
+ list
+ / elements=string
+
+
+
when state is not 'absent'
+
+
A list of the AWS availability zones in which the ELB is running.
+
+
Sample:
+
['us-east-1b', 'us-east-1a']
+
+
+
+
+
+
+
+Status
+------
+
+
+Authors
+~~~~~~~
+
+- Jim Dalton (@jsdalton)
+- Mark Chappell (@tremble)
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.s3_bucket_module.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.s3_bucket_module.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/amazon.aws.s3_bucket_module.rst 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/amazon.aws.s3_bucket_module.rst 2021-11-12 18:13:53.000000000 +0000
@@ -25,9 +25,9 @@
------------
The below requirements are needed on the host that executes this module.
-- boto
-- boto3
-- python >= 2.6
+- python >= 3.6
+- boto3 >= 1.15.0
+- botocore >= 1.18.0
Parameters
@@ -53,7 +53,7 @@
-
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+
AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_access_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_access_key, access_key
@@ -72,7 +72,7 @@
The location of a CA Bundle to use when validating SSL certificates.
-
Only used for boto3 based modules.
+
Not used by boto 2 based modules.
Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally.
@@ -105,7 +105,7 @@
-
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+
AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
If profile is set this parameter is ignored.
Passing the aws_secret_key and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: ec2_secret_key, secret_key
@@ -152,6 +152,28 @@
+ delete_object_ownership
+
+
+ boolean
+
+
added in 2.0.0
+
+
+
Choices:
+
no ←
+
yes
+
+
+
+
Delete bucket's ownership controls.
+
This option cannot be used together with an object_ownership definition.
+
Management of bucket ownership controls requires botocore>=1.18.11.
+
+
+
+
+
delete_public_access
@@ -182,7 +204,7 @@
-
Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
+
URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints). Ignored for modules where region is required. Must be specified for all other modules if region is not used. If not set then the value of the EC2_URL environment variable, if any, is used.
aliases: aws_endpoint_url, endpoint_url
@@ -259,6 +281,30 @@
+ object_ownership
+
+
+ string
+
+
added in 2.0.0
+
+
+
Choices:
+
BucketOwnerPreferred
+
ObjectWriter
+
+
+
+
Allow bucket's ownership controls.
+
BucketOwnerPreferred - Objects uploaded to the bucket change ownership to the bucket owner if the objects are uploaded with the bucket-owner-full-control canned ACL.
+
ObjectWriter - The uploading account will own the object if the object is uploaded with the bucket-owner-full-control canned ACL.
+
This option cannot be used together with a delete_object_ownership definition.
+
Management of bucket ownership controls requires botocore>=1.18.11.
+
+
+
+
+
policy
@@ -268,7 +314,7 @@
-
The JSON policy as a string.
+
The JSON policy as a string. Set to the string "null" to force the absence of a policy.
@@ -283,7 +329,6 @@
-
Uses a boto profile. Only works with boto >= 2.24.0.
Using profile will override aws_access_key, aws_secret_key and security_token and support for passing them at the same time as profile has been deprecated.
aws_access_key, aws_secret_key and security_token will be made mutually exclusive with profile after 2022-06-01.
aliases: aws_profile
@@ -471,7 +516,7 @@
-
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+
AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
If profile is set this parameter is ignored.
Passing the security_token and profile options at the same time has been deprecated and the options will be made mutually exclusive after 2022-06-01.
aliases: aws_security_token, access_token
@@ -527,7 +572,7 @@
-
When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+
When set to "no", SSL certificates will not be validated for communication with the AWS APIs.
@@ -559,8 +604,9 @@
.. note::
- If ``requestPayment``, ``policy``, ``tagging`` or ``versioning`` operations/API aren't implemented by the endpoint, the module doesn't fail if each parameter satisfies the following condition: *requester_pays* is ``False`` and *policy*, *tags*, and *versioning* are ``None``.
- If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence ``AWS_URL`` or ``EC2_URL``, ``AWS_PROFILE`` or ``AWS_DEFAULT_PROFILE``, ``AWS_ACCESS_KEY_ID`` or ``AWS_ACCESS_KEY`` or ``EC2_ACCESS_KEY``, ``AWS_SECRET_ACCESS_KEY`` or ``AWS_SECRET_KEY`` or ``EC2_SECRET_KEY``, ``AWS_SECURITY_TOKEN`` or ``EC2_SECURITY_TOKEN``, ``AWS_REGION`` or ``EC2_REGION``, ``AWS_CA_BUNDLE``
- - Ansible uses the boto configuration file (typically ~/.boto) if no credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
- - ``AWS_REGION`` or ``EC2_REGION`` can be typically be used to specify the AWS region, when required, but this can also be configured in the boto config file
+ - When no credentials are explicitly provided the AWS SDK (boto3) that Ansible uses will fall back to its configuration files (typically ``~/.aws/credentials``). See https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html for more information.
+ - Modules based on the original AWS SDK (boto) may read their default configuration from different files. See https://boto.readthedocs.io/en/latest/boto_config_tut.html for more information.
+ - ``AWS_REGION`` or ``EC2_REGION`` can typically be used to specify the AWS region, when required, but this can also be defined in the configuration files.
@@ -629,7 +675,7 @@
public_access:
block_public_acls: true
ignore_public_acls: true
- ## keys == 'false' can be ommited, undefined keys defaults to 'false'
+ ## keys == 'false' can be omitted, undefined keys default to 'false'
# block_public_policy: false
# restrict_public_buckets: false
@@ -639,6 +685,24 @@
state: present
delete_public_access: true
+ # Create a bucket with object ownership controls set to ObjectWriter
+ - amazon.aws.s3_bucket:
+ name: mys3bucket
+ state: present
+ object_ownership: ObjectWriter
+
+ # Delete ownership controls from the bucket
+ - amazon.aws.s3_bucket:
+ name: mys3bucket
+ state: present
+ delete_object_ownership: true
+
+ # Delete a bucket policy from bucket
+ - amazon.aws.s3_bucket:
+ name: mys3bucket
+ state: present
+ policy: "null"
+
@@ -650,3 +714,4 @@
~~~~~~~
- Rob White (@wimnat)
+- Aubin Bikouo (@abikouo)
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/docsite/extra-docs.yml ansible-5.2.0/ansible_collections/amazon/aws/docs/docsite/extra-docs.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/docsite/extra-docs.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/docsite/extra-docs.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,5 @@
+---
+sections:
+ - title: Scenario Guide
+ toctree:
+ - guide_aws
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/docs/docsite/rst/guide_aws.rst ansible-5.2.0/ansible_collections/amazon/aws/docs/docsite/rst/guide_aws.rst
--- ansible-4.10.0/ansible_collections/amazon/aws/docs/docsite/rst/guide_aws.rst 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/docs/docsite/rst/guide_aws.rst 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,301 @@
+.. _ansible_collections.amazon.aws.docsite.aws_intro:
+
+*************************
+Amazon Web Services Guide
+*************************
+
+The ``amazon.aws`` collection contains a number of modules and plugins for controlling Amazon Web Services (AWS). This guide explains how to use the modules and inventory scripts to automate your AWS resources with Ansible.
+
+.. contents::
+ :local:
+
+Requirements for the AWS modules are minimal.
+
+All of the modules require and are tested against recent versions of botocore and boto3. Starting with the 2.0 AWS collection releases, it is generally the policy of the collections to support the versions of these libraries released in the 12 months prior to the most recent major collection revision. Individual modules may require a more recent library version to support specific features, or may require the boto library; check the module documentation for the minimum required version for each module. You must have the boto3 Python module installed on your control machine. You can install these modules from your OS distribution or using the Python package installer: ``pip install boto3``.
+
+Starting with the 2.0 releases of both collections, Python 2.7 support will end in accordance with AWS' `end of Python 2.7 support `_, and Python 3.6 or greater will be required.
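A quick sanity check on the control node for this interpreter requirement (a generic Python snippet for illustration, not part of the collection):

```python
import sys

# The 2.0 collection releases require Python 3.6 or later on the control node.
if sys.version_info < (3, 6):
    raise SystemExit("amazon.aws 2.0+ requires Python >= 3.6 on the control node")

print("Python", ".".join(map(str, sys.version_info[:3])), "satisfies the requirement")
```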
+
+Whereas classically Ansible will execute tasks in its host loop against multiple remote machines, most cloud-control steps occur on your local machine with reference to the regions to control.
+
+In your playbooks, we'll typically use the following pattern for provisioning steps::
+
+ - hosts: localhost
+ gather_facts: False
+ tasks:
+ - ...
+
+.. _ansible_collections.amazon.aws.docsite.aws_authentication:
+
+Authentication
+``````````````
+
+Authentication with the AWS-related modules is handled by specifying
+your access and secret keys as either environment variables or module arguments.
+
+For environment variables::
+
+ export AWS_ACCESS_KEY_ID='AK123'
+ export AWS_SECRET_ACCESS_KEY='abc123'
+
+For storing these in a vars_file, ideally encrypted with ansible-vault::
+
+ ---
+ aws_access_key: "--REMOVED--"
+ aws_secret_key: "--REMOVED--"
+
+Note that if you store your credentials in a vars_file, you need to refer to them in each AWS module. For example::
+
+ - amazon.aws.ec2_instance:
+ aws_access_key: "{{ aws_access_key }}"
+ aws_secret_key: "{{ aws_secret_key }}"
+ key_name: "example-ssh-key"
+ image_id: "..."
+
+Or they can be specified using "module_defaults" at the top of a playbook::
+
+ # demo_setup.yml
+
+ - hosts: localhost
+ module_defaults:
+ group/aws:
+ aws_access_key: '{{ aws_access_key }}'
+ aws_secret_key: '{{ aws_secret_key }}'
+ region: '{{ region }}'
+ tasks:
+ - amazon.aws.ec2_instance:
+ key_name: "example-ssh-key"
+ image_id: "..."
+
+Credentials can also be accessed from a `Credentials Profile `_::
+
+ - amazon.aws.ec2_instance:
+ aws_profile: default
+ key_name: "example-ssh-key"
+ image_id: "..."
+
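+For reference, a named profile such as ``default`` above is typically defined in the shared credentials file, ``~/.aws/credentials``, in INI format. A minimal sketch, reusing the placeholder keys from earlier::
+
+    [default]
+    aws_access_key_id = AK123
+    aws_secret_access_key = abc123
+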
+.. _ansible_collections.amazon.aws.docsite.aws_provisioning:
+
+Provisioning
+````````````
+
+The ec2_instance module provisions and de-provisions instances within EC2.
+
+An example of creating an instance with a public IP assigned follows.
+
+The ``name`` parameter creates a ``tag:Name`` on the instance. Additional tags can be specified with the ``tags`` parameter::
+
+ # demo_setup.yml
+
+ - hosts: localhost
+ gather_facts: False
+
+ tasks:
+
+ - name: Provision an EC2 instance with a public IP address
+ amazon.aws.ec2_instance:
+ name: Demo
+ key_name: "example-ssh-key"
+ vpc_subnet_id: subnet-5ca1ab1e
+ instance_type: c5.large
+ security_group: default
+ network:
+ assign_public_ip: true
+ image_id: ami-123456
+ tags:
+ Environment: Testing
+ register: result
+
+The ``register`` keyword saves the data about the newly created instance in the variable named ``result``.
+
+From this, we'll use the add_host module to dynamically create a host group consisting of these new instances, which makes it possible to perform configuration actions on the hosts immediately in a subsequent task::
+
+ # demo_setup.yml
+
+ - hosts: localhost
+ gather_facts: False
+
+ tasks:
+
+ - name: Provision an EC2 instance with a public IP address
+ amazon.aws.ec2_instance:
+ name: Demo
+ key_name: "example-ssh-key"
+ vpc_subnet_id: subnet-5ca1ab1e
+ instance_type: c5.large
+ security_group: default
+ network:
+ assign_public_ip: true
+ image_id: ami-123456
+ tags:
+ Environment: Testing
+ register: result
+
+ - name: Add all instance public IPs to host group
+ add_host: hostname={{ item.public_ip }} groups=ec2hosts
+ loop: "{{ result.instances }}"
+
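+Because newly launched instances may still be booting, it can help to wait for SSH to become reachable before the configuration play runs. A minimal sketch using the ``wait_for`` module (the timeout value is arbitrary)::
+
+    - name: Wait for SSH to be available on the new instances
+      wait_for:
+        host: "{{ item.public_ip }}"
+        port: 22
+        timeout: 320
+      loop: "{{ result.instances }}"
+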
+With the host group now created, a second play at the bottom of the same provisioning playbook file might now have some configuration steps::
+
+ # demo_setup.yml
+
+ - name: Provision a set of instances
+ hosts: localhost
+ # ... AS ABOVE ...
+
+ - hosts: ec2hosts
+ name: configuration play
+ user: ec2-user
+ gather_facts: true
+
+ tasks:
+
+ - name: Check NTP service
+ service: name=ntpd state=started
+
+.. _ansible_collections.amazon.aws.docsite.aws_security_groups:
+
+Security Groups
+```````````````
+
+Security groups on AWS are stateful: the response to a request from your instance is allowed to flow in regardless of inbound security group rules, and vice versa.
+If you want to allow traffic only to the AWS S3 service, you need to fetch the current IP ranges of AWS S3 for one region and apply them as an egress rule::
+
+ - name: fetch raw ip ranges for aws s3
+ set_fact:
+ raw_s3_ranges: "{{ lookup('aws_service_ip_ranges', region='eu-central-1', service='S3', wantlist=True) }}"
+
+ - name: prepare list structure for ec2_group module
+ set_fact:
+ s3_ranges: "{{ s3_ranges | default([]) + [{'proto': 'all', 'cidr_ip': item, 'rule_desc': 'S3 Service IP range'}] }}"
+ loop: "{{ raw_s3_ranges }}"
+
+ - name: set S3 IP ranges to egress rules
+ ec2_group:
+ name: aws_s3_ip_ranges
+ description: allow outgoing traffic to aws S3 service
+ region: eu-central-1
+ state: present
+ vpc_id: vpc-123456
+ purge_rules: true
+ purge_rules_egress: true
+ rules: []
+ rules_egress: "{{ s3_ranges }}"
+ tags:
+ Name: aws_s3_ip_ranges
+
+.. _ansible_collections.amazon.aws.docsite.aws_host_inventory:
+
+Host Inventory
+``````````````
+
+Once your nodes are spun up, you'll probably want to talk to them again. With a cloud setup, it's best to not maintain a static list of cloud hostnames
+in text files. Rather, the best way to handle this is to use the aws_ec2 inventory plugin. See :ref:`dynamic_inventory`.
+
+The plugin will also return instances that were created outside of Ansible and allow Ansible to manage them.
+
+.. _ansible_collections.amazon.aws.docsite.aws_tags_and_groups:
+
+Tags And Groups And Variables
+`````````````````````````````
+
+When using the inventory plugin, you can configure extra inventory structure based on the metadata returned by AWS.
+
+For instance, you might use ``keyed_groups`` to create groups from instance tags::
+
+ plugin: aws_ec2
+ keyed_groups:
+ - prefix: tag
+ key: tags
+
+
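+A fuller inventory configuration might also restrict the regions queried and filter on instance state. A sketch, assuming the file is named with the required ``aws_ec2.yml`` suffix (the region and filter values below are examples, not requirements)::
+
+    # demo.aws_ec2.yml
+    plugin: aws_ec2
+    regions:
+      - eu-central-1
+    filters:
+      instance-state-name: running
+    keyed_groups:
+      - prefix: tag
+        key: tags
+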
+You can then target all instances with a "class" tag where the value is "webserver" in a play::
+
+ - hosts: tag_class_webserver
+ tasks:
+ - ping
+
+You can also use these groups with 'group_vars' to set variables that are automatically applied to matching instances. See :ref:`splitting_out_vars`.
+
+.. _ansible_collections.amazon.aws.docsite.aws_pull:
+
+Autoscaling with Ansible Pull
+`````````````````````````````
+
+Amazon Autoscaling features automatically increase or decrease capacity based on load. There are also Ansible modules shown in the cloud documentation that
+can configure autoscaling policy.
+
+When nodes come online, it may not be sufficient to wait for the next cycle of an ansible command to come along and configure that node.
+
+To do this, pre-bake machine images which contain the necessary ``ansible-pull`` invocation. ``ansible-pull`` is a command line tool that fetches a playbook from a git server and runs it locally.
+
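+As a sketch, the baked-in invocation is often run from instance user data at boot; the repository URL and playbook name below are hypothetical::
+
+    #!/bin/bash
+    # Hypothetical EC2 user-data script: pull and apply configuration at boot.
+    ansible-pull -U https://git.example.com/ops/config.git -i localhost, local.yml
+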
+One of the challenges of this approach is that there needs to be a centralized way to store data about the results of pull commands in an autoscaling context.
+For this reason, the autoscaling solution provided below in the next section can be a better approach.
+
+Read :ref:`ansible-pull` for more information on pull-mode playbooks.
+
+.. _ansible_collections.amazon.aws.docsite.aws_autoscale:
+
+Autoscaling with Ansible Tower
+``````````````````````````````
+
+:ref:`ansible_tower` also contains a very nice feature for auto-scaling use cases. In this mode, a simple curl script can call
+a defined URL and the server will "dial out" to the requester and configure an instance that is spinning up. This can be a great way
+to reconfigure ephemeral nodes. See the Tower install and product documentation for more details.
+
+A benefit of using the callback in Tower over pull mode is that job results are still centrally recorded and less information has to be shared
+with remote hosts.
+
+.. _ansible_collections.amazon.aws.docsite.aws_cloudformation_example:
+
+Ansible With (And Versus) CloudFormation
+````````````````````````````````````````
+
+CloudFormation is an Amazon technology for defining a cloud stack as a JSON or YAML document.
+
+In many cases, Ansible modules provide an easier-to-use interface than CloudFormation, without requiring you to define a complex JSON/YAML document.
+This is recommended for most users.
+
+However, for users that have decided to use CloudFormation, there is an Ansible module that can be used to apply a CloudFormation template
+to Amazon.
+
+When using Ansible with CloudFormation, Ansible is typically used with a tool like Packer to build images that CloudFormation then launches,
+or Ansible is invoked through user data once the image comes online, or a combination of the two.
+
+Please see the examples in the Ansible CloudFormation module for more details.
+
+.. _ansible_collections.amazon.aws.docsite.aws_image_build:
+
+AWS Image Building With Ansible
+```````````````````````````````
+
+Many users may want to have images boot to a more complete configuration rather than configuring them entirely after instantiation. To do this,
+one of many programs can be used with Ansible playbooks to define and upload a base image, which will then get its own AMI ID for usage with
+the ec2 module or other Ansible AWS modules such as ec2_asg or the cloudformation module. Possible tools include Packer, aminator, and Ansible's
+ec2_ami module.
+
+Generally speaking, we find that most users use Packer.
+
+See the Packer documentation of the `Ansible local Packer provisioner `_ and `Ansible remote Packer provisioner `_.
+
+If you do not want to adopt Packer at this time, configuring a base-image with Ansible after provisioning (as shown above) is acceptable.
+
+.. _ansible_collections.amazon.aws.docsite.aws_next_steps:
+
+Next Steps: Explore Modules
+```````````````````````````
+
+Ansible ships with lots of modules for configuring a wide array of EC2 services. Browse the "Cloud" category of the module
+documentation for a full list with examples.
+
+.. seealso::
+
+ :ref:`list_of_collections`
+ Browse existing collections, modules, and plugins
+ :ref:`working_with_playbooks`
+ An introduction to playbooks
+ :ref:`playbooks_delegation`
+     Delegation, useful for working with load balancers, clouds, and locally executed steps.
+ `User Mailing List `_
+ Have a question? Stop by the google group!
+ `irc.libera.chat `_
+ #ansible IRC chat channel
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/FILES.json ansible-5.2.0/ansible_collections/amazon/aws/FILES.json
--- ansible-4.10.0/ansible_collections/amazon/aws/FILES.json 2021-09-16 17:32:03.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/FILES.json 2021-11-12 18:17:28.000000000 +0000
@@ -8,4448 +8,5666 @@
"format": 1
},
{
- "name": "tests",
+ "name": "plugins",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/utils",
+ "name": "plugins/doc_fragments",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/utils/shippable",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "plugins/doc_fragments/aws_region.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "074b3f366d8214f956b0aff167e9940e08ab7fc2f697815eff50021069a8b708",
"format": 1
},
{
- "name": "tests/utils/shippable/shippable.sh",
+ "name": "plugins/doc_fragments/aws.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "839b8cdcff0abd70ce9ff698b10f868917d46aecb132b173d8ff5d06c6aef3c1",
+ "chksum_sha256": "1fe863fec67f821d63a1403588f2beeda82d6630d8a265a2101665fa0fc03db8",
"format": 1
},
{
- "name": "tests/utils/shippable/timing.py",
+ "name": "plugins/doc_fragments/__init__.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "ebb7d3553349747ad41d80899ed353e13cf32fcbecbb6566cf36e9d2bc33703e",
+ "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"format": 1
},
{
- "name": "tests/utils/shippable/timing.sh",
+ "name": "plugins/doc_fragments/aws_credentials.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "f3f3cc03a997cdba719b0542fe668fc612451841cbe840ab36865f30aa54a1bd",
+ "chksum_sha256": "5bf58fccfb29994200623e8e2122544477c3e649b1527fd6fb683e3e90b3de15",
"format": 1
},
{
- "name": "tests/utils/shippable/aws.sh",
+ "name": "plugins/doc_fragments/ec2.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "dd953f7e779b9962e76492c389142e03174e84a8115f53e56628e2af9e66b818",
+ "chksum_sha256": "683daff2b7de94f68d574e05ab3d5405a9d9fc672910f412b104cb326f648c11",
+ "format": 1
+ },
+ {
+ "name": "plugins/lookup",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "tests/utils/shippable/units.sh",
+ "name": "plugins/lookup/__init__.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "f7cb5eb0d65c282c5adfea998108add25fb65cf613dbf32e08c815b21a6bc891",
+ "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"format": 1
},
{
- "name": "tests/utils/shippable/check_matrix.py",
+ "name": "plugins/lookup/aws_ssm.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "2d356ce0e8072dfc57771c71c5a3f7b37945b2bf198f764eebb5d98f069f094a",
+ "chksum_sha256": "3ff7f4dff5009019eb3e887d2e098af57f970ccc7a32a5f8c79335ee12a86146",
"format": 1
},
{
- "name": "tests/utils/shippable/sanity.sh",
+ "name": "plugins/lookup/aws_secret.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "d978d5d0ac8b6e266d493fc37b2ac6c153bdbbef64de0de049a43d680778a81a",
+ "chksum_sha256": "2b213c90a52012a023b4ca6214e210f68e7556295c91295e16cfe32765d570a8",
"format": 1
},
{
- "name": "tests/unit",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "plugins/lookup/aws_service_ip_ranges.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "65895337555024b1b052037fe4128928673527257e796a9df573a674ac03d924",
"format": 1
},
{
- "name": "tests/unit/compat",
+ "name": "plugins/lookup/aws_account_attribute.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "2c293bd56db842a3c28e96a4c9302bf857fcafea22400864a068573e5f7bffca",
+ "format": 1
+ },
+ {
+ "name": "plugins/callback",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/unit/compat/__init__.py",
+ "name": "plugins/callback/__init__.py",
"ftype": "file",
"chksum_type": "sha256",
"chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"format": 1
},
{
- "name": "tests/unit/compat/unittest.py",
+ "name": "plugins/callback/aws_resource_actions.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "5401a046e5ce71fa19b6d905abd0f9bdf816c0c635f7bdda6730b3ef06e67096",
+ "chksum_sha256": "fb17984f9f244aba88f721c2df47ac6820e83992cde662e04bf4f2eab1a60629",
"format": 1
},
{
- "name": "tests/unit/compat/mock.py",
+ "name": "plugins/action",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "plugins/action/aws_s3.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "0af958450cf6de3fbafe94b1111eae8ba5a8dbe1d785ffbb9df81f26e4946d99",
+ "chksum_sha256": "348e233ca01687aa88c78606f8721f8be738f26163860cfa14dfa80eb10673a7",
"format": 1
},
{
- "name": "tests/unit/compat/builtins.py",
+ "name": "plugins/action/__init__.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "7163336aa20ba9db9643835a38c25097c8a01d558ca40869b2b4c82af25a009c",
+ "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"format": 1
},
{
- "name": "tests/unit/mock",
+ "name": "plugins/module_utils",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/unit/mock/loader.py",
+ "name": "plugins/module_utils/acm.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "cfe3480f0eae6d3723ee62d01d00a0e9f58fcdc082ea1d8e4836157c56d4fa95",
+ "chksum_sha256": "73614c7c2d701bafb3ae46f4c819cf4f3888efa92b39cee71e619beba24f446d",
"format": 1
},
{
- "name": "tests/unit/mock/vault_helper.py",
+ "name": "plugins/module_utils/waiters.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "4535613601c419f7d20f0c21e638dabccf69b4a7fac99d5f6f9b81d1519dafd6",
+ "chksum_sha256": "18efe3f78acc185e49eb92d4e4150a0090ec6d11e0668a6720410562bfe52d91",
"format": 1
},
{
- "name": "tests/unit/mock/path.py",
+ "name": "plugins/module_utils/policy.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "c44806a59e879ac95330d058f5ea6177d0db856f6e8d222f2ac70e9df31e5e12",
+ "chksum_sha256": "7e82eae2df8a56b52449688888c09c8f9a5f70b1d0145bc820adb5bf6343bdb0",
"format": 1
},
{
- "name": "tests/unit/mock/__init__.py",
+ "name": "plugins/module_utils/iam.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "chksum_sha256": "0d0f188188df4a059e3f855c2cab85a6eb5d6e908c8e8c18410b5853b48d1f86",
"format": 1
},
{
- "name": "tests/unit/mock/procenv.py",
+ "name": "plugins/module_utils/rds.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "3d53f1c9e04f808df10e62a3eddb460cc8251d03a2f89c0cbd907d09b5c785d9",
+ "chksum_sha256": "6664907704c27f96f29cfc5b92407a8491be46f56eeda8459f774ba6883b8b70",
"format": 1
},
{
- "name": "tests/unit/mock/yaml_helper.py",
+ "name": "plugins/module_utils/cloud.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "fada9f3506c951e21c60c2a0e68d3cdf3cadd71c8858b2d14a55c4b778f10983",
+ "chksum_sha256": "d50313f16fd96e2376dd0e9d9cc88f258541a2461f61a5627777ee58ad363957",
"format": 1
},
{
- "name": "tests/unit/utils",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "plugins/module_utils/waf.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "643bc1da71db0c08de7e13b645330355961c20f72b021e8e9c1e682c5530674a",
"format": 1
},
{
- "name": "tests/unit/utils/__init__.py",
+ "name": "plugins/module_utils/elb_utils.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "chksum_sha256": "fdb692e5d99229f7bbbf7b7a8db6069c83a149d441124f013fad973b51fa036f",
"format": 1
},
{
- "name": "tests/unit/utils/amazon_placebo_fixtures.py",
+ "name": "plugins/module_utils/tagging.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "64958b54e3404669d340a120f6b2c7ae79f323e6c930289514eba4569d1586c1",
+ "chksum_sha256": "82f22084aaa80695a7f895980eb4752e588414c5b73787cbaea599ef6d6059ca",
"format": 1
},
{
- "name": "tests/unit/requirements.txt",
+ "name": "plugins/module_utils/urls.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "653eef28acdaac13c5b0d35736ad45adde454ecfce69ca4d887bb4783c9052af",
+ "chksum_sha256": "089b532522cfff202a7da5cedb5d6e2e46932bbd273094b6c69504b4b4e21262",
"format": 1
},
{
- "name": "tests/unit/__init__.py",
+ "name": "plugins/module_utils/batch.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "chksum_sha256": "1ee897b11875f13f8dd12d245d0c4d680a95830886a489990137d0af2fb5d0db",
"format": 1
},
{
- "name": "tests/unit/plugins",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "plugins/module_utils/direct_connect.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "57e6f1bbf32388e3864419baa48bc57d509f56dccbb8bbec0787bcdc4c54dcb6",
"format": 1
},
{
- "name": "tests/unit/plugins/inventory",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "plugins/module_utils/core.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "4153b7978239a3d1a92a1429f13decadb8c73830495a3b1b9067059ff3c284cb",
"format": 1
},
{
- "name": "tests/unit/plugins/inventory/__init__.py",
+ "name": "plugins/module_utils/__init__.py",
"ftype": "file",
"chksum_type": "sha256",
"chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"format": 1
},
{
- "name": "tests/unit/plugins/inventory/test_aws_ec2.py",
+ "name": "plugins/module_utils/elbv2.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "54d45f41ffc934e2352cd4f4d91f34a9abda9e5da9f1bbef3319e001f35878b1",
+ "chksum_sha256": "35e920ec198c3f398ec331a6c112246404bfdcd697c2040d9ba70e8b944582d3",
"format": 1
},
{
- "name": "tests/unit/plugins/__init__.py",
+ "name": "plugins/module_utils/cloudfront_facts.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
- "format": 1
- },
- {
- "name": "tests/unit/plugins/lookup",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "chksum_sha256": "99f80f9bf04ccd268a4819e93121579e43eeea0c08240a7a5b8ab0f91a9bda26",
"format": 1
},
{
- "name": "tests/unit/plugins/lookup/test_aws_ssm.py",
+ "name": "plugins/module_utils/ec2.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "86c742ca6e162e77e984356eb5be049ada1eeb47e5bbf281b4d86d517304d0e7",
+ "chksum_sha256": "1dbb76ff09fef7c53da826f541e43f88a2b4a0c994a14cb6d9741435792b78b3",
"format": 1
},
{
- "name": "tests/unit/plugins/lookup/__init__.py",
+ "name": "plugins/module_utils/s3.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "chksum_sha256": "7dc7a7e74319083f771e70cdab26f97f2e96f340f41804ab4b7b0c2d95000db5",
"format": 1
},
{
- "name": "tests/unit/plugins/lookup/fixtures",
+ "name": "plugins/modules",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/unit/plugins/lookup/fixtures/avi.json",
+ "name": "plugins/modules/ec2_vpc_subnet.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "3739de410d134591fada61f62053bfab6fcbd5c80fe2267faa7971f9fe36570d",
+ "chksum_sha256": "3750e6b8ecd98d551848ef7fbb825efef9f56bdee67892719d8b3cf3459ee080",
"format": 1
},
{
- "name": "tests/unit/plugins/lookup/test_aws_secret.py",
+ "name": "plugins/modules/ec2_vpc_net_facts.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "cc4e751181bd9dd42c123600f9b54372a96d6848ccb2e4c1f73d30cb1cfc0278",
+ "chksum_sha256": "41dc709ba81c1b56d1756e1de3461bffe12a7e005c62080b4a690a9da249ac09",
"format": 1
},
{
- "name": "tests/unit/plugins/modules",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "plugins/modules/ec2_instance_facts.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "16262b23adc4f468c68bc2888c2668c3addd41b9ca862e26a9933b7017bcb6b8",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/__init__.py",
+ "name": "plugins/modules/cloudformation.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "chksum_sha256": "76a523d2bb54a0b4a7c13d84b5e17f54b6ca39e3d4d8aaa01a88028abb3faf6a",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/fixtures",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "plugins/modules/ec2_eni_info.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "eb980c8274f365b8cc62aa8669c720aafa16b89ef900e827df70eba997c42441",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/fixtures/certs",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "plugins/modules/ec2_ami_facts.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "7f02e460dd88d486bbbcb4c3a79b459299b930cf5ca9a6d6440aaa9cde0b2cd3",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/fixtures/certs/a.pem",
+ "name": "plugins/modules/aws_caller_facts.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "ef0266ee8cf74a85694bf3ce1495260913b5ca07189b0891bbfc8d4c25b374ea",
+ "chksum_sha256": "0da5daa6643c6bc513736852ac0d9811e11e1e33422ea7b6df81a95c3d53ffc6",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/fixtures/certs/chain-1.0.cert",
+ "name": "plugins/modules/ec2_group_facts.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "428e852fcbe67bbdbb2d36fb35bef4b2fb22808587212e19f3225206ceb21c12",
+ "chksum_sha256": "a524b178e690da7f227ee309d2b51c43d9315568b02b3b5ad7c8b83f002ab91a",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/fixtures/certs/chain-1.1.cert",
+ "name": "plugins/modules/ec2_snapshot.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "0325c21e49992708528ebf66162c18e1e1eb2a0837c6d802b1cf3bde73ec06bc",
+ "chksum_sha256": "0bdb0d161a2ed0b0259f7e3fb2ef61305599083d759f8d5a226a5268e358fa52",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/fixtures/certs/simple-chain-b.cert",
+ "name": "plugins/modules/ec2_vpc_nat_gateway_info.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "9e4b01f50b09f45fcb7813e7d262a4e201786f0ecd76b45708abe55911b88fd2",
+ "chksum_sha256": "cd0f4398f4a07726b1392c804628d1538ef2a0a5538317b6cace0800976eea64",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/fixtures/certs/chain-4.cert",
+ "name": "plugins/modules/ec2_eni.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "37fb85917db1cd90b5881c8d3d3a9d51ae7c9b904020d0ffbf0734bcf11bb666",
+ "chksum_sha256": "37bf0b1b9d67df75543d21cbadbf5a2c46ddf0f2142c4b85966fe64d0265fd9f",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/fixtures/certs/b.pem",
+ "name": "plugins/modules/aws_az_facts.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "2937cb7102c4d4902b09aada2731c1b0165e331dbfde9990644c4c3ee1544b21",
+ "chksum_sha256": "b542a76aee3c12031c57b918fae9b74e42a4ab6fb81752826b26c0af5b53dc21",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/fixtures/certs/chain-1.2.cert",
+ "name": "plugins/modules/aws_s3.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "d9e3dfae7a19d402a8de1a2b65fcc49c43ff489946e8ca9e96efa48783e26546",
+ "chksum_sha256": "0b021804ad8cf7419e2119740d49a3f6891838f624b5b48c44281cfa57cece66",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/fixtures/certs/simple-chain-a.cert",
+ "name": "plugins/modules/elb_classic_lb.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "9e4b01f50b09f45fcb7813e7d262a4e201786f0ecd76b45708abe55911b88fd2",
+ "chksum_sha256": "e2e7ad38e3a56d8298eba432aa83a1f76b2d3895228f1a634c030887cac4bc65",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/fixtures/certs/chain-1.3.cert",
+ "name": "plugins/modules/ec2_vpc_nat_gateway_facts.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "ef1018e15bb9fad1e7a4f15aa6191e80042fc7fc08ef4bec3e115d96a9924b98",
+ "chksum_sha256": "cd0f4398f4a07726b1392c804628d1538ef2a0a5538317b6cace0800976eea64",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/fixtures/certs/chain-1.4.cert",
+ "name": "plugins/modules/ec2_instance.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "4e15c84bcf1024f5bb0b2940844fdc4ed97ba90ef7991b513d1659b43a0e7783",
+ "chksum_sha256": "e4a09af4e8d90d380f2732a6aa08123c95d6e170a69ec6a96cfff4f7f8e7e808",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/fixtures/__init__.py",
+ "name": "plugins/modules/ec2_vpc_route_table_info.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "chksum_sha256": "b8e2b59ec8ea766d14aed9a5f7f3a486807c59d9c4aba69cc09786f54710a69c",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/fixtures/thezip.zip",
+ "name": "plugins/modules/cloudformation_facts.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "02a319fb1a6d33b682f555eefb98f2a75b2a3be363e1614c373431b4f30fda7f",
+ "chksum_sha256": "2a95e1aeecbd81e2adc66dacb06e89aaa4385bd09595df6edd85a9566c69df33",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/utils.py",
+ "name": "plugins/modules/ec2_snapshot_facts.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "b68f9ac9c8f02f1e87b0294a125adb102c718f6e3e5f856ec3401b2b890003cf",
+ "chksum_sha256": "eda83b3f26d3d781a1f34ddcaf04b6916b84ddfd6fad6ecb0b3c227ff2c8e95a",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "plugins/modules/ec2_vpc_endpoint.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "2848c7311a9ffb2af520d869341eedc50a09f79e2aba5768e07588f9618ef89b",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/__init__.py",
+ "name": "plugins/modules/ec2_vpc_dhcp_option.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "chksum_sha256": "03e28f9153cedfac3e73c4eb588efd9ba18159567ff1382010d43089505e2040",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/.gitkeep",
+ "name": "plugins/modules/ec2_vpc_subnet_facts.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "chksum_sha256": "452f2c66fe340caf8dc4f008e869132432d0605412e0bfaa744353fcc311dbf5",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "plugins/modules/ec2_spot_instance_info.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "3dc38d461170a22c0da91b42a4083052fedb84964ad7444d81b367a4e78e472a",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/invalid_template_json",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "plugins/modules/ec2_ami_info.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "7f02e460dd88d486bbbcb4c3a79b459299b930cf5ca9a6d6440aaa9cde0b2cd3",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/invalid_template_json/cloudformation.CreateStack_1.json",
+ "name": "plugins/modules/ec2_spot_instance.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "1cb3d77662d35b7703f65278ffdca78e6eb520e96fb3807d39ea3aa02086c1b7",
+ "chksum_sha256": "287196b9444ea44a7a77f4e3594a9513e74ae69bbe3531b9e4145e159b29cedd",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/__init__.py",
+ "name": "plugins/modules/aws_az_info.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "chksum_sha256": "b542a76aee3c12031c57b918fae9b74e42a4ab6fb81752826b26c0af5b53dc21",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_do_nothing",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "plugins/modules/s3_bucket.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "269b69731bf81ac8b5f14910d10729fc382ff3ad6e14b29864f304536ad0c48c",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_do_nothing/cloudformation.CreateStack_1.json",
+ "name": "plugins/modules/ec2_group_info.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "2864dab59c7432ad2ae594e121ee581cface7b130025cd88d0cb4aead4215168",
+ "chksum_sha256": "a524b178e690da7f227ee309d2b51c43d9315568b02b3b5ad7c8b83f002ab91a",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_do_nothing/cloudformation.DescribeStackEvents_1.json",
+ "name": "plugins/modules/ec2_vpc_dhcp_option_facts.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "3d1711eba6a7c18f0ed7e00a1602dcd0dde519205fe6afc42446e1f222b9fe48",
+ "chksum_sha256": "81cb27195f91c1af7dde99af40dade9bc4f6e8faa98a946a2de87f10c1ecee8d",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_do_nothing/cloudformation.DescribeStackEvents_2.json",
+ "name": "plugins/modules/ec2_vpc_dhcp_option_info.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "026ca2db13f88bfb4b469d5cd3c2ad5cf6635305fdbbab11e9d5d1d3330b26c2",
+ "chksum_sha256": "81cb27195f91c1af7dde99af40dade9bc4f6e8faa98a946a2de87f10c1ecee8d",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_do_nothing/cloudformation.DescribeStacks_2.json",
+ "name": "plugins/modules/ec2_vpc_nat_gateway.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "6acc11fdfc1929b45d589d8c77c2f9fae80d48e840d0e9cf630e362d6b288d4a",
+ "chksum_sha256": "c906b27dd8ddb21f74cb714566e2c9cb0167da0fee99b4393c1b9e0dab1612a1",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_do_nothing/cloudformation.DeleteStack_1.json",
+ "name": "plugins/modules/ec2_key.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "c09d7d26c96cb5b734e0198b88b00a13fc0d54d65b444278497c17c0f877fa29",
+ "chksum_sha256": "28aee25a9e9c47a7eb1d38fb88e6cf11693cd1810d08850c8e32778589507703",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_do_nothing/cloudformation.DescribeStacks_1.json",
+ "name": "plugins/modules/ec2_snapshot_info.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "b8f4dc01c750d860f317a98f598bf3acd7edfbc970054b2793013dfcad61c82f",
+ "chksum_sha256": "eda83b3f26d3d781a1f34ddcaf04b6916b84ddfd6fad6ecb0b3c227ff2c8e95a",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "plugins/modules/ec2_vol_info.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "b9b5342e3514c193858387ba5e5cab8d6c0ef8fff1f4e6d7fc3af6a2cd066182",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStacks_7.json",
+ "name": "plugins/modules/ec2_tag.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "d3bec7dd62d084a3d115ea7f05a34052625b80d56839022b9ebcee2583053412",
+ "chksum_sha256": "b6d3bf3e17e0d383684990e3d58de7104920dc16ae17e59a83c555af294ced7a",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStacks_6.json",
+ "name": "plugins/modules/ec2_vpc_net_info.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "fa726d3ba3ac078180b857893f2c5aec60526a8d60323a8cc06121a4bacdf982",
+ "chksum_sha256": "41dc709ba81c1b56d1756e1de3461bffe12a7e005c62080b4a690a9da249ac09",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStacks_4.json",
+ "name": "plugins/modules/ec2_vpc_subnet_info.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "148d631b9e7bf07824a845880565c98a102dd0864a40328320db40f545ee7834",
+ "chksum_sha256": "452f2c66fe340caf8dc4f008e869132432d0605412e0bfaa744353fcc311dbf5",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.CreateStack_1.json",
+ "name": "plugins/modules/__init__.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "4b83e42429a361b2be7b2524340d20264b43f5b0e4cb44fe5bafc3670e9f9d03",
+ "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStackEvents_1.json",
+ "name": "plugins/modules/ec2_group.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "bf9e2fefb8c13c2b5040c8b502f6aa799f6da6b69c1a8e48e4e870536222df8b",
+ "chksum_sha256": "6a533b53ae8aa5ad5ebce1b4a97371a73e37e744d891090536f4c058397c4dd0",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStackEvents_5.json",
+ "name": "plugins/modules/aws_caller_info.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e6f4a74e04f58505d1132c6981fffc1f24e79cbad86c69883677b3cb1703df5d",
+ "chksum_sha256": "0da5daa6643c6bc513736852ac0d9811e11e1e33422ea7b6df81a95c3d53ffc6",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStackEvents_4.json",
+ "name": "plugins/modules/ec2_vpc_net.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "7edd15131c553f2bff19840a66cd2498cf98c4f93bd8164a51ab3eb81a619ba9",
+ "chksum_sha256": "28b602467a594008109665c1a8f6365af589daa4adbe1f68d77794d192188395",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStackEvents_2.json",
+ "name": "plugins/modules/ec2_vpc_igw_facts.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "f4d8ae33d4fe9f0aaa4c6c744174b1ad849d5881154fc8a5eb32fd8ee07566e0",
+ "chksum_sha256": "91b2fad2e82cbf7733bf9ffbffa3915773e9bf0cdbdc520813c1300b8bf9047a",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStackEvents_6.json",
+ "name": "plugins/modules/ec2_vpc_endpoint_facts.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "6f57b0469a084bb8891bdd14610b2dba1ef3baaab8436cff1413065e276012db",
+ "chksum_sha256": "97fdf37691709a64a19d15f8678c3b18fca75c9df7a0bbd9a4a1e6a0baa343eb",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStackEvents_3.json",
+ "name": "plugins/modules/ec2_vpc_endpoint_info.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "d75b82445e89303b2b4af1ef3161d4e315c6d02014c6df00165a9c526fc9bc56",
+ "chksum_sha256": "97fdf37691709a64a19d15f8678c3b18fca75c9df7a0bbd9a4a1e6a0baa343eb",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStacks_2.json",
+ "name": "plugins/modules/ec2_vpc_endpoint_service_info.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e53c53e08397b8fca0f8e6a69a5254bb092b4f403f0fec0d9bff4352c3cc1192",
+ "chksum_sha256": "0e69676bf667a2972f3d1970982d86f73d67edfab6fae479b7552396258fe044",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStackEvents_7.json",
+ "name": "plugins/modules/ec2_instance_info.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "ead6f1c137dfc1628237502c6a955d3338770ff85f1715027f022e7773ed9992",
+ "chksum_sha256": "16262b23adc4f468c68bc2888c2668c3addd41b9ca862e26a9933b7017bcb6b8",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DeleteStack_1.json",
+ "name": "plugins/modules/ec2_vpc_route_table_facts.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "1659b6d17d4004dbeba28d635a752c4601c08c0f99a0d8c10f18487e0a215d8e",
+ "chksum_sha256": "b8e2b59ec8ea766d14aed9a5f7f3a486807c59d9c4aba69cc09786f54710a69c",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStacks_3.json",
+ "name": "plugins/modules/ec2_metadata_facts.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "05ac9850aa91e5ed4753d901e9bf0641c08c7be9148b633681cea76c94747fc8",
+ "chksum_sha256": "ad9962ce73e3d20d429327cd43b941c476358dbf2cf255aa7c4c4d139ec43659",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStacks_1.json",
+ "name": "plugins/modules/ec2_vpc_route_table.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "41999395cf5a12be8eccc991675c44fe12b20433ed7cc7ca541f568b377c7a33",
+ "chksum_sha256": "64de81e214a25f171d3de6299ef89d1267532af44e5910b2451d4ba9d4e7ae5b",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStacks_5.json",
+ "name": "plugins/modules/ec2_vpc_igw.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "c9ab3c3b4d2e19ca6f764f2e8288dea1e52157dc1d49a319717bd65a3cc770e1",
+ "chksum_sha256": "535fc700027f1fc283525accc114cfc07a76ef3f167252b3576ffa90b1bee97c",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_rollback",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "plugins/modules/cloudformation_info.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "2a95e1aeecbd81e2adc66dacb06e89aaa4385bd09595df6edd85a9566c69df33",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_rollback/cloudformation.CreateStack_1.json",
+ "name": "plugins/modules/ec2_vpc_igw_info.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "985c5ada32ac440bc971b553e75cb8516c52b9e78b50e6750d4d92ab2e4a9634",
+ "chksum_sha256": "91b2fad2e82cbf7733bf9ffbffa3915773e9bf0cdbdc520813c1300b8bf9047a",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_rollback/cloudformation.DescribeStackEvents_1.json",
+ "name": "plugins/modules/ec2_tag_info.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "0e2832a3c70031ba07c44b0f8b291a04251052c22f764110bf0cd034d406bfbd",
+ "chksum_sha256": "20cdef74c1710b5f6d72d93768b2c683a281588fea36c4df389386de45aea7bb",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_rollback/cloudformation.DescribeStackEvents_2.json",
+ "name": "plugins/modules/ec2_vol_facts.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "fe29fabc6f34c58976b23132558a2024af53e655f825cd8e5d1b2f39cc89ddcd",
+ "chksum_sha256": "b9b5342e3514c193858387ba5e5cab8d6c0ef8fff1f4e6d7fc3af6a2cd066182",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_rollback/cloudformation.DescribeStackEvents_3.json",
+ "name": "plugins/modules/ec2.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "dfc55200a0f4d01d94845448b7c67f175cdf56e49df4bf9305525e7ffe543c64",
+ "chksum_sha256": "4a10e2de0c286260875e80575c7ebf22262960e528aae72fe5e6dd01a605126a",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_rollback/cloudformation.DescribeStacks_2.json",
+ "name": "plugins/modules/ec2_ami.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "6d677c34f0715af2049abef7a479d1362760a0c089ff741d9ac0beed56849251",
+ "chksum_sha256": "04e7c0ab65637f1b5d332b0ce39b78a53d1c2ccb0654245bc3da1de7a8a4d724",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_rollback/cloudformation.DeleteStack_1.json",
+ "name": "plugins/modules/ec2_eni_facts.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "d1a0160fbde4f68c768aaf73182e2369a95721f2bb2e7ab5e9ee42016747dfa7",
+ "chksum_sha256": "eb980c8274f365b8cc62aa8669c720aafa16b89ef900e827df70eba997c42441",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_rollback/cloudformation.DescribeStacks_3.json",
+ "name": "plugins/modules/ec2_vol.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "afe46889f0ec4537b13694f164343440b1fcb0334c539a5a7ec895d36fcf7953",
+ "chksum_sha256": "851c78e4662ba35f2af51926351e96db8b3093ff517e46daee80593451cf76bd",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_rollback/cloudformation.DescribeStacks_1.json",
+ "name": "plugins/__init__.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "91a1e065a4854be515095aba447d6a011bb3bac6f8d5b0e3a9081f74ef873096",
+ "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/delete_nonexistent_stack",
+ "name": "plugins/inventory",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/delete_nonexistent_stack/cloudformation.DescribeStackEvents_1.json",
+ "name": "plugins/inventory/aws_ec2.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "c585064211547fe7d0e560cabf12512ee49ca2bbc8622c3a615333aec1eb3dbb",
+ "chksum_sha256": "ccda00ddda1e5d90fc4683d77d34bae56f6f8ca78ee31b3666512c8f558468a2",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/delete_nonexistent_stack/cloudformation.DescribeStackEvents_2.json",
+ "name": "plugins/inventory/__init__.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "a58899e48f620454d1a1c59a261dec5f527970ae4b68f60a2e7bccef29ab5df8",
+ "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/delete_nonexistent_stack/cloudformation.DescribeStacks_1.json",
+ "name": "plugins/inventory/aws_rds.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "15ea45877e966ada21f276359194aea32061bbb5fbf9269782518ff9c94fecc7",
+ "chksum_sha256": "3a3b9c84b1a5cd807a9b80b89c3933accb5be4657d34cd260d2304b4dd965146",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/get_nonexistent_stack",
+ "name": "CHANGELOG.rst",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "485f40e437ff81d1f2ee7fd3a9dd2ebe971cf0bd34009d3b3b998363b238a188",
+ "format": 1
+ },
+ {
+ "name": ".github",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/get_nonexistent_stack/cloudformation.DescribeStacks_1.json",
+ "name": ".github/settings.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "362d4ebe6fb6b538c0f74a6326a7697d6d129a77c3bfffedc24a5cac14b20e5a",
+ "chksum_sha256": "cb31353134cff7d91b546a03cc6fec7caaf0dba62079ea66776e2994461e6c7b",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_delete",
+ "name": ".github/BOTMETA.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "315b4d80327704d571204b7d4c71fa87148ed3b661beedd20eae9d5cdcf1bd2b",
+ "format": 1
+ },
+ {
+ "name": ".github/ISSUE_TEMPLATE",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_delete/cloudformation.DescribeStacks_4.json",
+ "name": ".github/ISSUE_TEMPLATE/bug_report.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "7163b875588f3e98a210fb5291149ec0f6c83213a533086ad8f37e2f9dfa012f",
+ "chksum_sha256": "eb7804f39d220f7aa9841b068e873ca751373cbe0a361c68c887c492aee9052d",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_delete/cloudformation.CreateStack_1.json",
+ "name": ".github/ISSUE_TEMPLATE/ci_report.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "69ca28e411219d5afe76ad868d648f072fbbd2047223aed04f51c451a901dcc7",
+ "chksum_sha256": "9a0d3d78e4f98fd54f3e11c603d039cd4b42619bf4b077ae13ee8ec9bb51240b",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_delete/cloudformation.DescribeStackEvents_1.json",
+ "name": ".github/ISSUE_TEMPLATE/config.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "d18b0ee2d7aab11783d7ddca1bd7b822775a2e87286cae87a8bb37a25c8dbd22",
+ "chksum_sha256": "2e5f08c57601d637ec507daec616f993993d16f51892ca62214932b4fad0dcd9",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_delete/cloudformation.DescribeStackEvents_5.json",
+ "name": ".github/ISSUE_TEMPLATE/documentation_report.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "b829b3d4436f35e812bdb84da99c1d31a1c9b8be476a4b1ecab3587f7ee0f6e9",
+ "chksum_sha256": "931b2c7f9865f5e3f9ae992daea9d2957290bd2ec63ab60f9825886091a0847e",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_delete/cloudformation.DescribeStackEvents_4.json",
+ "name": ".github/ISSUE_TEMPLATE/feature_request.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "18bbc5347bdc51e636840dda8020e4fe198d144d2d7bdfb5b800fbbf9521b551",
+ "chksum_sha256": "ee94dc240c8dffe2a54a9a2ae56c1db91912b71f25445c92cb6f0fee3b484cac",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_delete/cloudformation.DescribeStackEvents_2.json",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "450c976b530fffd60e39b84a237375f46fb82bb8d09ec77a38d5ac3b87c59e18",
+ "name": "docs",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_delete/cloudformation.DescribeStackEvents_3.json",
+ "name": "docs/amazon.aws.ec2_vpc_net_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "cc3383f46239477355a56db754bdaa2185283e10cf6e9a7bfeb1813c4488afd3",
+ "chksum_sha256": "062cb7703b3af61f83366e37ffb30de1150c695f0f23133f98bd41df6a28fa42",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_delete/cloudformation.DescribeStacks_2.json",
+ "name": "docs/amazon.aws.s3_bucket_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "8ce2c0c2869f52248ea296808b1f10cf0ee3491c18784c9270c8bd55087a8250",
+ "chksum_sha256": "6d0870bdd8e748cfe2a66b5798b40034413a1079958143d635ec0b44802d62ed",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_delete/cloudformation.DescribeStacks_3.json",
+ "name": "docs/amazon.aws.aws_s3_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "f197d5a0c7cb66d160864e359e2a62c857af76c1d1b0530180ea2e35fdb20efe",
+ "chksum_sha256": "8bdabe960c1c594623a10f05790afb77a441d80ac511925ec1d4eac31002e398",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_delete/cloudformation.DescribeStacks_1.json",
+ "name": "docs/amazon.aws.ec2_instance_info_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "0cfa4607aa88d41fa7229383460169b9fc76c3bba6593f82d320f71c6c866325",
+ "chksum_sha256": "f64339afb52b330bea3fdaadfebc9514a53acb29f2c9776a23b6150b12e1d61e",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_delete/cloudformation.DescribeStacks_5.json",
+ "name": "docs/amazon.aws.ec2_spot_instance_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "3a2720e847e9e878deab88eaa919c0e14e97210f581138c6341ff97a85da1b38",
- "format": 1
- },
- {
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "chksum_sha256": "9cf744c1a84c0e4122f448d806f94e33ed2596f5ae91bbbc617412bb5e902353",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStacks_7.json",
+ "name": "docs/amazon.aws.aws_ssm_lookup.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "18a0d86b4fe1d679715ab099b8413d22e6a47ff960c876525ef3dd79e77d18f6",
+ "chksum_sha256": "d3acf340f40828ab0cfb685ece26a99c32cb4e24a7fa4b98bcf70655adc588f9",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStacks_6.json",
+ "name": "docs/amazon.aws.cloudformation_info_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "938495a4a09e83ede7e4c3a282cb93b1de0dd10435e4f670c301ab4ab4bc63e6",
+ "chksum_sha256": "3f88c2a590934b4ea9dc2958271fda8c0da61624172de2f7e4437ba7e576928a",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStacks_4.json",
+ "name": "docs/amazon.aws.ec2_vpc_endpoint_info_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "98875bdf3813bbeeb89c537778974a65f6365644de28431a35412753124848fa",
+ "chksum_sha256": "c0ebfb0d140bc53d09b1b248cf3805c7fb7b942627f1cf38413e4ca91c44f51f",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.CreateStack_1.json",
+ "name": "docs/amazon.aws.ec2_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "adf858c3c40416e8bc9273ea17d578448c6497841cd05ae48616f49d0a44d723",
+ "chksum_sha256": "04864e88974078e264e4e20abecdc0ba389967e606939e9c8996c1c0a5152390",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStackEvents_1.json",
+ "name": "docs/amazon.aws.ec2_vpc_subnet_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "790f9822331226175639d3e8c2645cfca0152f1e0fe24c82ab715e499ea070ac",
+ "chksum_sha256": "fa7d69893af2e91bf9909cb83751dee7124e3b2994304d226b3692894820af09",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStackEvents_5.json",
+ "name": "docs/amazon.aws.ec2_ami_info_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "a016652b1a0138353843f04780fae13e60226a22c4093d200bc07c8a89d75d44",
+ "chksum_sha256": "fe023296b0a5ec7b542977791ed8c7cbdc67ed2a468495006a4b9f2798681222",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStackEvents_4.json",
+ "name": "docs/amazon.aws.ec2_tag_info_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "be7d4745aa48792bbb544043808428f39dd75a1dd0f75d928d2e7626d22ed762",
+ "chksum_sha256": "1debed02209023170254006d5425ce40ba65a97378843373db480a244e22f61c",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStackEvents_2.json",
+ "name": "docs/amazon.aws.ec2_snapshot_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "664f1be3c9bb37661bcccba1de32aa3de0fdb08edcf8b276d726acfecc29baad",
+ "chksum_sha256": "2dd386c75e2a87cdcbc6ecefff482fac9b1d0c79ad5fbac2bea22473b027f891",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStackEvents_6.json",
+ "name": "docs/amazon.aws.aws_rds_inventory.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "0306607b12dcf8490f9e5fdca401eb9bb39e3f5507327f87996b804c825a50c5",
+ "chksum_sha256": "5ddda65c6757e4076e073695ecee86d90b55478eb53663d27b9824b35bb147c4",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStackEvents_3.json",
+ "name": "docs/amazon.aws.ec2_tag_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e8967f3d6bfc91be380066e1a70070f0a33a239c9548b02c44c92ad550741cdc",
+ "chksum_sha256": "40a5b391474fb00d1371ca89edc9705b861c1949d7fc2ddaebabd5f8f76c896c",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStacks_2.json",
+ "name": "docs/amazon.aws.ec2_instance_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "b3fcc844d47ebcb9a759b328b8b513245bc2f9e6feded2b42806301d878d7bff",
+ "chksum_sha256": "5c762a9316b440ae3deb3ac5cea89271b83175d0fee31a54b051af0c1b5548bb",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStackEvents_7.json",
+ "name": "docs/amazon.aws.aws_secret_lookup.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "49426404c58cc23230c46a32d193591ee51bb270486618bb5f76bf8b1cd63d86",
+ "chksum_sha256": "4f58b0f40459e3379d47ee0bfab0e5f86d1eef694d9f575dc0aba6e51fb1c105",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DeleteStack_1.json",
+ "name": "docs/amazon.aws.ec2_vpc_route_table_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "1659b6d17d4004dbeba28d635a752c4601c08c0f99a0d8c10f18487e0a215d8e",
+ "chksum_sha256": "6949a6c14dee4c95019da064cc9e4d8d598b519619b55a92c525d663f269b92f",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStacks_3.json",
+ "name": "docs/amazon.aws.ec2_group_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "0ab32bc29e7af611043dafd9d25ba246951cd826f96baafcae8d27d2432ae1d3",
+ "chksum_sha256": "a151186885efb3ac60e4df4546b2bc7f2778a1ddd33594d10e2bb1a108ac6c84",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStacks_1.json",
+ "name": "docs/amazon.aws.ec2_snapshot_info_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "ffa553afc86b6600a849bdc2baa7fff8a27b94019800ffe85e7edd0ea81ad000",
+ "chksum_sha256": "c9214e6c4f5efd749a941de264d9b6ebe537a20eff1b15158c8905948381e91d",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStacks_5.json",
+ "name": "docs/amazon.aws.ec2_vpc_endpoint_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "a628694b44084384a0e2dbe1800797ef96ea12de2c05c65f716f37d26a1a0006",
+ "chksum_sha256": "ee44535a927f2e817ff380af208be7bf62e754d845364faad9f64e658a69f4b4",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/test_cloudformation.py",
+ "name": "docs/amazon.aws.ec2_eni_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "6e029bd5bebd2a0f9949d07342700f70d66a7effeb88e24711f31ed79c2e987c",
+ "chksum_sha256": "ad8f491e8abfb88a7c0e82bcfdf3ce175ad692204580ca8b72b93031cf2cfcbb",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/conftest.py",
+ "name": "docs/amazon.aws.ec2_spot_instance_info_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "909818cefd5093894a41494d1e43bd625538f57821375a564c52fe4219960967",
+ "chksum_sha256": "fadc139f0477574d1684d5df8dd28d77a2de1ff6b86f6a8f4508aa35b2307322",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/test_aws_s3.py",
+ "name": "docs/amazon.aws.ec2_vpc_subnet_info_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "54f9deff367de9af88b72f7107ba603e138cccce6c84cef87372ee50990e9943",
+ "chksum_sha256": "999b047a64789cdec5da96b323d44cc17f6eb12671b30826852e866889089866",
"format": 1
},
{
- "name": "tests/unit/plugins/modules/test_ec2_group.py",
+ "name": "docs/amazon.aws.aws_account_attribute_lookup.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "b79c910f567ec1b8856cf68ed6723479428115fb97f4cc10333b0d78a6aa0332",
+ "chksum_sha256": "1fbdaba059aee1dc9aba21642293d37f70d5de6a1e8bc82c541909cdc9f848c5",
"format": 1
},
{
- "name": "tests/unit/module_utils",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "docs/amazon.aws.aws_service_ip_ranges_lookup.rst",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "b431a411fa906e9795fdf150baf181aaaf692676e3ef3885bad48d3efc40e154",
"format": 1
},
{
- "name": "tests/unit/module_utils/test_ec2.py",
+ "name": "docs/amazon.aws.aws_ec2_inventory.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "8836d0e6ac39d93a6c5bf4f831b27cc4fb82a2793a46dd4101e9b821e2039b00",
+ "chksum_sha256": "1d0561b82caa108f8d82a10138445ca544ec5d2e17e9214f674dd63b2b75ad54",
"format": 1
},
{
- "name": "tests/unit/module_utils/__init__.py",
+ "name": "docs/amazon.aws.ec2_group_info_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "chksum_sha256": "b58701d44c918a14ddf17393f1c68b0822b6b174748251382ed99bf01c1bfc8e",
"format": 1
},
{
- "name": "tests/unit/module_utils/test_iam.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "2798ce8132ee7ec1e88799ddb6f393d4ac0f7d1b5a95a62872fe9b8fa29d5894",
+ "name": "docs/docsite",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "tests/unit/module_utils/ec2",
+ "name": "docs/docsite/rst",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/unit/module_utils/ec2/test_aws.py",
+ "name": "docs/docsite/rst/guide_aws.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "b9d5d76b80943c248a28f3e193192aad22498d19868fd7b1fd6b6331f0984392",
+ "chksum_sha256": "0b35af8551a6c118f518e16248747d0bc2b1050332329614109c78d66c07d1fa",
"format": 1
},
{
- "name": "tests/unit/module_utils/ec2/__init__.py",
+ "name": "docs/docsite/extra-docs.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "chksum_sha256": "e10844c04b762fc4bcc0e173b2365f6287582097a127da07a8f921afe2a93c3c",
"format": 1
},
{
- "name": "tests/unit/module_utils/ec2/test_compare_policies.py",
+ "name": "docs/amazon.aws.ec2_vpc_endpoint_service_info_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "da57576455084ccc71d4c22985bbc75c1668a94923b11ab5f608b36d27ed6920",
+ "chksum_sha256": "a19da74f2d07ff293a37f7a6ed6bd4e574345144fa9fd0e01256aa0016549fc1",
"format": 1
},
{
- "name": "tests/unit/module_utils/core",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "docs/amazon.aws.ec2_vpc_nat_gateway_info_module.rst",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "ba5cd94fb65e4e54ffabb72025862d8946f25b0899f9756dfba835a8b0f59595",
"format": 1
},
{
- "name": "tests/unit/module_utils/core/test_scrub_none_parameters.py",
+ "name": "docs/amazon.aws.cloudformation_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "10b7e84eaa9920f6532e813c62f67533fd463abeca01061f2d6e9eb6bd68c395",
+ "chksum_sha256": "977c8bf39417e2b5d002a3ecd7f441629cb0cfb26e6ff736067499ae65511d53",
"format": 1
},
{
- "name": "tests/unit/module_utils/core/__init__.py",
+ "name": "docs/amazon.aws.ec2_vpc_net_info_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "chksum_sha256": "09dfe4d5c3207adf248caee8fc46c38343665d0df3fe4a5f7d1cf0f1b635ea21",
"format": 1
},
{
- "name": "tests/unit/module_utils/core/test_normalize_boto3_result.py",
+ "name": "docs/amazon.aws.aws_az_info_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "ed56ea6bbb5ffa50f13033f198a9d50690702c51361d3c9e2cfa2e37ec2de590",
+ "chksum_sha256": "95ae470d8da4cc491c3fb7b0df2301f9c11dfad6a0f358196f23b4d8b995ebe3",
"format": 1
},
{
- "name": "tests/unit/module_utils/core/ansible_aws_module",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "docs/amazon.aws.ec2_eni_info_module.rst",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "9dfea885142db3f385dc3544c6e7c399934859edb55521f0f922f990cff2c38f",
"format": 1
},
{
- "name": "tests/unit/module_utils/core/ansible_aws_module/test_fail_json_aws.py",
+ "name": "docs/amazon.aws.ec2_ami_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "8a974b1c541a6808c56710cab0e272e808960d36b3968ec0088c837f0c6e5ea6",
+ "chksum_sha256": "857be842be45a3b5c3beb87eebec03735ea65443dec965a0f25546cff91ac567",
"format": 1
},
{
- "name": "tests/unit/module_utils/core/test_is_boto3_error_message.py",
+ "name": "docs/amazon.aws.elb_classic_lb_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "b18fb9bc2fcde61c82113fd3065663c75c1a4731d52aa196a6e710845bfe1c99",
+ "chksum_sha256": "5f8a6fba76110951158b1e09475a411a599e0e97452e6535f61a36729563f7dd",
"format": 1
},
{
- "name": "tests/unit/module_utils/core/test_is_boto3_error_code.py",
+ "name": "docs/amazon.aws.ec2_key_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "4d1a5787e87b8256843d0e073cb1252a9ef0017a69cb9b769710da94cdab3b93",
+ "chksum_sha256": "8b1ed9640717d35f9328a31ba74f5b3773816c17eeaa80bf599a06965bb97825",
"format": 1
},
{
- "name": "tests/unit/module_utils/conftest.py",
+ "name": "docs/amazon.aws.ec2_vpc_dhcp_option_info_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "2e722f74b02b3af62498536794cf19d8ecc9dcafa0fa06eb750a32f1fff7a7cc",
+ "chksum_sha256": "6115a51db90965722433936b6a6295a9aca9384407898464f8ad3bd19619f7b0",
"format": 1
},
{
- "name": "tests/unit/module_utils/test_elbv2.py",
+ "name": "docs/amazon.aws.ec2_vpc_dhcp_option_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "b0ab941ff4416b05814d6b35d1f46e8fa8cc7c243129a308e338b9107522aa7f",
+ "chksum_sha256": "dc5c43d414ec517ec30dced67820d40edc1d6c357b8476d6e9bf7d5fbf69c32b",
"format": 1
},
{
- "name": "tests/requirements.yml",
+ "name": "docs/amazon.aws.ec2_vpc_nat_gateway_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "8bde57f76898361f17cbc2cd56a489b8a0c92f6a2da4eb68db801b041bcad732",
- "format": 1
- },
- {
- "name": "tests/integration",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "chksum_sha256": "47264399ad820d13964616083afa48f8506b1836f038231328cabcdb469dee7c",
"format": 1
},
{
- "name": "tests/integration/requirements.txt",
+ "name": "docs/amazon.aws.ec2_metadata_facts_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "fe8e222150e8f462c17cb8616ad8de4e39f3be13c469bf3d629edd7c0d30d65a",
- "format": 1
- },
- {
- "name": "tests/integration/targets",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
- "format": 1
- },
- {
- "name": "tests/integration/targets/prepare_tests",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "chksum_sha256": "4c12cb3a01e50aa8a748b3362397ccc1e2494b8c6d1a22dc8ce3b72883eedb4c",
"format": 1
},
{
- "name": "tests/integration/targets/prepare_tests/tasks",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "docs/amazon.aws.ec2_vpc_igw_module.rst",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "2107cb3572981cf07794ae27ff85e7669f339b2eb492e759f44c5f92dab05e45",
"format": 1
},
{
- "name": "tests/integration/targets/prepare_tests/tasks/main.yml",
+ "name": "docs/amazon.aws.ec2_vol_info_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "chksum_sha256": "b75ed393a5cf4f7a7aff20e3f6a5855362986bed39a8db60b59bb54102ed907c",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vol",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "docs/amazon.aws.ec2_vpc_igw_info_module.rst",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "bdb4f1ef7694aa6c2c9e69eb0936d3588c0f09d2bab5c2d6172869bd16bee1e7",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vol/tasks",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "docs/amazon.aws.aws_caller_info_module.rst",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "ed80c0d4d78d38f0d4e1b4e3517cf0613c2352006a2ec841c28e139fc567cf0b",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vol/tasks/tests.yml",
+ "name": "docs/amazon.aws.ec2_vol_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "686f9be4c1cde9ef169c1bdc4ac1ad2c83a777e2de5160d685f3dff7f7f7665c",
+ "chksum_sha256": "762fa89a0400e1e87aff4c360e2879e107f45ce012aa1f9f0e462d2447243c37",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vol/tasks/main.yml",
+ "name": "docs/amazon.aws.ec2_vpc_route_table_info_module.rst",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "6b8c7b9958683b814cc8d374f3cdfa890092477544e190f98dc6049e5551f443",
+ "chksum_sha256": "1c72740c7d72748ebfd3c9c06af131e3928634b58c61c9c6729e0eef66fc5390",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vol/meta",
+ "name": "meta",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vol/meta/main.yml",
+ "name": "meta/runtime.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "f2281d88e947484d75d24198c279de09c5f72cad949ec6d052220723e8ad63e0",
+ "chksum_sha256": "017995fab25d2632c6ffb0962e97a1dc6fd32621320c6dbf9b9e1d731eb4f8db",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vol/main.yml",
+ "name": ".gitignore",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "b3db779a67fc33ee5db96cb2c912a70c6c072bb7461baefed35890bc6ea25f1d",
+ "chksum_sha256": "5a00777ca107231dc822535458402764507be2cf2efa433ea184bb2163e07027",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vol/aliases",
+ "name": "README.md",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "7d5ce8e9778ef05de53021d431c1a140e90329e6f324ecb5b373218455255e62",
- "format": 1
- },
- {
- "name": "tests/integration/targets/ec2_vol/defaults",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "chksum_sha256": "164ffc931d7c79e3c28d972adf46251c6c33be3e2d80576992b416b999d11a58",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vol/defaults/main.yml",
+ "name": "CONTRIBUTING.md",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "1712fb3e9db585017ca3a3c80209b1974957454860c6ad6cff4de5fb28fbb791",
+ "chksum_sha256": "dd9f096951dd8aaa4ba640e8a1c333edc7cc0a145132c70358dadcaaac68b81b",
"format": 1
},
{
- "name": "tests/integration/targets/lookup_aws_account_attribute",
+ "name": "changelogs",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/lookup_aws_account_attribute/tasks",
+ "name": "changelogs/fragments",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/lookup_aws_account_attribute/tasks/main.yaml",
+ "name": "changelogs/fragments/.keep",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "6500060d4ee06642300066f277634203e32639982b32220c5d31e96d775a6cbd",
+ "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"format": 1
},
{
- "name": "tests/integration/targets/lookup_aws_account_attribute/aliases",
+ "name": "changelogs/changelog.yaml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "9819a2eafef4b14f0c325412689a7b0cc4a3e2b364fb4af1304783caa1971c9b",
- "format": 1
- },
- {
- "name": "tests/integration/targets/module_utils_core",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "chksum_sha256": "081fb73217319ef1f8655d6d091471029fbff84635e1321432a90ea32f1bf258",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_core/runme.sh",
+ "name": "changelogs/config.yaml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "0d48d5adc889ec75147bf7ed1200f2cd1cde582de74e2523b9687e0204167cb5",
+ "chksum_sha256": "a5108e9a705d8037b5e214c95ff2bba76e09c8ff4c391c144f1f6f7a5edb051f",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_core/inventory",
+ "name": "requirements.txt",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "4514e38376fcaaeb52cb4841f3aeeb15370a01099c19e4f2ed6a5f287a49b89a",
+ "chksum_sha256": "c240ce72f51632096474e8fd558f8a61b47d74f06dd4a73e322e7770156f7cb7",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_core/templates",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "test-requirements.txt",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "ee7792a2a17ff1520001e9689a438f1c1938fcca25a1a482641b0faa2faa9feb",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_core/templates/boto_config.j2",
+ "name": "bindep.txt",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "ba7335ce0c8b8a32fc82bf7522a0f93d69190ff9895f4804985d2c08b7b3fd37",
+ "chksum_sha256": "87c61ee29c6b14665943e7f7ffc4ce51c3e79e70b209659161b278bca45abb12",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_core/templates/session_credentials.yml.j2",
+ "name": "COPYING",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "6104b125462eb5b6c5e5067e6c5b9041f0804c29755200fda62f0472a4a29f1e",
+ "chksum_sha256": "0ae0485a5bd37a63e63603596417e4eb0e653334fa6c7f932ca3a0e85d4af227",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_core/meta",
+ "name": "tests",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_core/meta/main.yml",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "addf553f8dde7a6ec25f08591c282eee5491f9815963c51285a4c00294af2863",
- "format": 1
- },
- {
- "name": "tests/integration/targets/module_utils_core/setup.yml",
+ "name": "tests/requirements.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "d416d3ebcd9ea58c450a07ec98a78f42423bde3fdf2396971c8af836169e7b17",
+ "chksum_sha256": "3dc9e987478591bdd026a9d125617a6a915cb332a34c7850909b0f8c9a634ccc",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_core/main.yml",
+ "name": "tests/.gitignore",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "40fd2ac4ad62f120b0ab06ebc1b597f8df9a56b02772cff353ac457aa7cc6023",
+ "chksum_sha256": "e33e9227e6fb67d4bf8c2e9b095ed2d6d324684dadf237cf749467f92d14e0f4",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_core/roles",
+ "name": "tests/unit",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client",
+ "name": "tests/unit/plugins",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client/tasks",
+ "name": "tests/unit/plugins/lookup",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client/tasks/profiles.yml",
+ "name": "tests/unit/plugins/lookup/test_aws_ssm.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "bbc4902462428729910b71ea6bd2fb11013fad58e998be8f9f2e4a86f97a8387",
+ "chksum_sha256": "ce163708fac063acdfa5bdbf2add10890984e9d6ebba4cae494907246e1ec855",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client/tasks/credentials.yml",
+ "name": "tests/unit/plugins/lookup/test_aws_secret.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "90995fadd544d2ac3490121a30cd7414fdb89495231bdf16535a6b6c7d491638",
+ "chksum_sha256": "cc4e751181bd9dd42c123600f9b54372a96d6848ccb2e4c1f73d30cb1cfc0278",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client/tasks/main.yml",
+ "name": "tests/unit/plugins/lookup/__init__.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "b63ff3b3058da02396d2322c56e9fe7dd6ed282a247bcc841647ee7dab6e2127",
+ "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client/tasks/ca_bundle.yml",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "96f95ee62565f62141122c6ebf63bb25d472f88135703716f395ba64c8ed30d3",
+ "name": "tests/unit/plugins/lookup/fixtures",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client/tasks/endpoints.yml",
+ "name": "tests/unit/plugins/lookup/fixtures/avi.json",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "7c8d0f5147bcb991f8f393e55d775d1eb135b38e5704f53ef2944efa85fc8d8d",
+ "chksum_sha256": "3739de410d134591fada61f62053bfab6fcbd5c80fe2267faa7971f9fe36570d",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client/meta",
+ "name": "tests/unit/plugins/modules",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client/meta/main.yml",
+ "name": "tests/unit/plugins/modules/test_aws_s3.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "6a143b3afe5a63a2455faaeaaa91684381fd151bb0564127c55053fe24a23b63",
+ "chksum_sha256": "54f9deff367de9af88b72f7107ba603e138cccce6c84cef87372ee50990e9943",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client/library",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/unit/plugins/modules/conftest.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "909818cefd5093894a41494d1e43bd625538f57821375a564c52fe4219960967",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client/library/example_module.py",
+ "name": "tests/unit/plugins/modules/test_ec2_group.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "6bcaf886524922e05fae62d6b7efefd576925c7148e948fe0b43ba41f14bdb47",
+ "chksum_sha256": "b79c910f567ec1b8856cf68ed6723479428115fb97f4cc10333b0d78a6aa0332",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client/files",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/unit/plugins/modules/test_ec2_vpc_dhcp_option.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "7191e49cc2abf2ae41072bb9ac132127e5834a752f643daf62d5b1b5f1538c53",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client/files/amazonroot.pem",
+ "name": "tests/unit/plugins/modules/utils.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "2c43952ee9e000ff2acc4e2ed0897c0a72ad5fa72c3d934e81741cbd54f05bd1",
+ "chksum_sha256": "b68f9ac9c8f02f1e87b0294a125adb102c718f6e3e5f856ec3401b2b890003cf",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client/files/isrg-x1.pem",
+ "name": "tests/unit/plugins/modules/test_cloudformation.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "22b557a27055b33606b6559f37703928d3e4ad79f110b407d04986e1843543d1",
+ "chksum_sha256": "97af86714d76ce5cbe92a0101d6cf230ca7bf9ea92029b50e416c8cb0a976bdc",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_core/aliases",
+ "name": "tests/unit/plugins/modules/__init__.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "9819a2eafef4b14f0c325412689a7b0cc4a3e2b364fb4af1304783caa1971c9b",
+ "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2",
+ "name": "tests/unit/plugins/modules/placebo_recordings",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2/runme.sh",
+ "name": "tests/unit/plugins/modules/placebo_recordings/.gitkeep",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "77f0aee24edf123eeb5d2537829366a24d3dfdcda5dea2adb15b152c2c4ce88d",
+ "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2/templates",
+ "name": "tests/unit/plugins/modules/placebo_recordings/__init__.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2/templates/inventory_with_use_contrib_script_keys.yml.j2",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "2cfe0d4af5b96e2dba042d80cb5de7dd62eb3eff3d1203486aadf76a9119c881",
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_do_nothing",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2/templates/inventory_with_constructed.yml.j2",
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_do_nothing/cloudformation.DescribeStacks_1.json",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "052ed7f1c0238f101cd97972a209871be718417d34a08164a138a775646fa0bf",
+ "chksum_sha256": "b8f4dc01c750d860f317a98f598bf3acd7edfbc970054b2793013dfcad61c82f",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2/templates/inventory_with_cache.yml.j2",
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_do_nothing/cloudformation.DescribeStacks_2.json",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "212ab399c1d1f49130ad7755b99b619def84d42129f1a7d4e66ba24fcbd76c10",
+ "chksum_sha256": "6acc11fdfc1929b45d589d8c77c2f9fae80d48e840d0e9cf630e362d6b288d4a",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2/templates/inventory.yml.j2",
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_do_nothing/cloudformation.DeleteStack_1.json",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "7b9771837ad83a89cc76088bf8aa09be6f6d5e8c980f3ed4d72fb41fcb192af6",
+ "chksum_sha256": "c09d7d26c96cb5b734e0198b88b00a13fc0d54d65b444278497c17c0f877fa29",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2/templates/inventory_with_template.yml.j2",
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_do_nothing/cloudformation.DescribeStackEvents_2.json",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "afa5f9d7fc119409ecb2e6b5f45409ed738750034d7d96fc34580d64dd84b811",
+ "chksum_sha256": "026ca2db13f88bfb4b469d5cd3c2ad5cf6635305fdbbab11e9d5d1d3330b26c2",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2/templates/inventory_with_concatenation.yml.j2",
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_do_nothing/cloudformation.DescribeStackEvents_1.json",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "6777b9eaea5da24615ec813fcab4f75cfd6fb02870eff6021fad80ca104f505b",
+ "chksum_sha256": "3d1711eba6a7c18f0ed7e00a1602dcd0dde519205fe6afc42446e1f222b9fe48",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2/templates/inventory_with_include_or_exclude_filters.yml.j2",
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_do_nothing/cloudformation.CreateStack_1.json",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "921fb290a6b74b12988cce58a07ca052396ebd9d8313e7affca103a32b27b022",
+ "chksum_sha256": "2864dab59c7432ad2ae594e121ee581cface7b130025cd88d0cb4aead4215168",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2/playbooks",
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_delete",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory_with_constructed.yml",
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_delete/cloudformation.DescribeStacks_1.json",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "ce4856fea65d5a040c2e875e16e7f9e6e57835dac495ec6828bdc509ee96b4b1",
+ "chksum_sha256": "0cfa4607aa88d41fa7229383460169b9fc76c3bba6593f82d320f71c6c866325",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory.yml",
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_delete/cloudformation.DescribeStacks_2.json",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "3c1670564284272619dee9b374aa21278cc265c8db4d014d9a7add29fffd1d3f",
+ "chksum_sha256": "8ce2c0c2869f52248ea296808b1f10cf0ee3491c18784c9270c8bd55087a8250",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2/playbooks/test_refresh_inventory.yml",
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_delete/cloudformation.DescribeStacks_3.json",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "9c4cfb83bbbbe73c2f0acfa8a3d694303b20cbc5d20bfab9b7dcdb1af6d267ac",
+ "chksum_sha256": "f197d5a0c7cb66d160864e359e2a62c857af76c1d1b0530180ea2e35fdb20efe",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2/playbooks/empty_inventory_config.yml",
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_delete/cloudformation.DescribeStacks_5.json",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "44a9f1885f675a872bebe0a1af0c40551688c8ccc1aeb700e74926a8edf69278",
+ "chksum_sha256": "3a2720e847e9e878deab88eaa919c0e14e97210f581138c6341ff97a85da1b38",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2/playbooks/create_environment_script.yml",
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_delete/cloudformation.DescribeStackEvents_3.json",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "7fab6eae1dd6c175c638a8d241f65106938c898687eb864e8c2dd1c5b7761ed2",
+ "chksum_sha256": "cc3383f46239477355a56db754bdaa2185283e10cf6e9a7bfeb1813c4488afd3",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2/playbooks/setup.yml",
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_delete/cloudformation.DescribeStackEvents_5.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "b829b3d4436f35e812bdb84da99c1d31a1c9b8be476a4b1ecab3587f7ee0f6e9",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_delete/cloudformation.DescribeStacks_4.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "7163b875588f3e98a210fb5291149ec0f6c83213a533086ad8f37e2f9dfa012f",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_delete/cloudformation.DescribeStackEvents_4.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "18bbc5347bdc51e636840dda8020e4fe198d144d2d7bdfb5b800fbbf9521b551",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_delete/cloudformation.DescribeStackEvents_2.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "450c976b530fffd60e39b84a237375f46fb82bb8d09ec77a38d5ac3b87c59e18",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_delete/cloudformation.DescribeStackEvents_1.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "d18b0ee2d7aab11783d7ddca1bd7b822775a2e87286cae87a8bb37a25c8dbd22",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_delete/cloudformation.CreateStack_1.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "69ca28e411219d5afe76ad868d648f072fbbd2047223aed04f51c451a901dcc7",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/invalid_template_json",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/invalid_template_json/cloudformation.CreateStack_1.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "1cb3d77662d35b7703f65278ffdca78e6eb520e96fb3807d39ea3aa02086c1b7",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/delete_nonexistent_stack",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/delete_nonexistent_stack/cloudformation.DescribeStacks_1.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "15ea45877e966ada21f276359194aea32061bbb5fbf9269782518ff9c94fecc7",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/delete_nonexistent_stack/cloudformation.DescribeStackEvents_2.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "a58899e48f620454d1a1c59a261dec5f527970ae4b68f60a2e7bccef29ab5df8",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/delete_nonexistent_stack/cloudformation.DescribeStackEvents_1.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "c585064211547fe7d0e560cabf12512ee49ca2bbc8622c3a615333aec1eb3dbb",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStacks_1.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "ffa553afc86b6600a849bdc2baa7fff8a27b94019800ffe85e7edd0ea81ad000",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStackEvents_7.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "49426404c58cc23230c46a32d193591ee51bb270486618bb5f76bf8b1cd63d86",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStacks_2.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "b3fcc844d47ebcb9a759b328b8b513245bc2f9e6feded2b42806301d878d7bff",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStacks_3.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "0ab32bc29e7af611043dafd9d25ba246951cd826f96baafcae8d27d2432ae1d3",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStacks_7.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "18a0d86b4fe1d679715ab099b8413d22e6a47ff960c876525ef3dd79e77d18f6",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStacks_5.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "a628694b44084384a0e2dbe1800797ef96ea12de2c05c65f716f37d26a1a0006",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStackEvents_3.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "e8967f3d6bfc91be380066e1a70070f0a33a239c9548b02c44c92ad550741cdc",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DeleteStack_1.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "1659b6d17d4004dbeba28d635a752c4601c08c0f99a0d8c10f18487e0a215d8e",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStackEvents_5.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "a016652b1a0138353843f04780fae13e60226a22c4093d200bc07c8a89d75d44",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStackEvents_6.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "0306607b12dcf8490f9e5fdca401eb9bb39e3f5507327f87996b804c825a50c5",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStacks_4.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "98875bdf3813bbeeb89c537778974a65f6365644de28431a35412753124848fa",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStackEvents_4.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "be7d4745aa48792bbb544043808428f39dd75a1dd0f75d928d2e7626d22ed762",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStackEvents_2.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "664f1be3c9bb37661bcccba1de32aa3de0fdb08edcf8b276d726acfecc29baad",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStackEvents_1.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "790f9822331226175639d3e8c2645cfca0152f1e0fe24c82ab715e499ea070ac",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.DescribeStacks_6.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "938495a4a09e83ede7e4c3a282cb93b1de0dd10435e4f670c301ab4ab4bc63e6",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/basic_s3_stack/cloudformation.CreateStack_1.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "adf858c3c40416e8bc9273ea17d578448c6497841cd05ae48616f49d0a44d723",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_rollback",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_rollback/cloudformation.DescribeStacks_1.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "91a1e065a4854be515095aba447d6a011bb3bac6f8d5b0e3a9081f74ef873096",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_rollback/cloudformation.DescribeStacks_2.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "6d677c34f0715af2049abef7a479d1362760a0c089ff741d9ac0beed56849251",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_rollback/cloudformation.DescribeStacks_3.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "afe46889f0ec4537b13694f164343440b1fcb0334c539a5a7ec895d36fcf7953",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_rollback/cloudformation.DescribeStackEvents_3.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "dfc55200a0f4d01d94845448b7c67f175cdf56e49df4bf9305525e7ffe543c64",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_rollback/cloudformation.DeleteStack_1.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "d1a0160fbde4f68c768aaf73182e2369a95721f2bb2e7ab5e9ee42016747dfa7",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_rollback/cloudformation.DescribeStackEvents_2.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "fe29fabc6f34c58976b23132558a2024af53e655f825cd8e5d1b2f39cc89ddcd",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_rollback/cloudformation.DescribeStackEvents_1.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "0e2832a3c70031ba07c44b0f8b291a04251052c22f764110bf0cd034d406bfbd",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/on_create_failure_rollback/cloudformation.CreateStack_1.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "985c5ada32ac440bc971b553e75cb8516c52b9e78b50e6750d4d92ab2e4a9634",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/__init__.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStacks_1.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "41999395cf5a12be8eccc991675c44fe12b20433ed7cc7ca541f568b377c7a33",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStackEvents_7.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "ead6f1c137dfc1628237502c6a955d3338770ff85f1715027f022e7773ed9992",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStacks_2.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "e53c53e08397b8fca0f8e6a69a5254bb092b4f403f0fec0d9bff4352c3cc1192",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStacks_3.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "05ac9850aa91e5ed4753d901e9bf0641c08c7be9148b633681cea76c94747fc8",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStacks_7.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "d3bec7dd62d084a3d115ea7f05a34052625b80d56839022b9ebcee2583053412",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStacks_5.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "c9ab3c3b4d2e19ca6f764f2e8288dea1e52157dc1d49a319717bd65a3cc770e1",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStackEvents_3.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "d75b82445e89303b2b4af1ef3161d4e315c6d02014c6df00165a9c526fc9bc56",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DeleteStack_1.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "1659b6d17d4004dbeba28d635a752c4601c08c0f99a0d8c10f18487e0a215d8e",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStackEvents_5.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "e6f4a74e04f58505d1132c6981fffc1f24e79cbad86c69883677b3cb1703df5d",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStackEvents_6.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "6f57b0469a084bb8891bdd14610b2dba1ef3baaab8436cff1413065e276012db",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStacks_4.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "148d631b9e7bf07824a845880565c98a102dd0864a40328320db40f545ee7834",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStackEvents_4.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "7edd15131c553f2bff19840a66cd2498cf98c4f93bd8164a51ab3eb81a619ba9",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStackEvents_2.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "f4d8ae33d4fe9f0aaa4c6c744174b1ad849d5881154fc8a5eb32fd8ee07566e0",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStackEvents_1.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "bf9e2fefb8c13c2b5040c8b502f6aa799f6da6b69c1a8e48e4e870536222df8b",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.DescribeStacks_6.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "fa726d3ba3ac078180b857893f2c5aec60526a8d60323a8cc06121a4bacdf982",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/client_request_token_s3_stack/cloudformation.CreateStack_1.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "4b83e42429a361b2be7b2524340d20264b43f5b0e4cb44fe5bafc3670e9f9d03",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/get_nonexistent_stack",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/placebo_recordings/cloudformation/get_nonexistent_stack/cloudformation.DescribeStacks_1.json",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "362d4ebe6fb6b538c0f74a6326a7697d6d129a77c3bfffedc24a5cac14b20e5a",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/fixtures",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/fixtures/certs",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/fixtures/certs/simple-chain-a.cert",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "9e4b01f50b09f45fcb7813e7d262a4e201786f0ecd76b45708abe55911b88fd2",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/fixtures/certs/chain-1.3.cert",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "ef1018e15bb9fad1e7a4f15aa6191e80042fc7fc08ef4bec3e115d96a9924b98",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/fixtures/certs/chain-4.cert",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "37fb85917db1cd90b5881c8d3d3a9d51ae7c9b904020d0ffbf0734bcf11bb666",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/fixtures/certs/b.pem",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "2937cb7102c4d4902b09aada2731c1b0165e331dbfde9990644c4c3ee1544b21",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/fixtures/certs/chain-1.4.cert",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "4e15c84bcf1024f5bb0b2940844fdc4ed97ba90ef7991b513d1659b43a0e7783",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/fixtures/certs/simple-chain-b.cert",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "9e4b01f50b09f45fcb7813e7d262a4e201786f0ecd76b45708abe55911b88fd2",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/fixtures/certs/chain-1.1.cert",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "0325c21e49992708528ebf66162c18e1e1eb2a0837c6d802b1cf3bde73ec06bc",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/fixtures/certs/a.pem",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "ef0266ee8cf74a85694bf3ce1495260913b5ca07189b0891bbfc8d4c25b374ea",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/fixtures/certs/chain-1.2.cert",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "d9e3dfae7a19d402a8de1a2b65fcc49c43ff489946e8ca9e96efa48783e26546",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/fixtures/certs/chain-1.0.cert",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "428e852fcbe67bbdbb2d36fb35bef4b2fb22808587212e19f3225206ceb21c12",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/fixtures/__init__.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/modules/fixtures/thezip.zip",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "02a319fb1a6d33b682f555eefb98f2a75b2a3be363e1614c373431b4f30fda7f",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/__init__.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/inventory",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/inventory/test_aws_ec2.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "ad4afaa96f5340e3a56aa8f16cffd6449a1a7dbf5be431c8d35439daaa9452b9",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/plugins/inventory/__init__.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/module_utils",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/unit/module_utils/ec2",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/unit/module_utils/ec2/test_aws.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "b9d5d76b80943c248a28f3e193192aad22498d19868fd7b1fd6b6331f0984392",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/module_utils/ec2/__init__.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/module_utils/test_elbv2.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "b0ab941ff4416b05814d6b35d1f46e8fa8cc7c243129a308e338b9107522aa7f",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/module_utils/test_cloud.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "ff53d7706da41fd3beed0f3795cda9f5f026bf741cfd4ce1d7af46a5624c9526",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/module_utils/conftest.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "2e722f74b02b3af62498536794cf19d8ecc9dcafa0fa06eb750a32f1fff7a7cc",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/module_utils/core",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/unit/module_utils/core/test_is_boto3_error_message.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "b18fb9bc2fcde61c82113fd3065663c75c1a4731d52aa196a6e710845bfe1c99",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/module_utils/core/ansible_aws_module",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/unit/module_utils/core/ansible_aws_module/test_require_at_least.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "25456a3d4166a64e1d6a996dc7530b89e2d6bbb7a2e6f7529b1051047290c6ce",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/module_utils/core/ansible_aws_module/test_fail_json_aws.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "8a974b1c541a6808c56710cab0e272e808960d36b3968ec0088c837f0c6e5ea6",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/module_utils/core/ansible_aws_module/test_minimal_versions.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "7928c879ee000b2dcebd89ddb41bb9ada0528c0e67e30897973915ddcb302cf3",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/module_utils/core/__init__.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/module_utils/core/test_normalize_boto3_result.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "9a04ea8e58cba6413faa8ab8196cca491aa85823cee9239ff3fa9f6e1f5bb107",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/module_utils/core/test_is_boto3_error_code.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "4d1a5787e87b8256843d0e073cb1252a9ef0017a69cb9b769710da94cdab3b93",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/module_utils/core/test_scrub_none_parameters.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "5c1df1e791e92909fbd9ac9800d48eebb918e2f46925e6779e1021e3a2ca13ed",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/module_utils/test_iam.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "2798ce8132ee7ec1e88799ddb6f393d4ac0f7d1b5a95a62872fe9b8fa29d5894",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/module_utils/policy",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/unit/module_utils/policy/test_compare_policies.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "1b88b6c71c4f39bf03144f1398443ff90a7ad08975eabe556ea1066cf28119f7",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/module_utils/test_tagging.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "18ff27601c1bdd2846e18736722ebacb280e6c39d852c5d905717b4a09282912",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/module_utils/test_s3.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "1f8e23a4e9ef4424c49c21752cac628d7b216b25781703f5e56b78fe27805a4a",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/module_utils/__init__.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/module_utils/test_ec2.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "ffee9c1be402c25941d65038df47d6b7523eb0ce8e39d32ebff3780a3902aa7c",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/mock",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/unit/mock/loader.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "cfe3480f0eae6d3723ee62d01d00a0e9f58fcdc082ea1d8e4836157c56d4fa95",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/mock/path.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "c44806a59e879ac95330d058f5ea6177d0db856f6e8d222f2ac70e9df31e5e12",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/mock/__init__.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/mock/vault_helper.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "4535613601c419f7d20f0c21e638dabccf69b4a7fac99d5f6f9b81d1519dafd6",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/mock/yaml_helper.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "fada9f3506c951e21c60c2a0e68d3cdf3cadd71c8858b2d14a55c4b778f10983",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/mock/procenv.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "3d53f1c9e04f808df10e62a3eddb460cc8251d03a2f89c0cbd907d09b5c785d9",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/requirements.txt",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "41d8953ed10c920c59851c6c04a238abd19a54f102a84119d35d3c0538201a36",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/__init__.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/compat",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/unit/compat/unittest.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "5401a046e5ce71fa19b6d905abd0f9bdf816c0c635f7bdda6730b3ef06e67096",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/compat/mock.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "0af958450cf6de3fbafe94b1111eae8ba5a8dbe1d785ffbb9df81f26e4946d99",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/compat/builtins.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "7163336aa20ba9db9643835a38c25097c8a01d558ca40869b2b4c82af25a009c",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/compat/__init__.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/utils",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/unit/utils/amazon_placebo_fixtures.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "64958b54e3404669d340a120f6b2c7ae79f323e6c930289514eba4569d1586c1",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/utils/__init__.py",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "format": 1
+ },
+ {
+ "name": "tests/unit/constraints.txt",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "ec83a13627c064f0cb08f2b99b2d94dfad8a9ae4de0700fd2514e4765a4daa53",
+ "format": 1
+ },
+ {
+ "name": "tests/sanity",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/sanity/ignore-2.9.txt",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "d15d4ceba35abbe0bdfb264ccf5dba11db238487c00538068d821c6c4283e6ff",
+ "format": 1
+ },
+ {
+ "name": "tests/sanity/ignore-2.13.txt",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "e461655d727d930fc329830c6fd978b4362d570d77b23f0ec003709948b0fb74",
+ "format": 1
+ },
+ {
+ "name": "tests/sanity/ignore-2.10.txt",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "e461655d727d930fc329830c6fd978b4362d570d77b23f0ec003709948b0fb74",
+ "format": 1
+ },
+ {
+ "name": "tests/sanity/ignore-2.12.txt",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "e461655d727d930fc329830c6fd978b4362d570d77b23f0ec003709948b0fb74",
+ "format": 1
+ },
+ {
+ "name": "tests/sanity/ignore-2.11.txt",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "e461655d727d930fc329830c6fd978b4362d570d77b23f0ec003709948b0fb74",
+ "format": 1
+ },
+ {
+ "name": "tests/config.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "9d75ecdecbd76691b04ec2d5fcf9241a4366801e6a1e5db09785453cd429c862",
+ "format": 1
+ },
+ {
+ "name": "tests/integration",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/requirements.txt",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "935dc8395bfcc1d1b1bde8d4991fb9559783ca8910badda0403eb9c7adec8a8d",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/constraints.txt",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "ec83a13627c064f0cb08f2b99b2d94dfad8a9ae4de0700fd2514e4765a4daa53",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2_key",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2_key/tasks",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2_key/tasks/main.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "bd192f760b9537f6aaddf3ead43c9f460526dfe4d5194e6178cce5093f04df53",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2_key/meta",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2_key/meta/main.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "b2abccc927cfe77a04c442fe4cb680cef1163c594f5b1b91afbb7769b8d392cf",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2_key/aliases",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "cf4fb8f0e9df1f65d20fb104f78d7eb3f5a36caaaefb05c0b3e1411e06fb6211",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2_key/defaults",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2_key/defaults/main.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "26aad832330421916caec9fe34ebc8d1bfa90d867b66ad745f4c12ebe84cc3c3",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2/tasks",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2/tasks/main.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "4751b15e3c848962d6a4448bbfe8a1d50219ea254431e3820b35eb2884288453",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2/meta",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2/meta/main.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "513205535169d91c98bcdbeab464e21787b6d9ae122c3eaebb1933591d615715",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2/aliases",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "7289894f07e47a82972994ae89cfaef863f54310114e1c5d7122f7fc08bc19fe",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2/defaults",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2/defaults/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "35e8c023d4aeff7399dd2b955fd50508d467e8d1f76f4612d48c5adba827f4b1",
+ "chksum_sha256": "3b058d15ce5c47251745cabedbcc0a34d06f2c9b8e19945a55c79b7abacaa40d",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2/playbooks/populate_cache.yml",
+ "name": "tests/integration/targets/aws_caller_info",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/aws_caller_info/tasks",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/aws_caller_info/tasks/main.yaml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "9aca47de38e5d7627fa0186adcc410920965b8fd2fbb8ede5cd2f19f7fad7198",
+ "chksum_sha256": "ee3b4355d2876a8648831474ce0b430c22c21035551ba77c0a125f4e2866a0e8",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2/playbooks/test_inventory_cache.yml",
+ "name": "tests/integration/targets/aws_caller_info/aliases",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "f343af936f7105a81f703b55b5ed86bd3aab8b35ca6dc0672c5e5cca8dda3c16",
+ "chksum_sha256": "7289894f07e47a82972994ae89cfaef863f54310114e1c5d7122f7fc08bc19fe",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2/playbooks/test_invalid_aws_ec2_inventory_config.yml",
+ "name": "tests/integration/targets/ec2_group",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2_group/tasks",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2_group/tasks/numeric_protos.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "4a57efe1ec08416ea90c3a80c03d0d3240a928933d5f46251acf97c9375b0a0e",
+ "chksum_sha256": "255ae824e4a300df540242151e8cc8035b06646af0761009dcd4b68dfd807579",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2/playbooks/tear_down.yml",
+ "name": "tests/integration/targets/ec2_group/tasks/multi_account.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "fb7b069cb3653ca58ed6793a4e85a414ea6e9843fba4547a2768367fc4fbe7c3",
+ "chksum_sha256": "c5249cb541d660e400607344b991860732c733b0db1b02a471b9e1a531446a49",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2/playbooks/create_inventory_config.yml",
+ "name": "tests/integration/targets/ec2_group/tasks/ec2_classic.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "605a7f63f212908dfca5b8f52a01def2c2cb06500c4c4bc33f7356d6b4eb35d9",
+ "chksum_sha256": "a73d5c1b081c005988fef557e489304eaa5e3c336e75d4630930e316c64cf86c",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory_with_concatenation.yml",
+ "name": "tests/integration/targets/ec2_group/tasks/egress_tests.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "a72f35ed14d9205ceae2467a29b59906499bb97a3e2bdac56e8a19f05712f015",
+ "chksum_sha256": "45866ac187b9b2d08e62c7192534c1fb4324d1074c7ce0e99f23af7a4542725b",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory_with_include_or_exclude_filters.yml",
+ "name": "tests/integration/targets/ec2_group/tasks/data_validation.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "34f232cc786fc22bc1a4afb5de14480a2a6730851ddb11ca369b931defb37ce5",
+ "chksum_sha256": "abdc617375c38e979faec977c117e0222b562dd57790967cd70285eae414a564",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory_with_use_contrib_script_keys.yml",
+ "name": "tests/integration/targets/ec2_group/tasks/diff_mode.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "a2e7c53854f63ff9d694e53c71d918577b9db2813e898844c1e218fb717be1f9",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2_group/tasks/ipv6_default_tests.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "6bb647f2131f03bac5088a0b6f2f1bc3e14bf253189a504c93d6d50023e962e7",
+ "chksum_sha256": "f5e31c187ae076f3fc2f56b32526515b419319301459030a2dfccb9ed48c5887",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2/test.aws_ec2.yml",
+ "name": "tests/integration/targets/ec2_group/tasks/multi_nested_target.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "chksum_sha256": "c0e3bf023c0515b10dc60136e6764b152d38f2235df06d4c566d7140c8ebd47a",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_ec2/aliases",
+ "name": "tests/integration/targets/ec2_group/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "dfd08779f7170cd572bd30bd277562ab0ba7d35bf6f94c67bf194ffb04c77ea9",
+ "chksum_sha256": "047f7dae53a66f1a26403401e920416e6199d0d837816a21546d9309f0dea5e2",
"format": 1
},
{
- "name": "tests/integration/targets/setup_ec2",
+ "name": "tests/integration/targets/ec2_group/tasks/group_info.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "fe9c254d9db27cb08dd78f8a915affa46b8c29bd3910c8bf36fc6a6887f94dda",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2_group/tasks/rule_group_create.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "3e6ba49498995b770799754f49f565d14c1b7d9ab50848beaccb65aa527100a6",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2_group/meta",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/setup_ec2/tasks",
+ "name": "tests/integration/targets/ec2_group/meta/main.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "addf553f8dde7a6ec25f08591c282eee5491f9815963c51285a4c00294af2863",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2_group/aliases",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "ba142ee79a066d4ebf585c525ce375615d7e97ac889524c61bb30ff5d09ea24e",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2_group/defaults",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/setup_ec2/tasks/common.yml",
+ "name": "tests/integration/targets/ec2_group/defaults/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "386a6e24ee18825f0103bdbe6690b11da5beb2b2bdc53e842f50ce9b3c3fae69",
+ "chksum_sha256": "0f708cce7788b24124e9ac7b36c00ebefe26cc05ce69404f5a6538b09a928e0a",
"format": 1
},
{
- "name": "tests/integration/targets/setup_ec2/vars",
+ "name": "tests/integration/targets/ec2_vpc_endpoint_service_info",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/setup_ec2/vars/main.yml",
+ "name": "tests/integration/targets/ec2_vpc_endpoint_service_info/tasks",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2_vpc_endpoint_service_info/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "03b695815f4b4833868e52a22f310bcac3be7e6e40e10ed1dcb2a7c9b13556e4",
+ "chksum_sha256": "4d1f5c0c649eb9d5e890f11221eab12970ab1b861cfd3602d761789066027df8",
"format": 1
},
{
- "name": "tests/integration/targets/setup_ec2/defaults",
+ "name": "tests/integration/targets/ec2_vpc_endpoint_service_info/meta",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/setup_ec2/defaults/main.yml",
+ "name": "tests/integration/targets/ec2_vpc_endpoint_service_info/meta/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "c11c67f4b66a693d3183caaee0abd97d2f02694e5998c0040b5f381dc68eeb3c",
+ "chksum_sha256": "1e8d632f9db7209967c5b2f6d734bede09841acc7b898dafc19f31c72cee9929",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_group",
+ "name": "tests/integration/targets/ec2_vpc_endpoint_service_info/aliases",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "461593e4cb3cfe358d76f487c60090ca33644c2eb8a3ed51243932f74c86ed31",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2_vpc_endpoint_service_info/defaults",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_group/tasks",
+ "name": "tests/integration/targets/ec2_vpc_endpoint_service_info/defaults/main.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "f124b988f1cf4242dfee3dd179059596c9074a8da01c9a45215d01b0d31b09ad",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/setup_ec2_facts",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_group/tasks/multi_account.yml",
+ "name": "tests/integration/targets/setup_ec2_facts/tasks",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/setup_ec2_facts/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "c5249cb541d660e400607344b991860732c733b0db1b02a471b9e1a531446a49",
+ "chksum_sha256": "7d03d30a328d5758d05cec67692df82d906021b2d9823c6a67e8c3f51cd057d1",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_group/tasks/group_info.yml",
+ "name": "tests/integration/targets/setup_ec2_facts/defaults",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/setup_ec2_facts/defaults/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "fe9c254d9db27cb08dd78f8a915affa46b8c29bd3910c8bf36fc6a6887f94dda",
+ "chksum_sha256": "9aaa58e2590a0bf6f419ff8277a158b68d87adcd5146808e1488528098ea4ec2",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_group/tasks/data_validation.yml",
+ "name": "tests/integration/targets/aws_s3",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/aws_s3/tasks",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/aws_s3/tasks/copy_object.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "abdc617375c38e979faec977c117e0222b562dd57790967cd70285eae414a564",
+ "chksum_sha256": "a431f4ba465fea093b19b34bb3a0c9cbe29756f8404ad6f7390a364f63229f37",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_group/tasks/ipv6_default_tests.yml",
+ "name": "tests/integration/targets/aws_s3/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "f5e31c187ae076f3fc2f56b32526515b419319301459030a2dfccb9ed48c5887",
+ "chksum_sha256": "4ac87893c59e5a78d0a9422b64d35fc4891385d645c71847d9831a3cdddee13a",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_group/tasks/diff_mode.yml",
+ "name": "tests/integration/targets/aws_s3/tasks/delete_bucket.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "a2e7c53854f63ff9d694e53c71d918577b9db2813e898844c1e218fb717be1f9",
+ "chksum_sha256": "01f3c58fdd7701a8ed0cb9e551ff8ebf3722220260dfcc63c4cb8d0a95f06e4f",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_group/tasks/main.yml",
+ "name": "tests/integration/targets/aws_s3/meta",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/aws_s3/meta/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "863ad64dc7b11c1587ac64e3492aef457e8a71b0e51b893bbf0896a3f4bc4171",
+ "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_group/tasks/rule_group_create.yml",
+ "name": "tests/integration/targets/aws_s3/files",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/aws_s3/files/test.png",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "3e6ba49498995b770799754f49f565d14c1b7d9ab50848beaccb65aa527100a6",
+ "chksum_sha256": "bae277f309fbffab9590300ccc1e75805c9795bbcef69edfda22c5b2327e12ba",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/aws_s3/files/hello.txt",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "c98c24b677eff44860afea6f493bbaec5bb1c4cbb209c6fc2bbb47f66ff2ad31",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/aws_s3/templates",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/aws_s3/templates/put-template.txt.j2",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "d8c9f0fc47011f7279babb0a29cb8f7812e4037c757d28e258d81ab7e82ca113",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/aws_s3/templates/policy.json.j2",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "a09d7c1dccacb2ea440736d61005e07bb469c9f04b153c4596bce1b586e14bd4",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/aws_s3/aliases",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "7289894f07e47a82972994ae89cfaef863f54310114e1c5d7122f7fc08bc19fe",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/aws_s3/defaults",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/aws_s3/defaults/main.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "975882873a49fcfb84e767de7134c3c36e82da151d2e2cf1d2ae234cac300599",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/module_utils_ec2",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/module_utils_ec2/roles",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/module_utils_ec2/roles/connect_to_aws",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_group/tasks/multi_nested_target.yml",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "c0e3bf023c0515b10dc60136e6764b152d38f2235df06d4c566d7140c8ebd47a",
+ "name": "tests/integration/targets/module_utils_ec2/roles/connect_to_aws/tasks",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_group/tasks/credential_tests.yml",
+ "name": "tests/integration/targets/module_utils_ec2/roles/connect_to_aws/tasks/endpoints.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "9a42387e9e3c5a0339b52d2d26e282a113c6017e8eaee5f29cf4ace3250f75b7",
+ "chksum_sha256": "421c1c2dc4df830214661bffd4f86ebfaa3a172ee7fa7e85a1ce6e933d1b9a72",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_group/tasks/numeric_protos.yml",
+ "name": "tests/integration/targets/module_utils_ec2/roles/connect_to_aws/tasks/profiles.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "255ae824e4a300df540242151e8cc8035b06646af0761009dcd4b68dfd807579",
+ "chksum_sha256": "707e210e0248bb78f07974b075bd9d4f51431f2548e07512a8208f5ae33978a3",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_group/tasks/egress_tests.yml",
+ "name": "tests/integration/targets/module_utils_ec2/roles/connect_to_aws/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "45866ac187b9b2d08e62c7192534c1fb4324d1074c7ce0e99f23af7a4542725b",
+ "chksum_sha256": "76d8276184fc168b0605e815cff209bf9c0ac2d44355897fa49df810f113db4c",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_group/tasks/ec2_classic.yml",
+ "name": "tests/integration/targets/module_utils_ec2/roles/connect_to_aws/tasks/credentials.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "a73d5c1b081c005988fef557e489304eaa5e3c336e75d4630930e316c64cf86c",
+ "chksum_sha256": "96175177c117770c4dcbae1f55c2ae806a20c53932c55cf84b11a92024e83b7c",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_group/meta",
+ "name": "tests/integration/targets/module_utils_ec2/roles/connect_to_aws/library",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_group/meta/main.yml",
+ "name": "tests/integration/targets/module_utils_ec2/roles/connect_to_aws/library/example_module.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "addf553f8dde7a6ec25f08591c282eee5491f9815963c51285a4c00294af2863",
+ "chksum_sha256": "257a81fed0b3e0550a69f3ca5b193b5e5c368cbc66ea71871fc43295f87b561a",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_group/aliases",
+ "name": "tests/integration/targets/module_utils_ec2/roles/connect_to_aws/meta",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/module_utils_ec2/roles/connect_to_aws/meta/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "232e1f5f95608b31744b51bd7546eea3404c39d8494dff7cca0d29020dc1b9cf",
+ "chksum_sha256": "6a143b3afe5a63a2455faaeaaa91684381fd151bb0564127c55053fe24a23b63",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_group/defaults",
+ "name": "tests/integration/targets/module_utils_ec2/roles/connect_to_aws/files",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_group/defaults/main.yml",
+ "name": "tests/integration/targets/module_utils_ec2/roles/connect_to_aws/files/isrg-x1.pem",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "0f708cce7788b24124e9ac7b36c00ebefe26cc05ce69404f5a6538b09a928e0a",
+ "chksum_sha256": "22b557a27055b33606b6559f37703928d3e4ad79f110b407d04986e1843543d1",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vpc_net",
+ "name": "tests/integration/targets/module_utils_ec2/roles/connect_to_aws/files/amazonroot.pem",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "2c43952ee9e000ff2acc4e2ed0897c0a72ad5fa72c3d934e81741cbd54f05bd1",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/module_utils_ec2/roles/ec2_connect",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vpc_net/tasks",
+ "name": "tests/integration/targets/module_utils_ec2/roles/ec2_connect/tasks",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vpc_net/tasks/main.yml",
+ "name": "tests/integration/targets/module_utils_ec2/roles/ec2_connect/tasks/endpoints.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "ff19b83ede3e2c9210e830f515b86c52f8efa3c3f5c2a58ca14e6f7682b24e56",
+ "chksum_sha256": "ab5dc8cea9fe6409d9e3b06981d1099b8f3fe5095cc456a4122409fcd375e9c4",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vpc_net/meta",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/module_utils_ec2/roles/ec2_connect/tasks/profiles.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "707e210e0248bb78f07974b075bd9d4f51431f2548e07512a8208f5ae33978a3",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vpc_net/meta/main.yml",
+ "name": "tests/integration/targets/module_utils_ec2/roles/ec2_connect/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "addf553f8dde7a6ec25f08591c282eee5491f9815963c51285a4c00294af2863",
+ "chksum_sha256": "76d8276184fc168b0605e815cff209bf9c0ac2d44355897fa49df810f113db4c",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vpc_net/aliases",
+ "name": "tests/integration/targets/module_utils_ec2/roles/ec2_connect/tasks/credentials.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "5e49f9e68fff79eefcd3d2214f47d0042418e8a500351e77dd07f3d35c751887",
+ "chksum_sha256": "3f64a7de906e34cbe83554d810e3fd34cd7a207cd50d6ce1639b332ec9c17ca6",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vpc_net/defaults",
+ "name": "tests/integration/targets/module_utils_ec2/roles/ec2_connect/library",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vpc_net/defaults/main.yml",
+ "name": "tests/integration/targets/module_utils_ec2/roles/ec2_connect/library/example_module.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "b2f7ee850fd4a897e2bb408586fde1de3fcf1e73159f7d2fdb4451621c413fe0",
+ "chksum_sha256": "e673db3241a636fc7adc6a5671bc26c6c9801c023107f522d219e4da4c2940ba",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_snapshot",
+ "name": "tests/integration/targets/module_utils_ec2/roles/ec2_connect/meta",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_snapshot/tasks",
+ "name": "tests/integration/targets/module_utils_ec2/roles/ec2_connect/meta/main.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "6a143b3afe5a63a2455faaeaaa91684381fd151bb0564127c55053fe24a23b63",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/module_utils_ec2/roles/ec2_connect/files",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_snapshot/tasks/main.yml",
+ "name": "tests/integration/targets/module_utils_ec2/roles/ec2_connect/files/isrg-x1.pem",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "0cec8efd5dc95f14ef416b07ac065649e2a2919dcdea4268319802fc428ac742",
+ "chksum_sha256": "22b557a27055b33606b6559f37703928d3e4ad79f110b407d04986e1843543d1",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_snapshot/aliases",
+ "name": "tests/integration/targets/module_utils_ec2/roles/ec2_connect/files/amazonroot.pem",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "80cb66fdb877239c68c5e09113e872a193adfbd8db7e2d0a2d80c716d0c03dae",
+ "chksum_sha256": "2c43952ee9e000ff2acc4e2ed0897c0a72ad5fa72c3d934e81741cbd54f05bd1",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_snapshot/defaults",
+ "name": "tests/integration/targets/module_utils_ec2/runme.sh",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "fbc264a0f164ac3c9f9ec885da5e7c9eda894f01d567b7e68e9031f556f50aa4",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/module_utils_ec2/meta",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_snapshot/defaults/main.yml",
+ "name": "tests/integration/targets/module_utils_ec2/meta/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "873903f9abb784a3e395685d19806c065347dad6f1ace7bc67638e3e842692e9",
+ "chksum_sha256": "addf553f8dde7a6ec25f08591c282eee5491f9815963c51285a4c00294af2863",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_key",
+ "name": "tests/integration/targets/module_utils_ec2/templates",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_key/tasks",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/module_utils_ec2/templates/boto_config.j2",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "ba7335ce0c8b8a32fc82bf7522a0f93d69190ff9895f4804985d2c08b7b3fd37",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_key/tasks/main.yml",
+ "name": "tests/integration/targets/module_utils_ec2/templates/session_credentials.yml.j2",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "4d883bda1a880e1cffe467cb1196bbe840f1673e27f1fd54db12c6a9aea0c3ea",
+ "chksum_sha256": "6104b125462eb5b6c5e5067e6c5b9041f0804c29755200fda62f0472a4a29f1e",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_key/meta",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/module_utils_ec2/aliases",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "7289894f07e47a82972994ae89cfaef863f54310114e1c5d7122f7fc08bc19fe",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_key/meta/main.yml",
+ "name": "tests/integration/targets/module_utils_ec2/setup.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "b2abccc927cfe77a04c442fe4cb680cef1163c594f5b1b91afbb7769b8d392cf",
+ "chksum_sha256": "d416d3ebcd9ea58c450a07ec98a78f42423bde3fdf2396971c8af836169e7b17",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_key/aliases",
+ "name": "tests/integration/targets/module_utils_ec2/ec2_connect.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "9819a2eafef4b14f0c325412689a7b0cc4a3e2b364fb4af1304783caa1971c9b",
+ "chksum_sha256": "12b062e10935591224b45ef3b4d8541f11b22708f5a36baf61053529ae75c5db",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_key/defaults",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/module_utils_ec2/inventory",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "4514e38376fcaaeb52cb4841f3aeeb15370a01099c19e4f2ed6a5f287a49b89a",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_key/defaults/main.yml",
+ "name": "tests/integration/targets/module_utils_ec2/connect_to_aws.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "26aad832330421916caec9fe34ebc8d1bfa90d867b66ad745f4c12ebe84cc3c3",
+ "chksum_sha256": "38a11bb8d17cc41967fa50689a92713ca14a7c53f896c4d40c772cd0854c6313",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_rds",
+ "name": "tests/integration/targets/lookup_aws_secret",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_rds/runme.sh",
+ "name": "tests/integration/targets/lookup_aws_secret/tasks",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/lookup_aws_secret/tasks/main.yaml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "20ef326de7a3c0f50b51c8598ed5edd3b58c7a1c408dc6b43fef9334d8379f91",
+ "chksum_sha256": "bceb9b224608a9b37a8953ee086bae72f265a45e97d61d19778c5b70fe1d89f9",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_rds/templates",
+ "name": "tests/integration/targets/lookup_aws_secret/aliases",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "7289894f07e47a82972994ae89cfaef863f54310114e1c5d7122f7fc08bc19fe",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2_tag",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_rds/templates/inventory_with_constructed.j2",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "0325381aecfd342ec4baa42347d91b8b2267b29bfb0b053443729370c906b749",
+ "name": "tests/integration/targets/ec2_tag/tasks",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_rds/templates/inventory.j2",
+ "name": "tests/integration/targets/ec2_tag/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "046bbce61938b67a8f51d9e99de64b82a588659550436b858d10975ddaf716ce",
+ "chksum_sha256": "e6fb06892e32d700d8ff0184ce094dd44bae709b2e5728e88c43a1706beb614b",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_rds/templates/inventory_with_cache.j2",
+ "name": "tests/integration/targets/ec2_tag/meta",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2_tag/meta/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "54ede14b2ec95c3c6606905775d3885120039da90433e409ad5002ad78c65d5b",
+ "chksum_sha256": "addf553f8dde7a6ec25f08591c282eee5491f9815963c51285a4c00294af2863",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_rds/playbooks",
+ "name": "tests/integration/targets/ec2_tag/vars",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_rds/playbooks/test_populating_inventory_with_constructed.yml",
+ "name": "tests/integration/targets/ec2_tag/vars/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "caf7202e4c03ab6654cb6a4aa507af3c4ac42832ea0ac76a936502c6283b4260",
+ "chksum_sha256": "79db6a6656e23e90127a8759ccb5371abb6b58652f871c5e12c72d9387bec871",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_rds/playbooks/test_populating_inventory.yml",
+ "name": "tests/integration/targets/ec2_tag/aliases",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "ba250c63bc4a8712322cca3f0b5b3d54057c8f53f464c2b27567a8f930cb301a",
+ "chksum_sha256": "c68801d5d9a4189a5e8f2bcc2b939f9d995786d81dcda63ab340812b8bfdfd26",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_rds/playbooks/test_refresh_inventory.yml",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "d23d9fc75960645599aacb9b4796dcdead6938b92ca9abc4188609a9335d39eb",
+ "name": "tests/integration/targets/ec2_tag/defaults",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_rds/playbooks/empty_inventory_config.yml",
+ "name": "tests/integration/targets/ec2_tag/defaults/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "31b80c73e9e0abea01d5836da0de13fa1bf5a391313b4543ad8bdd2adfd415cf",
+ "chksum_sha256": "b756aced2d19afadd3589244b1937cc90f8a96f709d5ea966f6a55a96bc4d3a3",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_rds/playbooks/populate_cache.yml",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "c408a3a38bbd609623acb08a01fc1c14638bc5984287ba81e7ff50938b8e73b7",
+ "name": "tests/integration/targets/ec2_vpc_igw",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_rds/playbooks/test_inventory_cache.yml",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "79c8d37631bfbc5a896140e0c9ca74f4144f51d5a161da353fab4026ac797d8c",
+ "name": "tests/integration/targets/ec2_vpc_igw/tasks",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_rds/playbooks/create_inventory_config.yml",
+ "name": "tests/integration/targets/ec2_vpc_igw/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "1424ca34811cf10a2176b56269860dcc9e82cdfc3e7bc91db10658aceb8f11e0",
+ "chksum_sha256": "0431dce6e7bd028af971303e7969cfcff5552a4c6d6bf6d6cf20c0f71d959f1a",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_rds/playbooks/test_invalid_aws_rds_inventory_config.yml",
+ "name": "tests/integration/targets/ec2_vpc_igw/aliases",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "b22eb19a90f4ac43ea966bd586df79c6ada6ef3e6a6e46df2f5b65cf82e4f00a",
+ "chksum_sha256": "99b1514cbe706973df0b2b91dea44eb9222a080d9bffe5768656c3bdbe42c056",
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_rds/test.aws_rds.yml",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "name": "tests/integration/targets/ec2_vpc_igw/defaults",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/inventory_aws_rds/aliases",
+ "name": "tests/integration/targets/ec2_vpc_igw/defaults/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "b6b7573399ec5210a67f93fa47cb62827da6839b4ce43490bbfa70d51e731259",
+ "chksum_sha256": "ba41c73b84da2a29f97375701091b2606096e9a07d3c3c0514a73f5e79c0fed2",
"format": 1
},
{
- "name": "tests/integration/targets/setup_remote_tmp_dir",
+ "name": "tests/integration/targets/ec2_ami",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/setup_remote_tmp_dir/tasks",
+ "name": "tests/integration/targets/ec2_ami/tasks",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/setup_remote_tmp_dir/tasks/default-cleanup.yml",
+ "name": "tests/integration/targets/ec2_ami/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e273324ab90d72180a971d99b9ab69f08689c8be2e6adb991154fc294cf1056e",
+ "chksum_sha256": "c2804af534b1ea00bc95dc548ed7a183a925e1f9cd5fbd2a7113b944de8ec627",
"format": 1
},
{
- "name": "tests/integration/targets/setup_remote_tmp_dir/tasks/windows.yml",
+ "name": "tests/integration/targets/ec2_ami/meta",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2_ami/meta/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e29ee6a8db94d6de88c8458762f594f05d906f454f7c9977fd618d52b09e52f0",
+ "chksum_sha256": "513205535169d91c98bcdbeab464e21787b6d9ae122c3eaebb1933591d615715",
"format": 1
},
{
- "name": "tests/integration/targets/setup_remote_tmp_dir/tasks/main.yml",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "766ab141899717320ba54e2bb1a6ba8cbc3cc7642d0023670154b49981ed1a91",
+ "name": "tests/integration/targets/ec2_ami/vars",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/setup_remote_tmp_dir/tasks/windows-cleanup.yml",
+ "name": "tests/integration/targets/ec2_ami/vars/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "3fd85bd6c3cf51c061eb221197d5653e5da0e101543b3c037f5066d6c73b1501",
+ "chksum_sha256": "8ac9125dea1e9dfcac93d6142fe3deb7f2d84c6f25c9c5ed72718073ad304fe9",
"format": 1
},
{
- "name": "tests/integration/targets/setup_remote_tmp_dir/tasks/default.yml",
+ "name": "tests/integration/targets/ec2_ami/aliases",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "2441ac1753320d2cd3bea299c160540e6ae31739ed235923ca478284d1fcfe09",
+ "chksum_sha256": "1931c614be41a33f3a57f0706aec1983e7787f891321385ea14097856cc6fa69",
"format": 1
},
{
- "name": "tests/integration/targets/setup_remote_tmp_dir/handlers",
+ "name": "tests/integration/targets/ec2_ami/defaults",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/setup_remote_tmp_dir/handlers/main.yml",
+ "name": "tests/integration/targets/ec2_ami/defaults/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "050157a29c48915cf220b3cdcf5a032e53e359bdc4a210cd457c4836e8e32a4d",
+ "chksum_sha256": "fda077db8f4b5063b06b862d71449c2d0dc861c927c5d5a6c048f491dc2924b6",
"format": 1
},
{
- "name": "tests/integration/targets/aws_s3",
+ "name": "tests/integration/targets/ec2_vol",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/aws_s3/templates",
+ "name": "tests/integration/targets/ec2_vol/tasks",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/aws_s3/templates/put-template.txt.j2",
+ "name": "tests/integration/targets/ec2_vol/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "d8c9f0fc47011f7279babb0a29cb8f7812e4037c757d28e258d81ab7e82ca113",
+ "chksum_sha256": "da55c92e770f03c0672df9eded6d54fbb3d83d22aa15e6b694635cbbd60e8f92",
"format": 1
},
{
- "name": "tests/integration/targets/aws_s3/tasks",
+ "name": "tests/integration/targets/ec2_vol/meta",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/aws_s3/tasks/main.yml",
+ "name": "tests/integration/targets/ec2_vol/meta/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "4f29d8ef3e5e11de54a4547d6808a5c214e8452af9a1cb2849f7cb619d8f4432",
+ "chksum_sha256": "d9d9471e0b7c4b3af25303416606ec24bee117d9d75d4b24d81a1daff1d6832e",
"format": 1
},
{
- "name": "tests/integration/targets/aws_s3/meta",
+ "name": "tests/integration/targets/ec2_vol/aliases",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "b925975bc65588c03b10f9b1c7bb812add0a4c802f6ea5ed9fc96ee1ca7f0c2b",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2_vol/defaults",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/aws_s3/meta/main.yml",
+ "name": "tests/integration/targets/ec2_vol/defaults/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "chksum_sha256": "6406c0bbbe832898fc958d854f7ced5ce2764f9a27212deee526e20c884b4256",
"format": 1
},
{
- "name": "tests/integration/targets/aws_s3/aliases",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "7be7e4db58d37a19ccc989c5ebf0be883e51742a81941e9d29fc6055885fc99d",
+ "name": "tests/integration/targets/ec2_eni",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/aws_s3/files",
+ "name": "tests/integration/targets/ec2_eni/tasks",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/aws_s3/files/test.png",
+ "name": "tests/integration/targets/ec2_eni/tasks/test_modifying_tags.yaml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "bae277f309fbffab9590300ccc1e75805c9795bbcef69edfda22c5b2327e12ba",
+ "chksum_sha256": "e1cf49f0f4a7aa392e797a16f9ccd76469e4a34450a761db0dda611d78eed447",
"format": 1
},
{
- "name": "tests/integration/targets/aws_s3/files/hello.txt",
+ "name": "tests/integration/targets/ec2_eni/tasks/test_attachment.yaml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "c98c24b677eff44860afea6f493bbaec5bb1c4cbb209c6fc2bbb47f66ff2ad31",
- "format": 1
- },
- {
- "name": "tests/integration/targets/aws_s3/defaults",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "chksum_sha256": "fc4e545021465b0f55e002bc6558f76a56d7069e5d434d7168238de2600d5db9",
"format": 1
},
{
- "name": "tests/integration/targets/aws_s3/defaults/main.yml",
+ "name": "tests/integration/targets/ec2_eni/tasks/main.yaml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "6540dfee340bb1255b63cae84f0ee9df35ceaa7443df77c396c381d89c59e858",
- "format": 1
- },
- {
- "name": "tests/integration/targets/module_utils_ec2",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "chksum_sha256": "a2b7135e5accfb22690a635a9c009a857278fb393d359d6f0d413309d5e9344e",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/runme.sh",
+ "name": "tests/integration/targets/ec2_eni/tasks/test_eni_basic_creation.yaml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "fbc264a0f164ac3c9f9ec885da5e7c9eda894f01d567b7e68e9031f556f50aa4",
+ "chksum_sha256": "2bf26acaaceb9d041e9064ffc250c16ef93e8d5b06dde05b242de10632d9d8a5",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/inventory",
+ "name": "tests/integration/targets/ec2_eni/tasks/test_deletion.yaml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "4514e38376fcaaeb52cb4841f3aeeb15370a01099c19e4f2ed6a5f287a49b89a",
+ "chksum_sha256": "81b2131235b4b108521ecc267a90aaf2b9e8ec03a04bd97b667d27e7673b4aed",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/templates",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/ec2_eni/tasks/test_modifying_delete_on_termination.yaml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "5254eceba1d8492a0667fddf8576099ce3ce3a2bdfea899938cdadac61bf0fe9",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/templates/boto_config.j2",
+ "name": "tests/integration/targets/ec2_eni/tasks/test_ipaddress_assign.yaml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "ba7335ce0c8b8a32fc82bf7522a0f93d69190ff9895f4804985d2c08b7b3fd37",
+ "chksum_sha256": "d30bd3ab2a60e469d096a3c3dbfaa7a14309efe20674bf31db2b1c84eea4ca5c",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/templates/session_credentials.yml.j2",
+ "name": "tests/integration/targets/ec2_eni/tasks/test_modifying_source_dest_check.yaml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "6104b125462eb5b6c5e5067e6c5b9041f0804c29755200fda62f0472a4a29f1e",
+ "chksum_sha256": "fe62b6c02b10a2cc9afd20df974e512cd4aa28eee45803f143caffa3834cebaf",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/meta",
+ "name": "tests/integration/targets/ec2_eni/meta",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/meta/main.yml",
+ "name": "tests/integration/targets/ec2_eni/meta/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "addf553f8dde7a6ec25f08591c282eee5491f9815963c51285a4c00294af2863",
+ "chksum_sha256": "e1d851188d9e6d7d833aabae61c46f0f9421f9138c6b348905598866242259c8",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/ec2_connect.yml",
+ "name": "tests/integration/targets/ec2_eni/aliases",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "12b062e10935591224b45ef3b4d8541f11b22708f5a36baf61053529ae75c5db",
+ "chksum_sha256": "9159c859ae9e7385c9e0765a72d38715c84dc1dd3323fef80625ad769a2b430f",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/setup.yml",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "d416d3ebcd9ea58c450a07ec98a78f42423bde3fdf2396971c8af836169e7b17",
+ "name": "tests/integration/targets/ec2_eni/defaults",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/connect_to_aws.yml",
+ "name": "tests/integration/targets/ec2_eni/defaults/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "38a11bb8d17cc41967fa50689a92713ca14a7c53f896c4d40c772cd0854c6313",
+ "chksum_sha256": "f03fac61ee3fcda5b1602f1ffee6f24159080797c7c50b725b5ba1fc3d888ca1",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/roles",
+ "name": "tests/integration/targets/ec2_instance",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/roles/ec2_connect",
+ "name": "tests/integration/targets/ec2_instance/roles",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/roles/ec2_connect/tasks",
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/roles/ec2_connect/tasks/profiles.yml",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "707e210e0248bb78f07974b075bd9d4f51431f2548e07512a8208f5ae33978a3",
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance/tasks",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/roles/ec2_connect/tasks/credentials.yml",
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/uptime.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "3f64a7de906e34cbe83554d810e3fd34cd7a207cd50d6ce1639b332ec9c17ca6",
+ "chksum_sha256": "5b6e3e51a952012bf1a84ebba4a04b5b923ffcd0b08691a052629e03f51dd48a",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/roles/ec2_connect/tasks/main.yml",
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/env_setup.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "76d8276184fc168b0605e815cff209bf9c0ac2d44355897fa49df810f113db4c",
+ "chksum_sha256": "1547abe0885a726723737654d6f9536acccd6c07808384207c8e96d061d73e4d",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/roles/ec2_connect/tasks/endpoints.yml",
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/cpu_options.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "ab5dc8cea9fe6409d9e3b06981d1099b8f3fe5095cc456a4122409fcd375e9c4",
- "format": 1
- },
- {
- "name": "tests/integration/targets/module_utils_ec2/roles/ec2_connect/meta",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "chksum_sha256": "ea52edf3afb8511a18ee32f49d863a194001e75222b73aa278c8377970970081",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/roles/ec2_connect/meta/main.yml",
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/env_cleanup.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "6a143b3afe5a63a2455faaeaaa91684381fd151bb0564127c55053fe24a23b63",
+ "chksum_sha256": "ba34625ad63e3757415be91c3ef9fa890b461aee7de131b284860c2f56b3dd4a",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/roles/ec2_connect/library",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/default_vpc_tests.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "2511a44a4a05e27a22371869c9e85d9c55c0c288e36be11cd52cbba91bc14de4",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/roles/ec2_connect/library/example_module.py",
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/tags_and_vpc_settings.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e673db3241a636fc7adc6a5671bc26c6c9801c023107f522d219e4da4c2940ba",
+ "chksum_sha256": "34f1abf0ac6a235de7a9819415eabd7a9eb1b8a28267c9cc5bb1dd21e904bbeb",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/roles/ec2_connect/files",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/metadata_options.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "4ceba1d06b1d17f1d808817bc7da578e31a6b551f5d3314978e710deabae67c5",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/roles/ec2_connect/files/amazonroot.pem",
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/block_devices.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "2c43952ee9e000ff2acc4e2ed0897c0a72ad5fa72c3d934e81741cbd54f05bd1",
+ "chksum_sha256": "8e88a6167d048dcd5d2fc2a94f421ae131189f6f91a0bc5d60d00149ee708e62",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/roles/ec2_connect/files/isrg-x1.pem",
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/find_ami.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "22b557a27055b33606b6559f37703928d3e4ad79f110b407d04986e1843543d1",
+ "chksum_sha256": "eea2f9e22c8a7d663993722954278db5f6e8e36aa8e1fcb623c11a6ff55bfcbe",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/roles/connect_to_aws",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/termination_protection.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "aabdec261b1450c762288a6d531e0528637f3f1b52fd68197b8322c265757914",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/roles/connect_to_aws/tasks",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/instance_minimal.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "ec4f8e2376f0d6e2b7fa718ae9530e980f5ff20e4b45eaaeb32ded0810663408",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/roles/connect_to_aws/tasks/profiles.yml",
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "707e210e0248bb78f07974b075bd9d4f51431f2548e07512a8208f5ae33978a3",
+ "chksum_sha256": "8b2cda882415602e233d8fe6277951f51039c9253054a1bc16e1fbb701c10b12",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/roles/connect_to_aws/tasks/credentials.yml",
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/ebs_optimized.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "96175177c117770c4dcbae1f55c2ae806a20c53932c55cf84b11a92024e83b7c",
+ "chksum_sha256": "102523edad46fc9c31cc173e3b2ab7eda70a065c365b5d2c5255b89887050a33",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/roles/connect_to_aws/tasks/main.yml",
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/iam_instance_role.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "76d8276184fc168b0605e815cff209bf9c0ac2d44355897fa49df810f113db4c",
+ "chksum_sha256": "d71dc7a042d12fa3d18e4669605c38096442e6ad5052d47191c184b70863075d",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/roles/connect_to_aws/tasks/endpoints.yml",
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/security_group.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "421c1c2dc4df830214661bffd4f86ebfaa3a172ee7fa7e85a1ce6e933d1b9a72",
+ "chksum_sha256": "132f8ae5c129b7ca2b2a650fa8d2b27eb67d81566a637d6b5587759040dab1e6",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/roles/connect_to_aws/meta",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/instance_no_wait.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "20bc52d7fb0278c5038932eb7791dd7f70a5bedf1d0a6cd064a49c2de7e31046",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/roles/connect_to_aws/meta/main.yml",
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/checkmode_tests.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "6a143b3afe5a63a2455faaeaaa91684381fd151bb0564127c55053fe24a23b63",
+ "chksum_sha256": "692e03ff5986be87ad202edccd809308190e3dddf04050f4536d6f445bcb20c3",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/roles/connect_to_aws/library",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/external_resource_attach.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "053b41c22ec2f2a77cecc4d2c37f3b1348e35239b35a4bf854947e4038db888c",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/roles/connect_to_aws/library/example_module.py",
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/state_config_updates.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "257a81fed0b3e0550a69f3ca5b193b5e5c368cbc66ea71871fc43295f87b561a",
+ "chksum_sha256": "7222b1c5cec056bf8678aa519d26872f3d7e0cd3ca0c62fbd980895e3fac6eb8",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/roles/connect_to_aws/files",
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance/meta",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/roles/connect_to_aws/files/amazonroot.pem",
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance/meta/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "2c43952ee9e000ff2acc4e2ed0897c0a72ad5fa72c3d934e81741cbd54f05bd1",
+ "chksum_sha256": "b473037dc36d1dcddc856da097d5e9ef435482ac810b0864dbb141e32104f47d",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/roles/connect_to_aws/files/isrg-x1.pem",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "22b557a27055b33606b6559f37703928d3e4ad79f110b407d04986e1843543d1",
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance/files",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_ec2/aliases",
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance/files/assume-role-policy.json",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "7be7e4db58d37a19ccc989c5ebf0be883e51742a81941e9d29fc6055885fc99d",
+ "chksum_sha256": "f1950c6acf71cbeef3bbb546a07e9c19f65e15cf71ec24d06af26532c9dfab68",
"format": 1
},
{
- "name": "tests/integration/targets/ec2",
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance/defaults",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2/tasks",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/ec2_instance/roles/ec2_instance/defaults/main.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "da99562f16e63a475152187a5beca22897445bf0aaf8a0079e7d8e2dae8d3535",
"format": 1
},
{
- "name": "tests/integration/targets/ec2/tasks/main.yml",
+ "name": "tests/integration/targets/ec2_instance/runme.sh",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "2ac244b40203f0e9432527c3373b0cd55b08f0c4581ef2cafb51eebf5bf07068",
+ "chksum_sha256": "d2e53b13c18d9f57b9ac05cf209ab9ea0db765e0b8c4e0698e26747cef903d23",
"format": 1
},
{
- "name": "tests/integration/targets/ec2/meta",
+ "name": "tests/integration/targets/ec2_instance/meta",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2/meta/main.yml",
+ "name": "tests/integration/targets/ec2_instance/meta/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "addf553f8dde7a6ec25f08591c282eee5491f9815963c51285a4c00294af2863",
+ "chksum_sha256": "7aff756a1f2b4c502e4c245ae4ff1c6b4d66e163156dd0760b1d6c42d3e3d392",
"format": 1
},
{
- "name": "tests/integration/targets/ec2/aliases",
+ "name": "tests/integration/targets/ec2_instance/aliases",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "7be7e4db58d37a19ccc989c5ebf0be883e51742a81941e9d29fc6055885fc99d",
+ "chksum_sha256": "43a79f9e03c470d61c291d38a590c1e4cec779164858faa4cb464f7cc9e5dd4f",
"format": 1
},
{
- "name": "tests/integration/targets/ec2/defaults",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/ec2_instance/main.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "502f20c73c82a63a03c7b10675eab08f4fff4baca6bd937c3557357d12e9f8f6",
"format": 1
},
{
- "name": "tests/integration/targets/ec2/defaults/main.yml",
+ "name": "tests/integration/targets/ec2_instance/inventory",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "6ac3c7909ab221450ea97faf81f0951713c66b46192253f34b254b473369b7fd",
+ "chksum_sha256": "3d49dc4b0db2ad4f9b88b69bd882274ed2f77ad507eef48c7725a0441b8bf437",
"format": 1
},
{
- "name": "tests/integration/targets/aws_az_info",
+ "name": "tests/integration/targets/ec2_vpc_dhcp_option",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/aws_az_info/tasks",
+ "name": "tests/integration/targets/ec2_vpc_dhcp_option/tasks",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/aws_az_info/tasks/tests.yml",
+ "name": "tests/integration/targets/ec2_vpc_dhcp_option/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "5b3094196fc15f7a04f146d6f5340557c07a236179983a1a22199d4fcf2171a3",
+ "chksum_sha256": "26cc856de01f82f19ab3e52dbed151f02055b8fbb2f186eb3c15d0218e5df571",
"format": 1
},
{
- "name": "tests/integration/targets/aws_az_info/tasks/main.yml",
+ "name": "tests/integration/targets/ec2_vpc_dhcp_option/aliases",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "bdfd79f34b90502c52cc59580cf68374eb252cce697bdb26b4f7925638ecd4b6",
+ "chksum_sha256": "7289894f07e47a82972994ae89cfaef863f54310114e1c5d7122f7fc08bc19fe",
"format": 1
},
{
- "name": "tests/integration/targets/aws_az_info/meta",
+ "name": "tests/integration/targets/ec2_vpc_dhcp_option/defaults",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/aws_az_info/meta/main.yml",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "1e8d632f9db7209967c5b2f6d734bede09841acc7b898dafc19f31c72cee9929",
- "format": 1
- },
- {
- "name": "tests/integration/targets/aws_az_info/main.yml",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "47d7f0170663266b9c80b357a113128c721f64f7782736c399471404ef6170be",
- "format": 1
- },
- {
- "name": "tests/integration/targets/aws_az_info/aliases",
+ "name": "tests/integration/targets/ec2_vpc_dhcp_option/defaults/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "46949b818113f294754176e1bbd4620f3f320a9d985fd690da597e002db91ef6",
+ "chksum_sha256": "a1a63e4e346ae31af24867279086058701f3bdb09586918e6451fc4766459488",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vpc_dhcp_option",
+ "name": "tests/integration/targets/lookup_aws_account_attribute",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vpc_dhcp_option/tasks",
+ "name": "tests/integration/targets/lookup_aws_account_attribute/tasks",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vpc_dhcp_option/tasks/main.yml",
+ "name": "tests/integration/targets/lookup_aws_account_attribute/tasks/main.yaml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "490bb182ad2f316922f891167aa5f62da65456b27dedb63c524194fa0cfb87bd",
+ "chksum_sha256": "6500060d4ee06642300066f277634203e32639982b32220c5d31e96d775a6cbd",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vpc_dhcp_option/aliases",
+ "name": "tests/integration/targets/lookup_aws_account_attribute/aliases",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "dfd08779f7170cd572bd30bd277562ab0ba7d35bf6f94c67bf194ffb04c77ea9",
+ "chksum_sha256": "7289894f07e47a82972994ae89cfaef863f54310114e1c5d7122f7fc08bc19fe",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vpc_dhcp_option/defaults",
+ "name": "tests/integration/targets/module_utils_waiter",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vpc_dhcp_option/defaults/main.yml",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "a1a63e4e346ae31af24867279086058701f3bdb09586918e6451fc4766459488",
+ "name": "tests/integration/targets/module_utils_waiter/roles",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_elb_lb",
+ "name": "tests/integration/targets/module_utils_waiter/roles/get_waiter",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_elb_lb/tasks",
+ "name": "tests/integration/targets/module_utils_waiter/roles/get_waiter/tasks",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_elb_lb/tasks/main.yml",
+ "name": "tests/integration/targets/module_utils_waiter/roles/get_waiter/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "acfd289610d14642bef5ae982d7098b7259a5b1ac01f18c835721b0aa612e7c3",
+ "chksum_sha256": "0ba97256d76043838f14cc1e067aeb46643d4c1d40defca3f8332fe8c2de157a",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_elb_lb/meta",
+ "name": "tests/integration/targets/module_utils_waiter/roles/get_waiter/library",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_elb_lb/meta/main.yml",
+ "name": "tests/integration/targets/module_utils_waiter/roles/get_waiter/library/example_module.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "addf553f8dde7a6ec25f08591c282eee5491f9815963c51285a4c00294af2863",
+ "chksum_sha256": "bc44c40027380e6a9a3a956be9f78bec67c8380287860c7db30f0f03d9e76cee",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_elb_lb/vars",
+ "name": "tests/integration/targets/module_utils_waiter/roles/get_waiter/meta",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_elb_lb/vars/main.yml",
+ "name": "tests/integration/targets/module_utils_waiter/roles/get_waiter/meta/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "a37372c5df29a85df3d7759887f11d5caceba506dfd51e32059f86f8fa879c8b",
+ "chksum_sha256": "6a143b3afe5a63a2455faaeaaa91684381fd151bb0564127c55053fe24a23b63",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_elb_lb/aliases",
+ "name": "tests/integration/targets/module_utils_waiter/runme.sh",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "9819a2eafef4b14f0c325412689a7b0cc4a3e2b364fb4af1304783caa1971c9b",
+ "chksum_sha256": "b36bef221fbf1264fb6d387a52e5ca42d167ef7973225a30c7cd6005d6494ca4",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_elb_lb/defaults",
+ "name": "tests/integration/targets/module_utils_waiter/meta",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_elb_lb/defaults/main.yml",
+ "name": "tests/integration/targets/module_utils_waiter/meta/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "06a9acc0627c1ed030eca26a0013f8044e3001105a91983dd26a9a1f55599106",
+ "chksum_sha256": "addf553f8dde7a6ec25f08591c282eee5491f9815963c51285a4c00294af2863",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_ami",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/module_utils_waiter/aliases",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "7289894f07e47a82972994ae89cfaef863f54310114e1c5d7122f7fc08bc19fe",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_ami/tasks",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/module_utils_waiter/main.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "2013d9803d3dfbf66388e1ef4228f2d74d348f524c01c3018bc7b464c0ec88b8",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_ami/tasks/main.yml",
+ "name": "tests/integration/targets/module_utils_waiter/inventory",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e43ccfc02d4588f0df454c2d08c4a0afe8f4c058a5abc3ad7bedce2fbec344cf",
+ "chksum_sha256": "4514e38376fcaaeb52cb4841f3aeeb15370a01099c19e4f2ed6a5f287a49b89a",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_ami/meta",
+ "name": "tests/integration/targets/inventory_aws_ec2",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_ami/meta/main.yml",
+ "name": "tests/integration/targets/inventory_aws_ec2/runme.sh",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "addf553f8dde7a6ec25f08591c282eee5491f9815963c51285a4c00294af2863",
+ "chksum_sha256": "77f0aee24edf123eeb5d2537829366a24d3dfdcda5dea2adb15b152c2c4ce88d",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_ami/vars",
+ "name": "tests/integration/targets/inventory_aws_ec2/playbooks",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_ami/vars/main.yml",
+ "name": "tests/integration/targets/inventory_aws_ec2/playbooks/populate_cache.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "8ac9125dea1e9dfcac93d6142fe3deb7f2d84c6f25c9c5ed72718073ad304fe9",
+ "chksum_sha256": "498cfcf7efc5761cba09e57a85481daa5f4624efba1e16d0ebb41b7bca5ee0ac",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_ami/aliases",
+ "name": "tests/integration/targets/inventory_aws_ec2/playbooks/test_invalid_aws_ec2_inventory_config.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "4ab18b39f059ed5526197677441560c3d4d1be8ee846a32da74893a3e5f133bb",
+ "chksum_sha256": "4a57efe1ec08416ea90c3a80c03d0d3240a928933d5f46251acf97c9375b0a0e",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_ami/defaults",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "b106059917e67c3143a9f6d8142f7e5495bb9a81593a645c1497123bc556f534",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_ami/defaults/main.yml",
+ "name": "tests/integration/targets/inventory_aws_ec2/playbooks/empty_inventory_config.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "9623c681ca59d28bbc7ba1e8f4d03d2a8205f0cfd6686eb7ba87a820fb619303",
+ "chksum_sha256": "44a9f1885f675a872bebe0a1af0c40551688c8ccc1aeb700e74926a8edf69278",
"format": 1
},
{
- "name": "tests/integration/targets/aws_caller_info",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/inventory_aws_ec2/playbooks/create_inventory_config.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "605a7f63f212908dfca5b8f52a01def2c2cb06500c4c4bc33f7356d6b4eb35d9",
"format": 1
},
{
- "name": "tests/integration/targets/aws_caller_info/tasks",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory_with_concatenation.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "c462b0dbec58aa61c0b73aaed918ef2a7d68b2ec9faa18d1d522f78057411283",
"format": 1
},
{
- "name": "tests/integration/targets/aws_caller_info/tasks/main.yaml",
+ "name": "tests/integration/targets/inventory_aws_ec2/playbooks/setup.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "ee3b4355d2876a8648831474ce0b430c22c21035551ba77c0a125f4e2866a0e8",
+ "chksum_sha256": "e456a7c07087e283ab440e316b6e4563381c3e5dc84daf1130c6b288696a2a1c",
"format": 1
},
{
- "name": "tests/integration/targets/aws_caller_info/aliases",
+ "name": "tests/integration/targets/inventory_aws_ec2/playbooks/tear_down.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "9819a2eafef4b14f0c325412689a7b0cc4a3e2b364fb4af1304783caa1971c9b",
+ "chksum_sha256": "fb7b069cb3653ca58ed6793a4e85a414ea6e9843fba4547a2768367fc4fbe7c3",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vpc_subnet",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/inventory_aws_ec2/playbooks/test_inventory_cache.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "f343af936f7105a81f703b55b5ed86bd3aab8b35ca6dc0672c5e5cca8dda3c16",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vpc_subnet/tasks",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory_with_constructed.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "99c76c175488e05045ae6030db411dfdbca54607d087756d5906d723eaccb9a5",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vpc_subnet/tasks/main.yml",
+ "name": "tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory_with_include_or_exclude_filters.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "8b64ebd46dd54314cfd660b03486d5786ab663547a43b5cca0a653159b709fc1",
+ "chksum_sha256": "dee03378a2649c212a5c9b2c27407bdb928944740ff3a1e917a106e45c29aef0",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vpc_subnet/meta",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/inventory_aws_ec2/playbooks/create_environment_script.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "7fab6eae1dd6c175c638a8d241f65106938c898687eb864e8c2dd1c5b7761ed2",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vpc_subnet/meta/main.yml",
+ "name": "tests/integration/targets/inventory_aws_ec2/playbooks/test_refresh_inventory.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "addf553f8dde7a6ec25f08591c282eee5491f9815963c51285a4c00294af2863",
+ "chksum_sha256": "06f7fada2b3d684329de1bce7e46970733ae4614ffc7878fa406cf45bdc46cda",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vpc_subnet/aliases",
+ "name": "tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory_with_use_contrib_script_keys.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "1614b27ae5be38ffd45279f1196cb4eff20b11505203386e69d71d600834c6e8",
+ "chksum_sha256": "e355fd30d06e1fe489a771f376736308eb0f573227b746fd668d0b9b9017e113",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vpc_subnet/defaults",
+ "name": "tests/integration/targets/inventory_aws_ec2/templates",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_vpc_subnet/defaults/main.yml",
+ "name": "tests/integration/targets/inventory_aws_ec2/templates/inventory.yml.j2",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "8b0b9969b59c016232538a26a80f2befd29e452824aeb16723213291153e035a",
+ "chksum_sha256": "7b9771837ad83a89cc76088bf8aa09be6f6d5e8c980f3ed4d72fb41fcb192af6",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_waiter",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/inventory_aws_ec2/templates/inventory_with_concatenation.yml.j2",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "6777b9eaea5da24615ec813fcab4f75cfd6fb02870eff6021fad80ca104f505b",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_waiter/runme.sh",
+ "name": "tests/integration/targets/inventory_aws_ec2/templates/inventory_with_constructed.yml.j2",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "b36bef221fbf1264fb6d387a52e5ca42d167ef7973225a30c7cd6005d6494ca4",
+ "chksum_sha256": "43bad11c0867b7e50eba2a7319c390c4014e8f14817bf4e7ceb415e2dddc0f32",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_waiter/inventory",
+ "name": "tests/integration/targets/inventory_aws_ec2/templates/inventory_with_cache.yml.j2",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "4514e38376fcaaeb52cb4841f3aeeb15370a01099c19e4f2ed6a5f287a49b89a",
+ "chksum_sha256": "212ab399c1d1f49130ad7755b99b619def84d42129f1a7d4e66ba24fcbd76c10",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_waiter/meta",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/inventory_aws_ec2/templates/inventory_with_use_contrib_script_keys.yml.j2",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "2cfe0d4af5b96e2dba042d80cb5de7dd62eb3eff3d1203486aadf76a9119c881",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_waiter/meta/main.yml",
+ "name": "tests/integration/targets/inventory_aws_ec2/templates/inventory_with_template.yml.j2",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "addf553f8dde7a6ec25f08591c282eee5491f9815963c51285a4c00294af2863",
+ "chksum_sha256": "afa5f9d7fc119409ecb2e6b5f45409ed738750034d7d96fc34580d64dd84b811",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_waiter/main.yml",
+ "name": "tests/integration/targets/inventory_aws_ec2/templates/inventory_with_include_or_exclude_filters.yml.j2",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "2013d9803d3dfbf66388e1ef4228f2d74d348f524c01c3018bc7b464c0ec88b8",
+ "chksum_sha256": "921fb290a6b74b12988cce58a07ca052396ebd9d8313e7affca103a32b27b022",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_waiter/roles",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/inventory_aws_ec2/aliases",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "7289894f07e47a82972994ae89cfaef863f54310114e1c5d7122f7fc08bc19fe",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_waiter/roles/get_waiter",
+ "name": "tests/integration/targets/inventory_aws_ec2/test.aws_ec2.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/elb_classic_lb",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_waiter/roles/get_waiter/tasks",
+ "name": "tests/integration/targets/elb_classic_lb/tasks",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_waiter/roles/get_waiter/tasks/main.yml",
+ "name": "tests/integration/targets/elb_classic_lb/tasks/simple_cross_az.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "0ba97256d76043838f14cc1e067aeb46643d4c1d40defca3f8332fe8c2de157a",
+ "chksum_sha256": "e8b93f83e77ab10e1582f3a43b75f65d58a538bc9b77d6b6c4ca843f05518bb2",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_waiter/roles/get_waiter/meta",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/elb_classic_lb/tasks/simple_logging.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "3430cacaba58d46881257a2937543d212391d6ab5224a9664ed1b91381bcf42b",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_waiter/roles/get_waiter/meta/main.yml",
+ "name": "tests/integration/targets/elb_classic_lb/tasks/setup_instances.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "6a143b3afe5a63a2455faaeaaa91684381fd151bb0564127c55053fe24a23b63",
+ "chksum_sha256": "3195bd8634e5ad33e371dad0a41e0d4cd1a04e45011fa0e1d2c422a8f5a83221",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_waiter/roles/get_waiter/library",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/elb_classic_lb/tasks/describe_region.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "b225677a511f60f3a5588079fefafa7b503f26eb4f8e09d400462ae33a28400a",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_waiter/roles/get_waiter/library/example_module.py",
+ "name": "tests/integration/targets/elb_classic_lb/tasks/basic_public.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "bc44c40027380e6a9a3a956be9f78bec67c8380287860c7db30f0f03d9e76cee",
+ "chksum_sha256": "21037708db901115f2e6cde37fac4f71998074c76af61e3bdf5f747914754236",
"format": 1
},
{
- "name": "tests/integration/targets/module_utils_waiter/aliases",
+ "name": "tests/integration/targets/elb_classic_lb/tasks/setup_vpc.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "9819a2eafef4b14f0c325412689a7b0cc4a3e2b364fb4af1304783caa1971c9b",
+ "chksum_sha256": "2250983fedb6d723cbc06a5e5cd5be17d9e03ea9043e10756e9960038107d73a",
"format": 1
},
{
- "name": "tests/integration/targets/s3_bucket",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/elb_classic_lb/tasks/simple_stickiness.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "59f0345b108aa516d87e25c13f8cefc5367f2d2b6eff55f09854556435343db8",
"format": 1
},
{
- "name": "tests/integration/targets/s3_bucket/runme.sh",
+ "name": "tests/integration/targets/elb_classic_lb/tasks/cleanup_instances.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "d2e53b13c18d9f57b9ac05cf209ab9ea0db765e0b8c4e0698e26747cef903d23",
+ "chksum_sha256": "02b1c64c3cd27653e179cab4f84f5a7641b06aaf3dcaf8bc85c14b522a9016fb",
"format": 1
},
{
- "name": "tests/integration/targets/s3_bucket/inventory",
+ "name": "tests/integration/targets/elb_classic_lb/tasks/cleanup_vpc.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "88b92b8c6612050e67525f2284089924f4fbd89d528ead7cba3f8583ec0770c3",
+ "chksum_sha256": "4bcc5651ada4b1cba51d8969c56edce1eeac0d0349aa51fe89e4fc63add70cc0",
"format": 1
},
{
- "name": "tests/integration/targets/s3_bucket/meta",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/elb_classic_lb/tasks/simple_instances.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "869acc8fdbbbb0b468fd981987f250f6b7fde86899d8aceaa675ac11a5e48d62",
"format": 1
},
{
- "name": "tests/integration/targets/s3_bucket/meta/main.yml",
+ "name": "tests/integration/targets/elb_classic_lb/tasks/simple_proxy_policy.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "f8e70400d05c6717b0a358ccb693d782e120d9dd534973734b083406a08c7525",
+ "chksum_sha256": "d1bf8b8792a4f1a2ba42339a78a77308dcbb9446d97921cc61e2a2daddac2906",
"format": 1
},
{
- "name": "tests/integration/targets/s3_bucket/main.yml",
+ "name": "tests/integration/targets/elb_classic_lb/tasks/schema_change.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "8395f20d527042f70de0e5a24a1db4d728bac43bcde06c3ac053c885774e0e6a",
+ "chksum_sha256": "ce8d10c0731df0d8dba2b49787cada5d84b3262686a3e282a4b90946ed1e0814",
"format": 1
},
{
- "name": "tests/integration/targets/s3_bucket/roles",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/elb_classic_lb/tasks/basic_internal.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "bc61ae4f3c58a152986296e41eb968be76b9a6d3ba1c6d2b167d420a2ab49f88",
"format": 1
},
{
- "name": "tests/integration/targets/s3_bucket/roles/s3_bucket",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/elb_classic_lb/tasks/main.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "d7ff2e9aa050982d3d61ad487f7b6d59da671be132d79da6a83afa74a1bff23f",
"format": 1
},
{
- "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/templates",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/elb_classic_lb/tasks/simple_listeners.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "daba3ca272e15b5a5c25b9433892dc3705a63048a1c5f298be6fd87e85303495",
"format": 1
},
{
- "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/templates/policy.json",
+ "name": "tests/integration/targets/elb_classic_lb/tasks/simple_healthcheck.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "67f56078d3189e6ed7b781f96399453df8a1862659886a5ad595420a2018f380",
+ "chksum_sha256": "eafbc65e9ea7a3966e778640a1c309d418a93a5cfb2ec765adbca06b25cdc301",
"format": 1
},
{
- "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/templates/policy-updated.json",
+ "name": "tests/integration/targets/elb_classic_lb/tasks/missing_params.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "ac915127d3f199f92c67fd3ddf98b30186fe3eeb95f362f6d1bfe0c667e96b63",
+ "chksum_sha256": "e7a36b4d4849e75bb9d813d27ead50dea429b804e625bd86b4fcaa4b1c4d0bb9",
"format": 1
},
{
- "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/tasks",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/elb_classic_lb/tasks/simple_changes.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "07e3b6e530fd110822e2c17eef6615088d337fee9696b5221ec8f6b35b0f4699",
"format": 1
},
{
- "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/complex.yml",
+ "name": "tests/integration/targets/elb_classic_lb/tasks/simple_idle_timeout.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "fd8459be71890eb3f694f433535114246b2f7d7297a1746e46093734d790ef92",
+ "chksum_sha256": "e42a9726ce505e5b8992c431e530de69055d6bed8a731e60fc6b3776935729ef",
"format": 1
},
{
- "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/tags.yml",
+ "name": "tests/integration/targets/elb_classic_lb/tasks/simple_draining_timeout.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "eb9f4e0fc85a60bb15ba2bbc5036d8c38ec0e7ac8d9ee52566644f169e569afb",
+ "chksum_sha256": "3746d258fb19751d61a7895aa25c59e7be07fc4dc1b85ee581697e083ddd8b0f",
"format": 1
},
{
- "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/public_access.yml",
+ "name": "tests/integration/targets/elb_classic_lb/tasks/simple_tags.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "4015b3e0a746f3994ac7bf2d56f8ca5a876f1ad6046841ca85f4ff7156efd30e",
+ "chksum_sha256": "08f3462710bda1a06157a465c041528c800ee5e77060774e288185489428b2f0",
"format": 1
},
{
- "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/encryption_sse.yml",
+ "name": "tests/integration/targets/elb_classic_lb/tasks/simple_securitygroups.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "a3d3210c2c7473e9271cca042f4abd5001b7f5ddf1ac9c42527a3a1a66fe8c81",
+ "chksum_sha256": "da7a7902701108067ceee62a4144f280a52d636866df4ce75477fb846a371b2c",
"format": 1
},
{
- "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/main.yml",
+ "name": "tests/integration/targets/elb_classic_lb/tasks/setup_s3.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "eb329e1232fcd539f96bda674734113096dac7d481948b0cec7cb375866ce8db",
+ "chksum_sha256": "9cfabccd72d651f5239920f5b33a54de4a7a815ec712af1a90d60ba75d1f4894",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/elb_classic_lb/tasks/cleanup_s3.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "7fc910cb3baf887ed67babc96b70a0d546b8c9db6d041566bc03748da6bcbad4",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/elb_classic_lb/meta",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/elb_classic_lb/meta/main.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "addf553f8dde7a6ec25f08591c282eee5491f9815963c51285a4c00294af2863",
"format": 1
},
{
- "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/encryption_kms.yml",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "3875b9d162c2ed0d5122485a60747cbeab4c12f63e19043721ada6e750a8f558",
+ "name": "tests/integration/targets/elb_classic_lb/vars",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/missing.yml",
+ "name": "tests/integration/targets/elb_classic_lb/vars/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "cedf599310d119d5bc70742ebbad144ed648b61b342e01dd9ef20ea744d2e4a3",
+ "chksum_sha256": "a37372c5df29a85df3d7759887f11d5caceba506dfd51e32059f86f8fa879c8b",
"format": 1
},
{
- "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/simple.yml",
+ "name": "tests/integration/targets/elb_classic_lb/templates",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/elb_classic_lb/templates/s3_policy.j2",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "ecbc66cf1dbc8f863fb52ef8d07271c1c3bd3a94c906ec43024a9b1d325eaba1",
+ "chksum_sha256": "de059f94471359d3123d3cdf2b2cdf0ed239d44cbabcf093314cb6991b266116",
"format": 1
},
{
- "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/dotted.yml",
+ "name": "tests/integration/targets/elb_classic_lb/aliases",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "381ebad3ee826f273b781bf8a279b76fa7265204db4a0e4a06a05989b694a7a0",
+ "chksum_sha256": "16a2c2f6008f6c2e62fc1a566539679ea95ffa546fe82071a6be5f0d8a0d0f33",
"format": 1
},
{
- "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/meta",
+ "name": "tests/integration/targets/elb_classic_lb/defaults",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/meta/main.yml",
+ "name": "tests/integration/targets/elb_classic_lb/defaults/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "f8e70400d05c6717b0a358ccb693d782e120d9dd534973734b083406a08c7525",
+ "chksum_sha256": "eb85aa12a3d8daa9667713f29dc3b65029810e3acd5c478e7b7598ffd18f9a64",
"format": 1
},
{
- "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/defaults",
+ "name": "tests/integration/targets/ec2_vpc_nat_gateway",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/defaults/main.yml",
+ "name": "tests/integration/targets/ec2_vpc_nat_gateway/tasks",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2_vpc_nat_gateway/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "17d1ee3af0799937fea09c67d39b2fa6db3011eed3a66b35a1efecfd37e2f5eb",
+ "chksum_sha256": "d938fae7cc9a6073d340010223d51030470ad83ffd5904877b87e5e33ac2f267",
"format": 1
},
{
- "name": "tests/integration/targets/s3_bucket/aliases",
+ "name": "tests/integration/targets/ec2_vpc_nat_gateway/aliases",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "dfd08779f7170cd572bd30bd277562ab0ba7d35bf6f94c67bf194ffb04c77ea9",
+ "chksum_sha256": "0296b292955d142eda71a0b97b5c1f0ce2575f37502cc91f62c0e6c0a6643430",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_eni",
+ "name": "tests/integration/targets/ec2_vpc_nat_gateway/defaults",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_eni/tasks",
+ "name": "tests/integration/targets/ec2_vpc_nat_gateway/defaults/main.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "93704cdd612081cd2ca9e64a6bbfc0b8d1be1926b1df0408d98af1b05cff988b",
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/inventory_aws_rds",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_eni/tasks/test_modifying_delete_on_termination.yaml",
+ "name": "tests/integration/targets/inventory_aws_rds/runme.sh",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "7f50e7e1c03369199470cbdf786672ed4fd162c4e3e3dd74126f59aed277a5ac",
+ "chksum_sha256": "20ef326de7a3c0f50b51c8598ed5edd3b58c7a1c408dc6b43fef9334d8379f91",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_eni/tasks/test_modifying_source_dest_check.yaml",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "36f45d56422d2399cec3b26a7d5c1e82c60ddc64c191b20f1cd624474d4c7ce1",
+ "name": "tests/integration/targets/inventory_aws_rds/playbooks",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_eni/tasks/test_ipaddress_assign.yaml",
+ "name": "tests/integration/targets/inventory_aws_rds/playbooks/populate_cache.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "f5c8339ca1e030209fb906c72b1bee2bfa3341332a47a4f1326aaf843796d0c2",
+ "chksum_sha256": "c408a3a38bbd609623acb08a01fc1c14638bc5984287ba81e7ff50938b8e73b7",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_eni/tasks/test_eni_basic_creation.yaml",
+ "name": "tests/integration/targets/inventory_aws_rds/playbooks/test_populating_inventory.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "fc992583af2fdb54fa4ce80014c5f3b821538eca245c23b310fe35a5c4a4764d",
+ "chksum_sha256": "ba250c63bc4a8712322cca3f0b5b3d54057c8f53f464c2b27567a8f930cb301a",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_eni/tasks/test_modifying_tags.yaml",
+ "name": "tests/integration/targets/inventory_aws_rds/playbooks/empty_inventory_config.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "9df827ae5ae90964bcb7635fdb2b97fa64247db11b80f4bbb318e87ddad9360f",
+ "chksum_sha256": "31b80c73e9e0abea01d5836da0de13fa1bf5a391313b4543ad8bdd2adfd415cf",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_eni/tasks/test_attachment.yaml",
+ "name": "tests/integration/targets/inventory_aws_rds/playbooks/create_inventory_config.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "ce197a62dec62e6fb9e0154f58ad32e8d2f68bc883a2498adbf838cd087c0be3",
+ "chksum_sha256": "1424ca34811cf10a2176b56269860dcc9e82cdfc3e7bc91db10658aceb8f11e0",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_eni/tasks/test_deletion.yaml",
+ "name": "tests/integration/targets/inventory_aws_rds/playbooks/test_inventory_cache.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "10ad55a7d48756d905cc88cd4d782a8a4433dcf5d45974cf5d0eab0874d3a884",
+ "chksum_sha256": "79c8d37631bfbc5a896140e0c9ca74f4144f51d5a161da353fab4026ac797d8c",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_eni/tasks/main.yaml",
+ "name": "tests/integration/targets/inventory_aws_rds/playbooks/test_populating_inventory_with_constructed.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "cf482252171594376f3002630fc71ddca1a9d7f3385746bf12e29ada86d23013",
+ "chksum_sha256": "caf7202e4c03ab6654cb6a4aa507af3c4ac42832ea0ac76a936502c6283b4260",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_eni/aliases",
+ "name": "tests/integration/targets/inventory_aws_rds/playbooks/test_invalid_aws_rds_inventory_config.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e9f847717a0e89090d0e86bcdcc7c8bc9345965e40db00844b1f04ab337bb0b6",
- "format": 1
- },
- {
- "name": "tests/integration/targets/ec2_eni/defaults",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "chksum_sha256": "b22eb19a90f4ac43ea966bd586df79c6ada6ef3e6a6e46df2f5b65cf82e4f00a",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_eni/defaults/main.yml",
+ "name": "tests/integration/targets/inventory_aws_rds/playbooks/test_refresh_inventory.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e63089a34e6352d80cece0c5551d5a43c560295facbb549e9277c2c3e113afa2",
- "format": 1
- },
- {
- "name": "tests/integration/targets/ec2_tag",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "chksum_sha256": "d23d9fc75960645599aacb9b4796dcdead6938b92ca9abc4188609a9335d39eb",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_tag/tasks",
+ "name": "tests/integration/targets/inventory_aws_rds/templates",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_tag/tasks/main.yml",
+ "name": "tests/integration/targets/inventory_aws_rds/templates/inventory.j2",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e6fb06892e32d700d8ff0184ce094dd44bae709b2e5728e88c43a1706beb614b",
- "format": 1
- },
- {
- "name": "tests/integration/targets/ec2_tag/meta",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "chksum_sha256": "046bbce61938b67a8f51d9e99de64b82a588659550436b858d10975ddaf716ce",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_tag/meta/main.yml",
+ "name": "tests/integration/targets/inventory_aws_rds/templates/inventory_with_cache.j2",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "addf553f8dde7a6ec25f08591c282eee5491f9815963c51285a4c00294af2863",
+ "chksum_sha256": "54ede14b2ec95c3c6606905775d3885120039da90433e409ad5002ad78c65d5b",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_tag/vars",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/inventory_aws_rds/templates/inventory_with_constructed.j2",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "c1d723e784e6b7d66b15519e612c6758a132fd8cd814fa68959929fc9f577294",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_tag/vars/main.yml",
+ "name": "tests/integration/targets/inventory_aws_rds/aliases",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "79db6a6656e23e90127a8759ccb5371abb6b58652f871c5e12c72d9387bec871",
+ "chksum_sha256": "b6b7573399ec5210a67f93fa47cb62827da6839b4ce43490bbfa70d51e731259",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_tag/aliases",
+ "name": "tests/integration/targets/inventory_aws_rds/test.aws_rds.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "5b94eec6f2beccdb47497babc7fb72cbd169679aac13d799e3750df752fd96d0",
+ "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_tag/defaults",
+ "name": "tests/integration/targets/module_utils_core",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_tag/defaults/main.yml",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "b756aced2d19afadd3589244b1937cc90f8a96f709d5ea966f6a55a96bc4d3a3",
+ "name": "tests/integration/targets/module_utils_core/roles",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/setup_sshkey",
+ "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/setup_sshkey/tasks",
+ "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client/tasks",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/setup_sshkey/tasks/main.yml",
+ "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client/tasks/endpoints.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "016d4c789eb18f7d6cd03c5cd32300a5e1ccec5bb6695aa35b60105c0a5d5d2d",
+ "chksum_sha256": "7c8d0f5147bcb991f8f393e55d775d1eb135b38e5704f53ef2944efa85fc8d8d",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_metadata_facts",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client/tasks/profiles.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "bbc4902462428729910b71ea6bd2fb11013fad58e998be8f9f2e4a86f97a8387",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_metadata_facts/runme.sh",
+ "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "bc4362a0e08261f353f20a25bdff675183addfdca62c700c6d04315efb908f47",
+ "chksum_sha256": "b63ff3b3058da02396d2322c56e9fe7dd6ed282a247bcc841647ee7dab6e2127",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_metadata_facts/templates",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client/tasks/credentials.yml",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "90995fadd544d2ac3490121a30cd7414fdb89495231bdf16535a6b6c7d491638",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_metadata_facts/templates/inventory.j2",
+ "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client/tasks/ca_bundle.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "1bf87d023bb7e807f4e8be7408e0fe4400d451d740402e7e9a64fbe0cc41d75b",
+ "chksum_sha256": "96f95ee62565f62141122c6ebf63bb25d472f88135703716f395ba64c8ed30d3",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_metadata_facts/playbooks",
+ "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client/library",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_metadata_facts/playbooks/test_metadata.yml",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "fe1ef6e4b1c97be4c5f0ce71c95e5540140067c5ca44635ce5657062617c88d5",
- "format": 1
- },
- {
- "name": "tests/integration/targets/ec2_metadata_facts/playbooks/setup.yml",
+ "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client/library/example_module.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "9e6b33f19f7793a052f40551fa192acbe22874fdcc54576eb975c2eb04f57a80",
+ "chksum_sha256": "6bcaf886524922e05fae62d6b7efefd576925c7148e948fe0b43ba41f14bdb47",
"format": 1
},
{
- "name": "tests/integration/targets/ec2_metadata_facts/playbooks/teardown.yml",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "e320cdd4309239a31c1a8733695308c6910c599faff3f6d0e62cc46fac3d178d",
+ "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client/meta",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/ec2_metadata_facts/aliases",
+ "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client/meta/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "04933e57c67f5052c8be5df91ee0656dd0af863d71c77dad580844d1f5b81c3c",
+ "chksum_sha256": "6a143b3afe5a63a2455faaeaaa91684381fd151bb0564127c55053fe24a23b63",
"format": 1
},
{
- "name": "tests/integration/targets/cloudformation",
+ "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client/files",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/cloudformation/tasks",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client/files/isrg-x1.pem",
+ "ftype": "file",
+ "chksum_type": "sha256",
+ "chksum_sha256": "22b557a27055b33606b6559f37703928d3e4ad79f110b407d04986e1843543d1",
"format": 1
},
{
- "name": "tests/integration/targets/cloudformation/tasks/main.yml",
+ "name": "tests/integration/targets/module_utils_core/roles/ansibleawsmodule.client/files/amazonroot.pem",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "8d30da361c83809a0f40174c6afcb178732ec65238b71560e4b9ec1ba796be4b",
+ "chksum_sha256": "2c43952ee9e000ff2acc4e2ed0897c0a72ad5fa72c3d934e81741cbd54f05bd1",
"format": 1
},
{
- "name": "tests/integration/targets/cloudformation/aliases",
+ "name": "tests/integration/targets/module_utils_core/runme.sh",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "925e63579970a8ba468c7dac6a31faf1c79effcac055ace429b915e87b81044b",
+ "chksum_sha256": "0d48d5adc889ec75147bf7ed1200f2cd1cde582de74e2523b9687e0204167cb5",
"format": 1
},
{
- "name": "tests/integration/targets/cloudformation/files",
+ "name": "tests/integration/targets/module_utils_core/meta",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/cloudformation/files/update_policy.json",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "bcb41e725f7fae8be4356633beb391dd1870e344d626b105a3e2f14f3b3e5e96",
- "format": 1
- },
- {
- "name": "tests/integration/targets/cloudformation/files/cf_template.json",
+ "name": "tests/integration/targets/module_utils_core/meta/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "5f612313fe9e8c40c55eba290f6af3b814a3702cf728a6c5630e24f0e8787fa8",
+ "chksum_sha256": "addf553f8dde7a6ec25f08591c282eee5491f9815963c51285a4c00294af2863",
"format": 1
},
{
- "name": "tests/integration/targets/cloudformation/defaults",
+ "name": "tests/integration/targets/module_utils_core/templates",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "tests/integration/targets/cloudformation/defaults/main.yml",
+ "name": "tests/integration/targets/module_utils_core/templates/boto_config.j2",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "9749b5e6b87158663ff5b736c1a71d8fb6f1e80feefbc98eb1c9e37af5430202",
+ "chksum_sha256": "ba7335ce0c8b8a32fc82bf7522a0f93d69190ff9895f4804985d2c08b7b3fd37",
"format": 1
},
{
- "name": "tests/.gitignore",
+ "name": "tests/integration/targets/module_utils_core/templates/session_credentials.yml.j2",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e33e9227e6fb67d4bf8c2e9b095ed2d6d324684dadf237cf749467f92d14e0f4",
- "format": 1
- },
- {
- "name": "tests/sanity",
- "ftype": "dir",
- "chksum_type": null,
- "chksum_sha256": null,
+ "chksum_sha256": "6104b125462eb5b6c5e5067e6c5b9041f0804c29755200fda62f0472a4a29f1e",
"format": 1
},
{
- "name": "tests/sanity/ignore-2.10.txt",
+ "name": "tests/integration/targets/module_utils_core/aliases",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "87432aff1afef553752613182ab37500a1d0086cfe8bd2d36a11ede01fd56d72",
+ "chksum_sha256": "7289894f07e47a82972994ae89cfaef863f54310114e1c5d7122f7fc08bc19fe",
"format": 1
},
{
- "name": "tests/sanity/ignore-2.11.txt",
+ "name": "tests/integration/targets/module_utils_core/setup.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "b64ecf691fc1b93d347d1c185f7808b4e55c3363f024f47902c70c61d799c152",
+ "chksum_sha256": "d416d3ebcd9ea58c450a07ec98a78f42423bde3fdf2396971c8af836169e7b17",
"format": 1
},
{
- "name": "tests/sanity/ignore-2.12.txt",
+ "name": "tests/integration/targets/module_utils_core/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "3df5cd696ea1a3bbb09e7c2e0bc2b9e3eb590cef29b0ccfb85866cf99054e128",
+ "chksum_sha256": "40fd2ac4ad62f120b0ab06ebc1b597f8df9a56b02772cff353ac457aa7cc6023",
"format": 1
},
{
- "name": "tests/sanity/ignore-2.9.txt",
+ "name": "tests/integration/targets/module_utils_core/inventory",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "fc007e1d7ff0f3f0e79d762094eb13f66944e770a7ef05db1fe75c5fe2b291e2",
+ "chksum_sha256": "4514e38376fcaaeb52cb4841f3aeeb15370a01099c19e4f2ed6a5f287a49b89a",
"format": 1
},
{
- "name": ".github",
+ "name": "tests/integration/targets/ec2_metadata_facts",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": ".github/BOTMETA.yml",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "dbef302faa0f7585d254d59d95d0505439564b68a7d5c2484089a25f8c1c4034",
- "format": 1
- },
- {
- "name": ".github/settings.yml",
+ "name": "tests/integration/targets/ec2_metadata_facts/runme.sh",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "cb31353134cff7d91b546a03cc6fec7caaf0dba62079ea66776e2994461e6c7b",
+ "chksum_sha256": "bc4362a0e08261f353f20a25bdff675183addfdca62c700c6d04315efb908f47",
"format": 1
},
{
- "name": "meta",
+ "name": "tests/integration/targets/ec2_metadata_facts/playbooks",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "meta/runtime.yml",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "e948a061b4225e12a11e60a79ca13c90e58bedecb7544fac029d35b3d48deb08",
- "format": 1
- },
- {
- "name": "requirements.txt",
+ "name": "tests/integration/targets/ec2_metadata_facts/playbooks/setup.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "b53f558c89560c3ce433e583abe819def7d3857bd60be1e2481e7d6d68bd0017",
+ "chksum_sha256": "7f3f47d4b7945aa0cfc6913bf39f548bf0aa1d41de4a2e90196c3da05a78ccb5",
"format": 1
},
{
- "name": "COPYING",
+ "name": "tests/integration/targets/ec2_metadata_facts/playbooks/teardown.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "0ae0485a5bd37a63e63603596417e4eb0e653334fa6c7f932ca3a0e85d4af227",
+ "chksum_sha256": "b57ee9341470080550345a3ea82cffb34242e4a30c6f5d486d11117cb5df072a",
"format": 1
},
{
- "name": "README.md",
+ "name": "tests/integration/targets/ec2_metadata_facts/playbooks/test_metadata.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "b6ced4d4721056e0ca90bf4329ef03643e49a215ddfdb76b0fac7d20164a2667",
+ "chksum_sha256": "e2102c7d6383bc534cf250be3804b17968603f2067d0655cf0283167b3b737a7",
"format": 1
},
{
- "name": "changelogs",
+ "name": "tests/integration/targets/ec2_metadata_facts/meta",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "changelogs/config.yaml",
+ "name": "tests/integration/targets/ec2_metadata_facts/meta/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "a5108e9a705d8037b5e214c95ff2bba76e09c8ff4c391c144f1f6f7a5edb051f",
+ "chksum_sha256": "ca274181f03157b24c66138a166f4828a87f59f8f72363c3f4c3faaaaf622d00",
"format": 1
},
{
- "name": "changelogs/fragments",
+ "name": "tests/integration/targets/ec2_metadata_facts/templates",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "changelogs/fragments/.keep",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
- "format": 1
- },
- {
- "name": "changelogs/changelog.yaml",
+ "name": "tests/integration/targets/ec2_metadata_facts/templates/inventory.j2",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "b9af0561ec4b9ed5fca740245a65cd6d1cc4a7c697718b9e8ef3a28dc69c3c51",
+ "chksum_sha256": "7fefc789687e118e0dcfb4d97761ad263e60b95c89d7df4a8ae70fe15e3a80f6",
"format": 1
},
{
- "name": ".gitignore",
+ "name": "tests/integration/targets/ec2_metadata_facts/aliases",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "5a00777ca107231dc822535458402764507be2cf2efa433ea184bb2163e07027",
+ "chksum_sha256": "97f428f0a0b9af23b032e8880698b3289015c4422cbda20c8984561db42f7482",
"format": 1
},
{
- "name": "plugins",
+ "name": "tests/integration/targets/ec2_vpc_route_table",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/inventory",
+ "name": "tests/integration/targets/ec2_vpc_route_table/tasks",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/inventory/__init__.py",
+ "name": "tests/integration/targets/ec2_vpc_route_table/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "chksum_sha256": "330ed73839c1ca07a8472dd546e00620d0524b317a1d165939dcccc34869a970",
"format": 1
},
{
- "name": "plugins/inventory/aws_ec2.py",
+ "name": "tests/integration/targets/ec2_vpc_route_table/meta",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2_vpc_route_table/meta/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "7a145140debc1cc549aa22039ce71595e021aec3e7de578537551b02f7c5da1f",
+ "chksum_sha256": "586eef32b32533ff0e2273e51cca333ebcd285c364136506167b8b1ec89f26d6",
"format": 1
},
{
- "name": "plugins/inventory/aws_rds.py",
+ "name": "tests/integration/targets/ec2_vpc_route_table/aliases",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "16c30b17276b5d65eb1df809875217096dc4be578f6dbaa84e661bcba51dbded",
+ "chksum_sha256": "e58dacf316307d4b9f9776dd19ef8804a21852b9cbbd7582db2bc3a3fc221a58",
"format": 1
},
{
- "name": "plugins/action",
+ "name": "tests/integration/targets/ec2_vpc_route_table/defaults",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/action/__init__.py",
+ "name": "tests/integration/targets/ec2_vpc_route_table/defaults/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "chksum_sha256": "c95552ad4ff55abe7c4f3e6efb77422f5fd33dbcec80b4ba490cef31dd52f198",
"format": 1
},
{
- "name": "plugins/action/aws_s3.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "348e233ca01687aa88c78606f8721f8be738f26163860cfa14dfa80eb10673a7",
+ "name": "tests/integration/targets/setup_sshkey",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/callback",
+ "name": "tests/integration/targets/setup_sshkey/tasks",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/callback/__init__.py",
+ "name": "tests/integration/targets/setup_sshkey/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "chksum_sha256": "972169dd7d4774a9f05a10e7b7a41046e4ca1c1461fb30dd828c98fec938684d",
"format": 1
},
{
- "name": "plugins/callback/aws_resource_actions.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "fb17984f9f244aba88f721c2df47ac6820e83992cde662e04bf4f2eab1a60629",
+ "name": "tests/integration/targets/setup_sshkey/files",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/__init__.py",
+ "name": "tests/integration/targets/setup_sshkey/files/ec2-fingerprint.py",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "chksum_sha256": "fc12adf84deacbe3a14c4da31f5b7bbfcf57e4dc2cd7e4693e5a991a6efbaf3b",
"format": 1
},
{
- "name": "plugins/doc_fragments",
+ "name": "tests/integration/targets/lookup_aws_service_ip_ranges",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/doc_fragments/__init__.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
- "format": 1
- },
- {
- "name": "plugins/doc_fragments/ec2.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "683daff2b7de94f68d574e05ab3d5405a9d9fc672910f412b104cb326f648c11",
+ "name": "tests/integration/targets/lookup_aws_service_ip_ranges/tasks",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/doc_fragments/aws_region.py",
+ "name": "tests/integration/targets/lookup_aws_service_ip_ranges/tasks/main.yaml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "074b3f366d8214f956b0aff167e9940e08ab7fc2f697815eff50021069a8b708",
+ "chksum_sha256": "90136ad14bbe3df78e8033543d00baecbe85d7768592a0dd2e5abc7b19402197",
"format": 1
},
{
- "name": "plugins/doc_fragments/aws_credentials.py",
+ "name": "tests/integration/targets/lookup_aws_service_ip_ranges/aliases",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "5bf58fccfb29994200623e8e2122544477c3e649b1527fd6fb683e3e90b3de15",
+ "chksum_sha256": "7289894f07e47a82972994ae89cfaef863f54310114e1c5d7122f7fc08bc19fe",
"format": 1
},
{
- "name": "plugins/doc_fragments/aws.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "8f8d798285451b4e66673c224afb982e7a32b93aa0276c936efe688b0a81e2d6",
+ "name": "tests/integration/targets/setup_remote_tmp_dir",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/lookup",
+ "name": "tests/integration/targets/setup_remote_tmp_dir/tasks",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/lookup/aws_ssm.py",
+ "name": "tests/integration/targets/setup_remote_tmp_dir/tasks/default-cleanup.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "71e33727b0fc81427fc3bb4fe2f8017b325df215985e8c8336e0b705396711cb",
+ "chksum_sha256": "e273324ab90d72180a971d99b9ab69f08689c8be2e6adb991154fc294cf1056e",
"format": 1
},
{
- "name": "plugins/lookup/aws_service_ip_ranges.py",
+ "name": "tests/integration/targets/setup_remote_tmp_dir/tasks/windows.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "fe9487740f1766953f258f58b84c8545d66a2b28a6bce60dc00c5873b2d41335",
+ "chksum_sha256": "e29ee6a8db94d6de88c8458762f594f05d906f454f7c9977fd618d52b09e52f0",
"format": 1
},
{
- "name": "plugins/lookup/__init__.py",
+ "name": "tests/integration/targets/setup_remote_tmp_dir/tasks/windows-cleanup.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "chksum_sha256": "3fd85bd6c3cf51c061eb221197d5653e5da0e101543b3c037f5066d6c73b1501",
"format": 1
},
{
- "name": "plugins/lookup/aws_secret.py",
+ "name": "tests/integration/targets/setup_remote_tmp_dir/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "0b7aab7e24bf4f2ed2afe77e686107bf4dbb39c2b7f73311f4882c9bc2d97b12",
+ "chksum_sha256": "766ab141899717320ba54e2bb1a6ba8cbc3cc7642d0023670154b49981ed1a91",
"format": 1
},
{
- "name": "plugins/lookup/aws_account_attribute.py",
+ "name": "tests/integration/targets/setup_remote_tmp_dir/tasks/default.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "32bfed408489b9f460299f822a2ba729a00e5373d6ceedc4ac806e00aa03f3d1",
+ "chksum_sha256": "2441ac1753320d2cd3bea299c160540e6ae31739ed235923ca478284d1fcfe09",
"format": 1
},
{
- "name": "plugins/modules",
+ "name": "tests/integration/targets/setup_remote_tmp_dir/handlers",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/modules/aws_caller_info.py",
+ "name": "tests/integration/targets/setup_remote_tmp_dir/handlers/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "f45eb7a2af067331d6b8c8cec02c893c43a6bfba53ecb9eacb07478574dd0100",
+ "chksum_sha256": "050157a29c48915cf220b3cdcf5a032e53e359bdc4a210cd457c4836e8e32a4d",
"format": 1
},
{
- "name": "plugins/modules/ec2_vpc_net_info.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "56dfd55fda81033e5b2f3e2d6dab66fdb7a6f8ad375f862c1722d7f707560110",
+ "name": "tests/integration/targets/s3_bucket",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/modules/ec2_vpc_net_facts.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "56dfd55fda81033e5b2f3e2d6dab66fdb7a6f8ad375f862c1722d7f707560110",
+ "name": "tests/integration/targets/s3_bucket/roles",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/modules/ec2_snapshot_facts.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "ce3017b599fe959d300bbe4e306bb62c10ff007b5b814facfd5dbae7fcc62289",
+ "name": "tests/integration/targets/s3_bucket/roles/s3_bucket",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/modules/ec2_group.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "cb98bfeffcd635bf69dd1a05d823f37f567f8a41d50dbf9e584634fbaba78f9d",
+ "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/tasks",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/modules/ec2_vpc_subnet.py",
+ "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/tags.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e61c79af32d17b83f640fc25f9a409aaddf40c197c5f9dfd43aabb211784aa96",
+ "chksum_sha256": "bc5581c40a96552645a5d3f77e55a4bb85519fa0b6cc03835bdad7df55425e82",
"format": 1
},
{
- "name": "plugins/modules/aws_caller_facts.py",
+ "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/missing.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "f45eb7a2af067331d6b8c8cec02c893c43a6bfba53ecb9eacb07478574dd0100",
+ "chksum_sha256": "f4cb3a405fb533cb08dc3e92afa5e21aa5178d14fc16b76397002075bf399a4b",
"format": 1
},
{
- "name": "plugins/modules/ec2_eni.py",
+ "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/complex.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "815ac07f781a9cd6168e4d7453db3466fb37bf66bd3d43b9f54f0d4fdaf8d0a8",
+ "chksum_sha256": "887d2fd20c81de9876e50fd1654a663ac3ab2df987f1d1d4d0d94bd23db8c924",
"format": 1
},
{
- "name": "plugins/modules/ec2_metadata_facts.py",
+ "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/simple.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "3db0c55c40fc171d8110e1ce8d690d308d64d97725281071777321c678adbad6",
+ "chksum_sha256": "9a686815fd35ecbea7a1310198d9ff2173f73e6451737d3dcf5888d3a84ba140",
"format": 1
},
{
- "name": "plugins/modules/ec2_snapshot_info.py",
+ "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/dotted.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "ce3017b599fe959d300bbe4e306bb62c10ff007b5b814facfd5dbae7fcc62289",
+ "chksum_sha256": "5fbd6cf43ff040ece99a8bda5b6a19f0db00d6a6255355d9350000554b513a15",
"format": 1
},
{
- "name": "plugins/modules/ec2_tag.py",
+ "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/ownership_controls.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "3537fbfed0e9ea8b3b9da679437324064c55ccf45210675fc1ee5c4d83b60387",
+ "chksum_sha256": "b479991da57d53b582758f9a815619ce0937a00fd9ce027b7dc3608e34c59a2a",
"format": 1
},
{
- "name": "plugins/modules/__init__.py",
+ "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/encryption_kms.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "chksum_sha256": "c81c3971b2ad6205e1058ab38c9ef0b7dfb7a3f1c6a62e8f2f5f535fc2881f29",
"format": 1
},
{
- "name": "plugins/modules/ec2_vpc_dhcp_option.py",
+ "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "7355c0d092bf01518a6bf9e3d39d27a7f78fbb7a8de9e28744642ea14c0ff154",
+ "chksum_sha256": "eb329e1232fcd539f96bda674734113096dac7d481948b0cec7cb375866ce8db",
"format": 1
},
{
- "name": "plugins/modules/ec2_vpc_dhcp_option_info.py",
+ "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/public_access.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "f286b7314d1b6017461703c013d1afa76ead3030e29b4c66042edda2a2cfa2aa",
+ "chksum_sha256": "6579b6d24a454acc95d6edace268c2140286d5b8f8403428d417c551aa77461b",
"format": 1
},
{
- "name": "plugins/modules/s3_bucket.py",
+ "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/encryption_sse.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "3944789488234d17ef76ad4dae823950860e232396df6e9ab56472bc7ead3119",
+ "chksum_sha256": "f084f0fc7fa65b45b656f49d9402ac28720cce56a5864ac5a5ab14eaa8af7799",
"format": 1
},
{
- "name": "plugins/modules/ec2.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "abbe31a6c886d63793dd82a0ec71032e0b0037d356c67f513a0376b89722a2d0",
+ "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/meta",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/modules/ec2_vpc_dhcp_option_facts.py",
+ "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/meta/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "f286b7314d1b6017461703c013d1afa76ead3030e29b4c66042edda2a2cfa2aa",
+ "chksum_sha256": "288d3d5155f39590c6556ae4e649721eb899fa0997716d01ec7387a45026ea82",
"format": 1
},
{
- "name": "plugins/modules/ec2_snapshot.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "58e75d360ccdb312493bc441a85f4b240a14b1ac4e46c2d408bca3a161849e49",
+ "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/templates",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/modules/ec2_ami_facts.py",
+ "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/templates/policy-updated.json",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "8bee0c13d58583c1e6078010dcaeaa559637130eca8bcb84cb2fc795040f2a17",
+ "chksum_sha256": "90814034e9ea0322b97a562c269a1fcb7b6f9e7534fb50bcbfd10d839b0dcf81",
"format": 1
},
{
- "name": "plugins/modules/ec2_tag_info.py",
+ "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/templates/policy.json",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "67497c53e61d2212be9619396b21a98a89d4dd22f90869ed93ee2dd1c99bd713",
+ "chksum_sha256": "7b9d1d9f3c5f7bc6b8816ac3ae16f19c9784dbb01d2a080efcd5936ef25518ee",
"format": 1
},
{
- "name": "plugins/modules/aws_az_info.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "a477e43b4e0c7fde844cb605fe9643a293233d8678fdb0bf0e493fbbae057a43",
+ "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/defaults",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/modules/ec2_ami_info.py",
+ "name": "tests/integration/targets/s3_bucket/roles/s3_bucket/defaults/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "8bee0c13d58583c1e6078010dcaeaa559637130eca8bcb84cb2fc795040f2a17",
+ "chksum_sha256": "17d1ee3af0799937fea09c67d39b2fa6db3011eed3a66b35a1efecfd37e2f5eb",
"format": 1
},
{
- "name": "plugins/modules/aws_s3.py",
+ "name": "tests/integration/targets/s3_bucket/runme.sh",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "c165045391b608837bc4bba01cd5993f80a6ae97704271e395b1c940d7df9a25",
+ "chksum_sha256": "d2e53b13c18d9f57b9ac05cf209ab9ea0db765e0b8c4e0698e26747cef903d23",
"format": 1
},
{
- "name": "plugins/modules/cloudformation_facts.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "ab27d21bd4b89ff1b019f4438ac77b0a2fa469c470db9083ce5f55d5bc9ae7b6",
+ "name": "tests/integration/targets/s3_bucket/meta",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/modules/ec2_group_facts.py",
+ "name": "tests/integration/targets/s3_bucket/meta/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "9694a8587e2cbba1c83b7ced7df62062b4a6833fb48035ce8e2eefd0461f27ff",
+ "chksum_sha256": "bcac77de632a41972bbfe603cc12491709b771f00cdb31efcb554133a45c502b",
"format": 1
},
{
- "name": "plugins/modules/ec2_eni_facts.py",
+ "name": "tests/integration/targets/s3_bucket/aliases",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "c389539d5f69574f7dd702978f171b4dcab9cc00bb55f109a7921cce79797ea2",
+ "chksum_sha256": "7289894f07e47a82972994ae89cfaef863f54310114e1c5d7122f7fc08bc19fe",
"format": 1
},
{
- "name": "plugins/modules/aws_az_facts.py",
+ "name": "tests/integration/targets/s3_bucket/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "a477e43b4e0c7fde844cb605fe9643a293233d8678fdb0bf0e493fbbae057a43",
+ "chksum_sha256": "8395f20d527042f70de0e5a24a1db4d728bac43bcde06c3ac053c885774e0e6a",
"format": 1
},
{
- "name": "plugins/modules/ec2_vpc_subnet_info.py",
+ "name": "tests/integration/targets/s3_bucket/inventory",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "3440fa4cc5ae7453ce15205fd7493abb85d7dff0f2581729134a47476002e9ef",
+ "chksum_sha256": "e4371fbd7119bb5201b62d0ed1dd703b900aa231799b41108948e9b62ac0913a",
"format": 1
},
{
- "name": "plugins/modules/ec2_vpc_subnet_facts.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "3440fa4cc5ae7453ce15205fd7493abb85d7dff0f2581729134a47476002e9ef",
+ "name": "tests/integration/targets/cloudformation",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/modules/cloudformation_info.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "ab27d21bd4b89ff1b019f4438ac77b0a2fa469c470db9083ce5f55d5bc9ae7b6",
+ "name": "tests/integration/targets/cloudformation/tasks",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/modules/ec2_vol_info.py",
+ "name": "tests/integration/targets/cloudformation/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "0249c52f742eae1eea1b7270879adec058e76f4f712704f153c5aeb8d5a409b9",
+ "chksum_sha256": "c62dd3974f3076aba5cd6ec8977a64af46c85b6f4ce833a60cf0b4cb8fc8a40f",
"format": 1
},
{
- "name": "plugins/modules/ec2_key.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "eed5c49c3f7eecd75ec80694361c4adac50aab6c59521807bf8dc0f4fe2a8c27",
+ "name": "tests/integration/targets/cloudformation/meta",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/modules/ec2_vpc_net.py",
+ "name": "tests/integration/targets/cloudformation/meta/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "a64e34dfeea8cb49e4ebcbcf6fcb3c02e4f8920908394f5b7de2584b2dd8015d",
+ "chksum_sha256": "e1d851188d9e6d7d833aabae61c46f0f9421f9138c6b348905598866242259c8",
"format": 1
},
{
- "name": "plugins/modules/ec2_eni_info.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "c389539d5f69574f7dd702978f171b4dcab9cc00bb55f109a7921cce79797ea2",
+ "name": "tests/integration/targets/cloudformation/files",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/modules/ec2_group_info.py",
+ "name": "tests/integration/targets/cloudformation/files/update_policy.json",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "9694a8587e2cbba1c83b7ced7df62062b4a6833fb48035ce8e2eefd0461f27ff",
+ "chksum_sha256": "bcb41e725f7fae8be4356633beb391dd1870e344d626b105a3e2f14f3b3e5e96",
"format": 1
},
{
- "name": "plugins/modules/cloudformation.py",
+ "name": "tests/integration/targets/cloudformation/files/cf_template.json",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "dd7412b0c73e7658ed0aa0f887b1285a0760d2fb727ff36f78fbbcbf5ad73ac1",
+ "chksum_sha256": "5f612313fe9e8c40c55eba290f6af3b814a3702cf728a6c5630e24f0e8787fa8",
"format": 1
},
{
- "name": "plugins/modules/ec2_elb_lb.py",
+ "name": "tests/integration/targets/cloudformation/aliases",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "bf896ea51ccd4108c4763be0cb019e8de7909be8d2d39ed0969a6ca9c3691bdd",
+ "chksum_sha256": "28ee2ca3290c5220d7576cad86a78a42efb7a97df52a20521a36d520192c6e9c",
"format": 1
},
{
- "name": "plugins/modules/ec2_vol_facts.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "0249c52f742eae1eea1b7270879adec058e76f4f712704f153c5aeb8d5a409b9",
+ "name": "tests/integration/targets/cloudformation/defaults",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/modules/ec2_vol.py",
+ "name": "tests/integration/targets/cloudformation/defaults/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "ea9dde5a884bdbdbdd3a3454d807e4b354ef6e8f927329aec53ce67b03945936",
+ "chksum_sha256": "343a3227698a485b984745e791f5e44ff8797a3b60fcd54d0a4641bb0369b012",
"format": 1
- },
- {
- "name": "plugins/modules/ec2_ami.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "d8530c92fcb3e19e7c479823686e0922886ad4c41112baef5ec27fce5653180f",
+ },
+ {
+ "name": "tests/integration/targets/ec2_snapshot",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/module_utils",
+ "name": "tests/integration/targets/ec2_snapshot/tasks",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/module_utils/cloud.py",
+ "name": "tests/integration/targets/ec2_snapshot/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "ad87b867f4993c317bfdd6d6f742127b93907055e4db451dd9b0730ac99aa56b",
+ "chksum_sha256": "46e7f01a2ddcec7f2045e4601028529d161106cd842e2c96db9b75b18c31faae",
"format": 1
},
{
- "name": "plugins/module_utils/compat",
+ "name": "tests/integration/targets/ec2_snapshot/meta",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/module_utils/compat/_ipaddress.py",
+ "name": "tests/integration/targets/ec2_snapshot/meta/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "62dc7ea84ea6a4a14512904aae8982c0792ec7fa7dd5e6793c4e609bcb95a41e",
+ "chksum_sha256": "e1d851188d9e6d7d833aabae61c46f0f9421f9138c6b348905598866242259c8",
"format": 1
},
{
- "name": "plugins/module_utils/elb_utils.py",
+ "name": "tests/integration/targets/ec2_snapshot/aliases",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "fdb692e5d99229f7bbbf7b7a8db6069c83a149d441124f013fad973b51fa036f",
+ "chksum_sha256": "0d795dbf72b8c1338bbdc7e386715c5f9f53eda9a5d43f61915e58c1d3847237",
"format": 1
},
{
- "name": "plugins/module_utils/__init__.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "name": "tests/integration/targets/ec2_snapshot/defaults",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/module_utils/ec2.py",
+ "name": "tests/integration/targets/ec2_snapshot/defaults/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "5fa393f0e66e2905f765317a35af6dfee006c6cf0d0b80476ed7737cc85e9acf",
+ "chksum_sha256": "873903f9abb784a3e395685d19806c065347dad6f1ace7bc67638e3e842692e9",
"format": 1
},
{
- "name": "plugins/module_utils/s3.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "17a249de004b14b26a443d6456c4416c4f23e281298772d8bd38c70508a754a4",
+ "name": "tests/integration/targets/ec2_vpc_endpoint",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/module_utils/core.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "902eea1c213d664c0a339dd2803988ef7bf018c680f8cdaf64f1a58b006375b1",
+ "name": "tests/integration/targets/ec2_vpc_endpoint/tasks",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/module_utils/acm.py",
+ "name": "tests/integration/targets/ec2_vpc_endpoint/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "73614c7c2d701bafb3ae46f4c819cf4f3888efa92b39cee71e619beba24f446d",
+ "chksum_sha256": "bc782b40123262d4c71ff77085cb4697fb777e4f2e799327d7941c6f14306dcd",
"format": 1
},
{
- "name": "plugins/module_utils/iam.py",
+ "name": "tests/integration/targets/ec2_vpc_endpoint/aliases",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "0d0f188188df4a059e3f855c2cab85a6eb5d6e908c8e8c18410b5853b48d1f86",
+ "chksum_sha256": "3d93249274841baf16f40cd81a2d5d45998657b730dc1d403c58b63c70db320c",
"format": 1
},
{
- "name": "plugins/module_utils/rds.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "6664907704c27f96f29cfc5b92407a8491be46f56eeda8459f774ba6883b8b70",
+ "name": "tests/integration/targets/ec2_vpc_endpoint/defaults",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/module_utils/elbv2.py",
+ "name": "tests/integration/targets/ec2_vpc_endpoint/defaults/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "35e920ec198c3f398ec331a6c112246404bfdcd697c2040d9ba70e8b944582d3",
+ "chksum_sha256": "55bde63e4f9fd46da09e93ba507f4f32495ea895bee4d441bc50500a81071c12",
"format": 1
},
{
- "name": "plugins/module_utils/cloudfront_facts.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "99f80f9bf04ccd268a4819e93121579e43eeea0c08240a7a5b8ab0f91a9bda26",
+ "name": "tests/integration/targets/aws_az_info",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/module_utils/waiters.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "e06146d01752c5a5c0ca89be27df1f2ff2c0230ba3bd5b16d50e826eaedf6bfc",
+ "name": "tests/integration/targets/aws_az_info/tasks",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/module_utils/direct_connect.py",
+ "name": "tests/integration/targets/aws_az_info/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "57e6f1bbf32388e3864419baa48bc57d509f56dccbb8bbec0787bcdc4c54dcb6",
+ "chksum_sha256": "4553d6453cd93e7745083c40410127744ba59a7934c07e39913ef6b9c7a5ae2a",
"format": 1
},
{
- "name": "plugins/module_utils/batch.py",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "1ee897b11875f13f8dd12d245d0c4d680a95830886a489990137d0af2fb5d0db",
+ "name": "tests/integration/targets/aws_az_info/meta",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "plugins/module_utils/waf.py",
+ "name": "tests/integration/targets/aws_az_info/meta/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "643bc1da71db0c08de7e13b645330355961c20f72b021e8e9c1e682c5530674a",
+ "chksum_sha256": "1e8d632f9db7209967c5b2f6d734bede09841acc7b898dafc19f31c72cee9929",
"format": 1
},
{
- "name": "plugins/module_utils/urls.py",
+ "name": "tests/integration/targets/aws_az_info/aliases",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "089b532522cfff202a7da5cedb5d6e2e46932bbd273094b6c69504b4b4e21262",
+ "chksum_sha256": "7289894f07e47a82972994ae89cfaef863f54310114e1c5d7122f7fc08bc19fe",
"format": 1
},
{
- "name": "CONTRIBUTING.md",
+ "name": "tests/integration/targets/aws_az_info/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "1f198b56472d1ec885aa9ddffe98b92c19966ed7af81037aef360c08b0b0eb95",
+ "chksum_sha256": "47d7f0170663266b9c80b357a113128c721f64f7782736c399471404ef6170be",
"format": 1
},
{
- "name": "shippable.yml",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "3152e7f102722d8f2c9b2856be30a4b574b0ca2e7debb4ce1c908d459323452c",
+ "name": "tests/integration/targets/setup_ec2",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "docs",
+ "name": "tests/integration/targets/setup_ec2/tasks",
"ftype": "dir",
"chksum_type": null,
"chksum_sha256": null,
"format": 1
},
{
- "name": "docs/amazon.aws.aws_ssm_lookup.rst",
+ "name": "tests/integration/targets/setup_ec2/tasks/common.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "e9b2055913079f8468f2282f8e1b3755bbbcb57ca43717e040e10f209d86de79",
+ "chksum_sha256": "386a6e24ee18825f0103bdbe6690b11da5beb2b2bdc53e842f50ce9b3c3fae69",
"format": 1
},
{
- "name": "docs/amazon.aws.ec2_vpc_net_module.rst",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "6bc21993ee32a26225c4d22e7a6143ae85472254151e881d8ca60852254b5a18",
+ "name": "tests/integration/targets/setup_ec2/vars",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "docs/amazon.aws.ec2_snapshot_module.rst",
+ "name": "tests/integration/targets/setup_ec2/vars/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "8246ec5ee79ac35d9806af060bd9464da262e4dab5fc8054b7a0dff65dc33f60",
+ "chksum_sha256": "03b695815f4b4833868e52a22f310bcac3be7e6e40e10ed1dcb2a7c9b13556e4",
"format": 1
},
{
- "name": "docs/amazon.aws.aws_ec2_inventory.rst",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "db5eb2459ff2ccd75ba0beac969af1fa55ecad183615ca8007a3e2da89ab2fe5",
+ "name": "tests/integration/targets/setup_ec2/defaults",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "docs/amazon.aws.ec2_metadata_facts_module.rst",
+ "name": "tests/integration/targets/setup_ec2/defaults/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "c22741be5961b62e3c79176a14b987cab97bed2e3c7d3b7303c133d547bd0c4e",
+ "chksum_sha256": "c11c67f4b66a693d3183caaee0abd97d2f02694e5998c0040b5f381dc68eeb3c",
"format": 1
},
{
- "name": "docs/amazon.aws.ec2_ami_info_module.rst",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "c8df5c59c4f0fbcd17aaeee222925d1083a649fc08dc6b58984929e70094a7a2",
+ "name": "tests/integration/targets/ec2_spot_instance",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "docs/amazon.aws.aws_az_info_module.rst",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "2e1cae722c27d6a706e2a599c5e215a0286bdcbd7c1301e2152c32617db692e0",
+ "name": "tests/integration/targets/ec2_spot_instance/tasks",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "docs/amazon.aws.aws_account_attribute_lookup.rst",
+ "name": "tests/integration/targets/ec2_spot_instance/tasks/main.yaml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "53d15bdf65a13e933f96641b058a530ccc963e295e1305c0ad10592b14d15d9a",
+ "chksum_sha256": "1fc35832d504f7e01264c2a0fbc89ffb3aee826d48dcc840750e4bb50cc58f4d",
"format": 1
},
{
- "name": "docs/amazon.aws.ec2_snapshot_info_module.rst",
+ "name": "tests/integration/targets/ec2_spot_instance/aliases",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "0ebab7640ffe4d3ed05e74ddc7edb632d48273cdf136aedfd10ca936b8742b2c",
+ "chksum_sha256": "50cbafbb10bd16de32679f4ccf37c9ba04750c01efaa766e3bb711beae548fd7",
"format": 1
},
{
- "name": "docs/amazon.aws.ec2_module.rst",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "ea67b7477c2261df4a0828a1b1760249888beb87716849a39feb541c492774f3",
+ "name": "tests/integration/targets/ec2_spot_instance/defaults",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "docs/amazon.aws.ec2_vpc_dhcp_option_info_module.rst",
+ "name": "tests/integration/targets/ec2_spot_instance/defaults/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "32a5d692477c88676063d89391a0fb2f084b84d7cadac04f554aeb2bc32919a1",
+ "chksum_sha256": "e63089a34e6352d80cece0c5551d5a43c560295facbb549e9277c2c3e113afa2",
"format": 1
},
{
- "name": "docs/amazon.aws.cloudformation_info_module.rst",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "f8fc6538aae32219a106f1fb5db0998452274aab9c9d12512326844492fc783f",
+ "name": "tests/integration/targets/prepare_tests",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "docs/amazon.aws.ec2_vpc_subnet_module.rst",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "78e7e8a938ef2b4ecf5f365d807aa7cbfe15923d4fb3db216a246bde9c274065",
+ "name": "tests/integration/targets/prepare_tests/tasks",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "docs/amazon.aws.ec2_eni_info_module.rst",
+ "name": "tests/integration/targets/prepare_tests/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "accc2fe9512253949459941073c8d44a3949ae6f8c1a742c35b113f11b49fa1c",
+ "chksum_sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"format": 1
},
{
- "name": "docs/amazon.aws.ec2_vpc_net_info_module.rst",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "c3652ba58c120b4219e3397ab213223d1c4f0d1928e5f9b72f1f22bdda49bb96",
+ "name": "tests/integration/targets/ec2_vpc_net",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "docs/amazon.aws.s3_bucket_module.rst",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "467899c6faf67340fa25b4fd2bbc260056b24b9ff9325e17df902134cb9851a6",
+ "name": "tests/integration/targets/ec2_vpc_net/tasks",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "docs/amazon.aws.ec2_vol_module.rst",
+ "name": "tests/integration/targets/ec2_vpc_net/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "1a20f1fe6fbd7d0c4b19946cb76e7b7c2a9d2f8e72c32cd8c4da15dd5028de36",
+ "chksum_sha256": "4771e4945960efe4ceca39ecbe73e6aa144e28d12d56276398673d0079aa5446",
"format": 1
},
{
- "name": "docs/amazon.aws.ec2_tag_module.rst",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "c05f33f615a5c0b6f4da3b719f90754a23c34eb183c168445d992d13969b11eb",
+ "name": "tests/integration/targets/ec2_vpc_net/meta",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "docs/amazon.aws.ec2_key_module.rst",
+ "name": "tests/integration/targets/ec2_vpc_net/meta/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "83d55df8ebc60be6ffffc0ae5e8457832b205952af1a934fa0f1d55680aa9aad",
+ "chksum_sha256": "addf553f8dde7a6ec25f08591c282eee5491f9815963c51285a4c00294af2863",
"format": 1
},
{
- "name": "docs/amazon.aws.ec2_vpc_dhcp_option_module.rst",
+ "name": "tests/integration/targets/ec2_vpc_net/aliases",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "f420de62deb21fee05c24a47302a771730fbdfb5c25973b4fd00228cbee43641",
+ "chksum_sha256": "a0207940db8ca0d920265404a52b42af659833e58f0ec731f774cd1ddc23f45b",
"format": 1
},
{
- "name": "docs/amazon.aws.aws_secret_lookup.rst",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "1e2e0bf7e3a665db01864482300b7fc85651ed82dc26263a3fc15fb0f756b422",
+ "name": "tests/integration/targets/ec2_vpc_net/defaults",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "docs/amazon.aws.ec2_elb_lb_module.rst",
+ "name": "tests/integration/targets/ec2_vpc_net/defaults/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "9861a326cd3f884701fe0cd6159b8ceebafde391b9146e605899ad703910f4c5",
+ "chksum_sha256": "b2f7ee850fd4a897e2bb408586fde1de3fcf1e73159f7d2fdb4451621c413fe0",
"format": 1
},
{
- "name": "docs/amazon.aws.ec2_group_module.rst",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "5945785f0976968d8c1f320c7b21bba765a8308fb592634583b6f4915cfb4a91",
+ "name": "tests/integration/targets/setup_botocore_pip",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "docs/amazon.aws.ec2_ami_module.rst",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "1f4ff011411053b10c2ec71455aa800e9141d5580b7df3b214838d4f1da8be66",
+ "name": "tests/integration/targets/setup_botocore_pip/tasks",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "docs/amazon.aws.ec2_eni_module.rst",
+ "name": "tests/integration/targets/setup_botocore_pip/tasks/cleanup.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "d7dffcb948d9d11cf4487053148b43b720aefa1230af5b62ea5c56720a637746",
+ "chksum_sha256": "f43b9a2bb665a9791c75ed1168e318b4b008bb952a5332ec347fc292f8c23700",
"format": 1
},
{
- "name": "docs/amazon.aws.aws_rds_inventory.rst",
+ "name": "tests/integration/targets/setup_botocore_pip/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "9b5df140612c2290b21a544b15facbd54d3fe40a121e1799c0f2d105b26729b5",
+ "chksum_sha256": "2fb9ea2a450340f7499f8a7962003196054e9f05d0d7ba691abe1ba9d695d64b",
"format": 1
},
{
- "name": "docs/amazon.aws.aws_caller_info_module.rst",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "73f829c28ed25b4c0909fbd6e5bb9f3cfd6d75737e4b90347f77d50ea6b3d009",
+ "name": "tests/integration/targets/setup_botocore_pip/defaults",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "docs/amazon.aws.ec2_group_info_module.rst",
+ "name": "tests/integration/targets/setup_botocore_pip/defaults/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "5a80116eb55ae573b578be4f1b028b13e638458ec3e11acc2e46c8dfb3d65fb6",
+ "chksum_sha256": "1b37de146db7afd2ca4d91c09819d242760c8bf8421f9c7a9d8ec63466e05f6d",
"format": 1
},
{
- "name": "docs/amazon.aws.aws_s3_module.rst",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "74bcc24f1d36a7c4193abb1fa0b44e150f912e0bcfebb14c61bfe6ba6d5647c4",
+ "name": "tests/integration/targets/setup_botocore_pip/handlers",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "docs/amazon.aws.cloudformation_module.rst",
+ "name": "tests/integration/targets/setup_botocore_pip/handlers/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "dbb88dd63627620646d44104eb5447637a2e69c8f8746864f5b56dacd59228ec",
+ "chksum_sha256": "b7ddacbb461ad683fce34906dc092a378c637e4cb58ad3cd7b14db4bcffa8d6f",
"format": 1
},
{
- "name": "docs/amazon.aws.ec2_vol_info_module.rst",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "77e59aae0b5940e94bff957dba07c8c0fdfddaecf7f38c145e1dc60fc80f5d0c",
+ "name": "tests/integration/targets/ec2_vpc_subnet",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "docs/amazon.aws.aws_service_ip_ranges_lookup.rst",
- "ftype": "file",
- "chksum_type": "sha256",
- "chksum_sha256": "d092d1b46d80e6055019e854daaabd36617dc661bc261d55424c55a1db1eebf4",
+ "name": "tests/integration/targets/ec2_vpc_subnet/tasks",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
"format": 1
},
{
- "name": "docs/amazon.aws.ec2_tag_info_module.rst",
+ "name": "tests/integration/targets/ec2_vpc_subnet/tasks/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "758fd967b3273db8d97861e0bc8d5c1a1ef308ba872a53930e71b2a1df145f69",
+ "chksum_sha256": "de531c6e79b36bf46dbef863e0d2220538cc4b623f72a308c534673a02a7c87f",
"format": 1
},
{
- "name": "docs/amazon.aws.ec2_vpc_subnet_info_module.rst",
+ "name": "tests/integration/targets/ec2_vpc_subnet/meta",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2_vpc_subnet/meta/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "79749d36e5272eea214c652acd124a3cc5b9d986c1d931bc0adacb6a07208032",
+ "chksum_sha256": "513205535169d91c98bcdbeab464e21787b6d9ae122c3eaebb1933591d615715",
"format": 1
},
{
- "name": "test-requirements.txt",
+ "name": "tests/integration/targets/ec2_vpc_subnet/aliases",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "ed65844940d8cc931552ae4bcc407322cae5d9f5b615753afc3cff0f731bf54a",
+ "chksum_sha256": "0cab1bb4ce6a89a690d07d5b692bd0ddb5ef2430b036bd10566995661e454496",
"format": 1
},
{
- "name": "CHANGELOG.rst",
+ "name": "tests/integration/targets/ec2_vpc_subnet/defaults",
+ "ftype": "dir",
+ "chksum_type": null,
+ "chksum_sha256": null,
+ "format": 1
+ },
+ {
+ "name": "tests/integration/targets/ec2_vpc_subnet/defaults/main.yml",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "12b7c641707604ccbb1360c41d763c2d4d9a1ea344fd12de0140fb7ef5693bab",
+ "chksum_sha256": "c1e9227cad9cc7c427615ec8e92e428d6f7a84c5620f70cfc8f12f8995306be0",
"format": 1
}
],
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/.github/BOTMETA.yml ansible-5.2.0/ansible_collections/amazon/aws/.github/BOTMETA.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/.github/BOTMETA.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/.github/BOTMETA.yml 2021-11-12 18:13:53.000000000 +0000
@@ -29,7 +29,7 @@
$modules/:
authors: wimnat
maintainers: $team_aws
- ignore: erydo nadirollo seiffert tedder
+ ignore: erydo nadirollo seiffert tedder wimnat
labels: modules
$modules/_aws_az_facts.py:
authors: Sodki
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/.github/ISSUE_TEMPLATE/bug_report.yml ansible-5.2.0/ansible_collections/amazon/aws/.github/ISSUE_TEMPLATE/bug_report.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/.github/ISSUE_TEMPLATE/bug_report.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/.github/ISSUE_TEMPLATE/bug_report.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,161 @@
+---
+name: Bug report
+description: Create a report to help us improve
+
+body:
+- type: markdown
+ attributes:
+ value: |
+ ⚠
+ Verify first that your issue is not [already reported on GitHub][issue search].
+ Where possible also test if the latest release and main branch are affected too.
+ *Complete **all** sections as described, this form is processed automatically.*
+
+ [issue search]: https://github.com/ansible-collections/amazon.aws/search?q=is%3Aissue&type=issues
+
+- type: textarea
+ attributes:
+ label: Summary
+ description: |
+ Explain the problem briefly below.
+ placeholder: >-
+ When I try to do X with the collection from the main branch on GitHub, Y
+ breaks in a way Z under the env E. Here are all the details I know
+ about this problem...
+ validations:
+ required: true
+
+- type: dropdown
+ attributes:
+ label: Issue Type
+ # FIXME: Once GitHub allows defining the default choice, update this
+ options:
+ - Bug Report
+ validations:
+ required: true
+
+- type: textarea
+ attributes:
+ # For smaller collections we could use a multi-select and hardcode the list
+ # May generate this list via GitHub action and walking files under https://github.com/ansible-collections/community.general/tree/main/plugins
+ # Select from list, filter as you type (`mysql` would only show the 3 mysql components)
+ # OR freeform - doesn't seem to be supported in adaptivecards
+ label: Component Name
+ description: >-
+ Write the short name of the module or plugin below,
+ *use your best guess if unsure*.
+ placeholder: ec2_instance, ec2_security_group
+ validations:
+ required: true
+
+- type: textarea
+ attributes:
+ label: Ansible Version
+ description: >-
+ Paste verbatim output from `ansible --version` between
+ triple backticks.
+ value: |
+ ```console (paste below)
+ $ ansible --version
+
+ ```
+ validations:
+ required: true
+
+- type: textarea
+ attributes:
+ label: Collection Versions
+ description: >-
+ Paste verbatim output from `ansible-galaxy collection list` between
+ triple backticks.
+ value: |
+ ```console (paste below)
+ $ ansible-galaxy collection list
+ ```
+ validations:
+ required: true
+
+- type: textarea
+ attributes:
+ label: AWS SDK versions
+ description: >-
+ The AWS modules depend heavily on the Amazon AWS SDKs which are regularly updated.
+ Paste verbatim output from `pip show boto boto3 botocore` between quotes
+ value: |
+ ```console (paste below)
+ $ pip show boto boto3 botocore
+ ```
+ validations:
+ required: true
+
+- type: textarea
+ attributes:
+ label: Configuration
+ description: >-
+ If this issue has an example piece of YAML that can help to reproduce this problem, please provide it.
+ This can be a piece of YAML from, e.g., an automation, script, scene or configuration.
+
+ Paste verbatim output from `ansible-config dump --only-changed` between quotes
+ value: |
+ ```console (paste below)
+ $ ansible-config dump --only-changed
+
+ ```
+
+- type: textarea
+ attributes:
+ label: OS / Environment
+ description: >-
+ Provide all relevant information below, e.g. target OS versions,
+ network device firmware, etc.
+ placeholder: RHEL 8, CentOS Stream etc.
+ validations:
+ required: false
+
+- type: textarea
+ attributes:
+ label: Steps to Reproduce
+ description: |
+ Describe exactly how to reproduce the problem, using a minimal test-case. It would *really* help us understand your problem if you could also paste any playbooks, configs and commands you used.
+
+ **HINT:** You can paste https://gist.github.com links for larger files.
+ value: |
+
+ ```yaml (paste below)
+
+ ```
+ validations:
+ required: true
+
+- type: textarea
+ attributes:
+ label: Expected Results
+ description: >-
+ Describe what you expected to happen when running the steps above.
+ placeholder: >-
+ I expected X to happen because I assumed Y.
+ that it did not.
+ validations:
+ required: true
+
+- type: textarea
+ attributes:
+ label: Actual Results
+ description: |
+ Describe what actually happened. If possible run with extra verbosity (`-vvvv`).
+
+ Paste verbatim command output between quotes.
+ value: |
+ ```console (paste below)
+
+ ```
+
+- type: checkboxes
+ attributes:
+ label: Code of Conduct
+ description: |
+ Read the [Ansible Code of Conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html?utm_medium=github&utm_source=issue_form--ansible-collections) first.
+ options:
+ - label: I agree to follow the Ansible Code of Conduct
+ required: true
+...
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/.github/ISSUE_TEMPLATE/ci_report.yml ansible-5.2.0/ansible_collections/amazon/aws/.github/ISSUE_TEMPLATE/ci_report.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/.github/ISSUE_TEMPLATE/ci_report.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/.github/ISSUE_TEMPLATE/ci_report.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,76 @@
+---
+name: CI Bug Report
+description: Create a report to help us improve our CI
+
+body:
+- type: markdown
+ attributes:
+ value: |
+ ⚠
+ Verify first that your issue is not [already reported on GitHub][issue search].
+ *Complete **all** sections as described, this form is processed automatically.*
+
+ [issue search]: https://github.com/ansible-collections/amazon.aws/search?q=is%3Aissue&type=issues
+
+- type: textarea
+ attributes:
+ label: Summary
+ description: |
+ Describe the new issue briefly below.
+ placeholder: >-
+ I opened a Pull Request and CI failed to run. I believe this is due to a problem with the CI rather than my code.
+ validations:
+ required: true
+
+- type: dropdown
+ attributes:
+ label: Issue Type
+ # FIXME: Once GitHub allows defining the default choice, update this
+ options:
+ - CI Bug Report
+ validations:
+ required: true
+
+- type: textarea
+ attributes:
+ label: CI Jobs
+ description: >-
+ Please provide a link to the failed CI tests.
+ placeholder: https://dashboard.zuul.ansible.com/t/ansible/buildset/be956faa49d84e43bc860d0cd3dc8503
+ validations:
+ required: false
+
+- type: textarea
+ attributes:
+ label: Pull Request
+ description: >-
+ Please provide a link to the Pull Request where the tests are failing
+ placeholder: https://github.com/ansible-collections/amazon.aws/runs/3040421733
+ validations:
+ required: false
+
+- type: textarea
+ attributes:
+ label: Additional Information
+ description: |
+ Please provide as much information as possible to help us understand the issue being reported.
+ Where possible, please include the specific errors that you're seeing.
+
+ **HINT:** You can paste https://gist.github.com links for larger files.
+ value: |
+
+ ```yaml (paste below)
+
+ ```
+ validations:
+ required: false
+
+- type: checkboxes
+ attributes:
+ label: Code of Conduct
+ description: |
+ Read the [Ansible Code of Conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html?utm_medium=github&utm_source=issue_form--ansible-collections) first.
+ options:
+ - label: I agree to follow the Ansible Code of Conduct
+ required: true
+...
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/.github/ISSUE_TEMPLATE/config.yml ansible-5.2.0/ansible_collections/amazon/aws/.github/ISSUE_TEMPLATE/config.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/.github/ISSUE_TEMPLATE/config.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/.github/ISSUE_TEMPLATE/config.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,27 @@
+---
+# Ref: https://help.github.com/en/github/building-a-strong-community/configuring-issue-templates-for-your-repository#configuring-the-template-chooser
+blank_issues_enabled: false # default: true
+contact_links:
+- name: Security bug report
+ url: https://docs.ansible.com/ansible-core/devel/community/reporting_bugs_and_features.html?utm_medium=github&utm_source=issue_template_chooser_ansible_collections
+ about: |
+ Please learn how to report security vulnerabilities here.
+
+ For all security related bugs, email security@ansible.com
+ instead of using this issue tracker and you will receive
+ a prompt response.
+
+ For more information, see
+ https://docs.ansible.com/ansible/latest/community/reporting_bugs_and_features.html
+- name: Ansible Code of Conduct
+ url: https://docs.ansible.com/ansible/latest/community/code_of_conduct.html?utm_medium=github&utm_source=issue_template_chooser_ansible_collections
+ about: Be nice to other members of the community.
+- name: Talk to the community
+ url: https://docs.ansible.com/ansible/latest/community/communication.html?utm_medium=github&utm_source=issue_template_chooser#mailing-list-information
+ about: Please ask and answer usage questions here
+- name: Working groups
+ url: https://github.com/ansible/community/wiki
+ about: Interested in improving a specific area? Become a part of a working group!
+- name: For Enterprise
+ url: https://www.ansible.com/products/engine?utm_medium=github&utm_source=issue_template_chooser_ansible_collections
+ about: Red Hat offers support for the Ansible Automation Platform
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/.github/ISSUE_TEMPLATE/documentation_report.yml ansible-5.2.0/ansible_collections/amazon/aws/.github/ISSUE_TEMPLATE/documentation_report.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/.github/ISSUE_TEMPLATE/documentation_report.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/.github/ISSUE_TEMPLATE/documentation_report.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,130 @@
+---
+name: Documentation Report
+description: Ask us about docs
+# NOTE: issue body is enabled to allow screenshots
+
+body:
+- type: markdown
+ attributes:
+ value: |
+ ⚠
+ Verify first that your issue is not [already reported on GitHub][issue search].
+ Where possible also test if the latest release and main branch are affected too.
+ *Complete **all** sections as described, this form is processed automatically.*
+
+ [issue search]: https://github.com/ansible-collections/amazon.aws/search?q=is%3Aissue&type=issues
+
+- type: textarea
+ attributes:
+ label: Summary
+ description: |
+ Explain the problem briefly below, add suggestions to wording or structure.
+
+ **HINT:** Did you know the documentation has an `Edit on GitHub` link on every page?
+ placeholder: >-
+ I was reading the Collection documentation of version X and I'm having
+ problems understanding Y. It would be very helpful if that got
+ rephrased as Z.
+ validations:
+ required: true
+
+- type: dropdown
+ attributes:
+ label: Issue Type
+ # FIXME: Once GitHub allows defining the default choice, update this
+ options:
+ - Documentation Report
+ validations:
+ required: true
+
+- type: textarea
+ attributes:
+ # For smaller collections we could use a multi-select and hardcode the list
+ # May generate this list via GitHub action and walking files under https://github.com/ansible-collections/community.general/tree/main/plugins
+ # Select from list, filter as you type (`mysql` would only show the 3 mysql components)
+ # OR freeform - doesn't seem to be supported in adaptivecards
+ label: Component Name
+ description: >-
+ Write the short name of the rst file, module, plugin or task below,
+ *use your best guess if unsure*.
+ placeholder: ec2_instance, ec2_security_group
+ validations:
+ required: true
+
+- type: textarea
+ attributes:
+ label: Ansible Version
+ description: >-
+ Paste verbatim output from `ansible --version` between
+ triple backticks.
+ value: |
+ ```console (paste below)
+ $ ansible --version
+
+ ```
+ validations:
+ required: false
+
+- type: textarea
+ attributes:
+ label: Collection Versions
+ description: >-
+ Paste verbatim output from `ansible-galaxy collection list` between
+ triple backticks.
+ value: |
+ ```console (paste below)
+ $ ansible-galaxy collection list
+ ```
+ validations:
+ required: false
+
+- type: textarea
+ attributes:
+ label: Configuration
+ description: >-
+ If this issue has an example piece of YAML that can help to reproduce this problem, please provide it.
+ This can be a piece of YAML from, e.g., an automation, script, scene or configuration.
+
+ Paste verbatim output from `ansible-config dump --only-changed` between quotes
+ value: |
+ ```console (paste below)
+ $ ansible-config dump --only-changed
+
+ ```
+ validations:
+ required: false
+
+- type: textarea
+ attributes:
+ label: OS / Environment
+ description: >-
+ Provide all relevant information below, e.g. OS version,
+ browser, etc.
+ placeholder: RHEL 8, Firefox etc.
+ validations:
+ required: false
+
+- type: textarea
+ attributes:
+ label: Additional Information
+ description: |
+ Describe how this improves the documentation, e.g. before/after situation or screenshots.
+
+ **Tip:** It's not possible to upload screenshots via this field directly, but you can use the last textarea in this form to attach them.
+
+ **HINT:** You can paste https://gist.github.com links for larger files.
+ placeholder: >-
+ When the improvement is applied, it makes it more straightforward
+ to understand X.
+ validations:
+ required: false
+
+- type: checkboxes
+ attributes:
+ label: Code of Conduct
+ description: |
+ Read the [Ansible Code of Conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html?utm_medium=github&utm_source=issue_form--ansible-collections) first.
+ options:
+ - label: I agree to follow the Ansible Code of Conduct
+ required: true
+...
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/.github/ISSUE_TEMPLATE/feature_request.yml ansible-5.2.0/ansible_collections/amazon/aws/.github/ISSUE_TEMPLATE/feature_request.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/.github/ISSUE_TEMPLATE/feature_request.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/.github/ISSUE_TEMPLATE/feature_request.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,74 @@
+---
+name: Feature request
+description: Suggest an idea for this project
+
+body:
+- type: markdown
+ attributes:
+ value: |
+ ⚠
+ Verify first that your issue is not [already reported on GitHub][issue search].
+ Where possible also test if the latest release and main branch are affected too.
+ *Complete **all** sections as described, this form is processed automatically.*
+
+ [issue search]: https://github.com/ansible-collections/amazon.aws/search?q=is%3Aissue&type=issues
+
+- type: textarea
+ attributes:
+ label: Summary
+ description: |
+ Describe the new feature/improvement briefly below.
+ placeholder: >-
+ I am trying to do X with the collection from the main branch on GitHub and
+ I think that implementing a feature Y would be very helpful for me and
+ every other user of amazon.aws because of Z.
+ validations:
+ required: true
+
+- type: dropdown
+ attributes:
+ label: Issue Type
+ # FIXME: Once GitHub allows defining the default choice, update this
+ options:
+ - Feature Idea
+ validations:
+ required: true
+
+- type: textarea
+ attributes:
+ # For smaller collections we could use a multi-select and hardcode the list
+ # May generate this list via GitHub action and walking files under https://github.com/ansible-collections/community.general/tree/main/plugins
+ # Select from list, filter as you type (`mysql` would only show the 3 mysql components)
+ # OR freeform - doesn't seem to be supported in adaptivecards
+ label: Component Name
+ description: >-
+ Write the short name of the module or plugin below,
+ *use your best guess if unsure*.
+ placeholder: ec2_instance, ec2_security_group
+ validations:
+ required: true
+
+- type: textarea
+ attributes:
+ label: Additional Information
+ description: |
+ Describe how the feature would be used, why it is needed and what it would solve.
+
+ **HINT:** You can paste https://gist.github.com links for larger files.
+ value: |
+
+ ```yaml (paste below)
+
+ ```
+ validations:
+ required: false
+
+- type: checkboxes
+ attributes:
+ label: Code of Conduct
+ description: |
+ Read the [Ansible Code of Conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html?utm_medium=github&utm_source=issue_form--ansible-collections) first.
+ options:
+ - label: I agree to follow the Ansible Code of Conduct
+ required: true
+...
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/MANIFEST.json ansible-5.2.0/ansible_collections/amazon/aws/MANIFEST.json
--- ansible-4.10.0/ansible_collections/amazon/aws/MANIFEST.json 2021-09-16 17:32:03.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/MANIFEST.json 2021-11-12 18:17:28.000000000 +0000
@@ -2,7 +2,7 @@
"collection_info": {
"namespace": "amazon",
"name": "aws",
- "version": "1.5.1",
+ "version": "2.1.0",
"authors": [
"Ansible (https://github.com/ansible)"
],
@@ -25,7 +25,7 @@
"name": "FILES.json",
"ftype": "file",
"chksum_type": "sha256",
- "chksum_sha256": "28f151a1d92d3cccbbeb914c7e3e87b9d868b8666c7480183add385bb27055aa",
+ "chksum_sha256": "856cfaae0cd417f5c3872c0636cf8a91c519e2d82f4af1ff939bd15073abe229",
"format": 1
},
"format": 1
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/meta/runtime.yml ansible-5.2.0/ansible_collections/amazon/aws/meta/runtime.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/meta/runtime.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/meta/runtime.yml 2021-11-12 18:13:53.000000000 +0000
@@ -1,120 +1,172 @@
----
requires_ansible: '>=2.9.10'
action_groups:
aws:
- - aws_s3
- - ec2
- - aws_secret
- - cloudfront_facts
- - iam
- - rds
- - ec2
- aws_az_facts
- - aws_caller_facts
- - cloudformation_facts
- - ec2_ami_facts
- - ec2_eni_facts
- - ec2_group_facts
- - ec2_snapshot_facts
- - ec2_vol_facts
- - ec2_vpc_dhcp_option_facts
- - ec2_vpc_net_facts
- - ec2_vpc_subnet_facts
- aws_az_info
+ - aws_caller_facts
- aws_caller_info
- aws_s3
+ - aws_s3
+ - aws_secret
- cloudformation
+ - cloudformation_facts
- cloudformation_info
+ - cloudfront_facts
+ - ec2
+ - ec2
- ec2
- ec2_ami
+ - ec2_ami_facts
- ec2_ami_info
- ec2_elb_lb
- ec2_eni
+ - ec2_eni_facts
- ec2_eni_info
- ec2_group
+ - ec2_group_facts
- ec2_group_info
+ - ec2_instance
+ - ec2_instance_facts
+ - ec2_instance_info
- ec2_key
- ec2_snapshot
+ - ec2_snapshot_facts
- ec2_snapshot_info
+ - ec2_spot_instance
+ - ec2_spot_instance_info
- ec2_tag
- ec2_tag_info
- ec2_vol
+ - ec2_vol_facts
- ec2_vol_info
- ec2_vpc_dhcp_option
+ - ec2_vpc_dhcp_option_facts
- ec2_vpc_dhcp_option_info
+ - ec2_vpc_endpoint
+ - ec2_vpc_endpoint_facts
+ - ec2_vpc_endpoint_info
+ - ec2_vpc_endpoint_service_info
+ - ec2_vpc_igw
+ - ec2_vpc_igw_facts
+ - ec2_vpc_igw_info
+ - ec2_vpc_nat_gateway
+ - ec2_vpc_nat_gateway_facts
+ - ec2_vpc_nat_gateway_info
- ec2_vpc_net
+ - ec2_vpc_net_facts
- ec2_vpc_net_info
+ - ec2_vpc_route_table
+ - ec2_vpc_route_table_facts
+ - ec2_vpc_route_table_info
- ec2_vpc_subnet
+ - ec2_vpc_subnet_facts
- ec2_vpc_subnet_info
+ - elb_classic_lb
+ - iam
+ - rds
- s3_bucket
-
plugin_routing:
modules:
aws_az_facts:
deprecation:
removal_date: 2022-06-01
warning_text: >-
- aws_az_facts was renamed in Ansible 2.9 to aws_az_info.
- Please update your tasks.
+ aws_az_facts was renamed in Ansible 2.9 to aws_az_info.
+ Please update your tasks.
aws_caller_facts:
deprecation:
removal_date: 2021-12-01
warning_text: >-
- aws_caller_facts was renamed in Ansible 2.9 to aws_caller_info.
- Please update your tasks.
+ aws_caller_facts was renamed in Ansible 2.9 to aws_caller_info.
+ Please update your tasks.
cloudformation_facts:
deprecation:
removal_date: 2021-12-01
warning_text: >-
- cloudformation_facts has been deprecated and will be removed.
- The cloudformation_info module returns the same information, but
- not as ansible_facts. See the module documentation for more
- information.
+ cloudformation_facts has been deprecated and will be removed.
+ The cloudformation_info module returns the same information, but
+ not as ansible_facts. See the module documentation for more
+ information.
+ ec2:
+ deprecation:
+ removal_version: 4.0.0
+ warning_text: >-
+ The ec2 module is based upon a deprecated version of the AWS SDKs
+ and is deprecated in favor of the ec2_instance module.
+ Please update your tasks.
ec2_ami_facts:
deprecation:
removal_date: 2021-12-01
warning_text: >-
- ec2_ami_facts was renamed in Ansible 2.9 to ec2_ami_info.
- Please update your tasks.
+ ec2_ami_facts was renamed in Ansible 2.9 to ec2_ami_info.
+ Please update your tasks.
+ ec2_elb_lb:
+ redirect: amazon.aws.elb_classic_lb
ec2_eni_facts:
deprecation:
removal_date: 2021-12-01
warning_text: >-
- ec2_eni_facts was renamed in Ansible 2.9 to ec2_eni_info.
- Please update your tasks.
+ ec2_eni_facts was renamed in Ansible 2.9 to ec2_eni_info.
+ Please update your tasks.
ec2_group_facts:
deprecation:
removal_date: 2021-12-01
warning_text: >-
- ec2_group_facts was renamed in Ansible 2.9 to ec2_group_info.
- Please update your tasks.
+ ec2_group_facts was renamed in Ansible 2.9 to ec2_group_info.
+ Please update your tasks.
ec2_snapshot_facts:
deprecation:
removal_date: 2021-12-01
warning_text: >-
- ec2_snapshot_facts was renamed in Ansible 2.9 to ec2_snapshot_info.
- Please update your tasks.
+ ec2_snapshot_facts was renamed in Ansible 2.9 to ec2_snapshot_info.
+ Please update your tasks.
ec2_vol_facts:
deprecation:
removal_date: 2021-12-01
warning_text: >-
- ec2_vol_facts was renamed in Ansible 2.9 to ec2_vol_info.
- Please update your tasks.
+ ec2_vol_facts was renamed in Ansible 2.9 to ec2_vol_info.
+ Please update your tasks.
ec2_vpc_dhcp_option_facts:
deprecation:
removal_date: 2021-12-01
warning_text: >-
- ec2_vpc_dhcp_option_facts was renamed in Ansible 2.9 to
- ec2_vpc_dhcp_option_info. Please update your tasks.
+ ec2_vpc_dhcp_option_facts was renamed in Ansible 2.9 to
+ ec2_vpc_dhcp_option_info. Please update your tasks.
ec2_vpc_net_facts:
deprecation:
removal_date: 2021-12-01
warning_text: >-
- ec2_vpc_net_facts was renamed in Ansible 2.9 to ec2_vpc_net_info.
- Please update your tasks.
+ ec2_vpc_net_facts was renamed in Ansible 2.9 to ec2_vpc_net_info.
+ Please update your tasks.
ec2_vpc_subnet_facts:
deprecation:
removal_date: 2021-12-01
warning_text: >-
- ec2_vpc_subnet_facts was renamed in Ansible 2.9 to
- ec2_vpc_subnet_info. Please update your tasks.
+ ec2_vpc_subnet_facts was renamed in Ansible 2.9 to
+ ec2_vpc_subnet_info. Please update your tasks.
+ ec2_vpc_endpoint_facts:
+ deprecation:
+ removal_date: 2021-12-01
+ warning_text: >-
+ ec2_vpc_endpoint_facts was renamed in Ansible 2.9 to
+ ec2_vpc_endpoint_info.
+ ec2_vpc_igw_facts:
+ deprecation:
+ removal_date: 2021-12-01
+ warning_text: >-
+ ec2_vpc_igw_facts was renamed in Ansible 2.9 to ec2_vpc_igw_info.
+ Please update your tasks.
+ ec2_vpc_route_table_facts:
+ deprecation:
+ removal_date: 2021-12-01
+ warning_text: >-
+ ec2_vpc_route_table_facts was renamed in Ansible 2.9 to
+ ec2_vpc_route_table_info.
+ Please update your tasks.
+ ec2_vpc_nat_gateway_facts:
+ deprecation:
+ removal_date: 2021-12-01
+ warning_text: >-
+ ec2_vpc_nat_gateway_facts was renamed in Ansible 2.9 to
+ ec2_vpc_nat_gateway_info.
+ Please update your tasks.
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/doc_fragments/aws.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/doc_fragments/aws.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/doc_fragments/aws.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/doc_fragments/aws.py 2021-11-12 18:13:53.000000000 +0000
@@ -21,14 +21,14 @@
default: 'no'
ec2_url:
description:
- - Url to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints).
+ - URL to use to connect to EC2 or your Eucalyptus cloud (by default the module will use EC2 endpoints).
Ignored for modules where region is required. Must be specified for all other modules if region is not used.
If not set then the value of the EC2_URL environment variable, if any, is used.
type: str
aliases: [ aws_endpoint_url, endpoint_url ]
aws_secret_key:
description:
- - AWS secret key. If not set then the value of the AWS_SECRET_ACCESS_KEY, AWS_SECRET_KEY, or EC2_SECRET_KEY environment variable is used.
+ - C(AWS secret key). If not set then the value of the C(AWS_SECRET_ACCESS_KEY), C(AWS_SECRET_KEY), or C(EC2_SECRET_KEY) environment variable is used.
- If I(profile) is set this parameter is ignored.
- Passing the I(aws_secret_key) and I(profile) options at the same time has been deprecated
and the options will be made mutually exclusive after 2022-06-01.
@@ -36,7 +36,7 @@
aliases: [ ec2_secret_key, secret_key ]
aws_access_key:
description:
- - AWS access key. If not set then the value of the AWS_ACCESS_KEY_ID, AWS_ACCESS_KEY or EC2_ACCESS_KEY environment variable is used.
+ - C(AWS access key). If not set then the value of the C(AWS_ACCESS_KEY_ID), C(AWS_ACCESS_KEY) or C(EC2_ACCESS_KEY) environment variable is used.
- If I(profile) is set this parameter is ignored.
- Passing the I(aws_access_key) and I(profile) options at the same time has been deprecated
and the options will be made mutually exclusive after 2022-06-01.
@@ -44,7 +44,7 @@
aliases: [ ec2_access_key, access_key ]
security_token:
description:
- - AWS STS security token. If not set then the value of the AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN environment variable is used.
+ - C(AWS STS security token). If not set then the value of the C(AWS_SECURITY_TOKEN) or C(EC2_SECURITY_TOKEN) environment variable is used.
- If I(profile) is set this parameter is ignored.
- Passing the I(security_token) and I(profile) options at the same time has been deprecated
and the options will be made mutually exclusive after 2022-06-01.
@@ -53,17 +53,17 @@
aws_ca_bundle:
description:
- "The location of a CA Bundle to use when validating SSL certificates."
- - "Only used for boto3 based modules."
+ - "Not used by boto 2 based modules."
- "Note: The CA Bundle is read 'module' side and may need to be explicitly copied from the controller if not run locally."
type: path
validate_certs:
description:
- - When set to "no", SSL certificates will not be validated for boto versions >= 2.6.0.
+ - When set to "no", SSL certificates will not be validated for
+ communication with the AWS APIs.
type: bool
default: yes
profile:
description:
- - Uses a boto profile. Only works with boto >= 2.24.0.
- Using I(profile) will override I(aws_access_key), I(aws_secret_key) and I(security_token)
and support for passing them at the same time as I(profile) has been deprecated.
- I(aws_access_key), I(aws_secret_key) and I(security_token) will be made mutually exclusive with I(profile) after 2022-06-01.
@@ -76,8 +76,9 @@
- Only the 'user_agent' key is used for boto modules. See U(http://boto.cloudhackers.com/en/latest/boto_config_tut.html#boto) for more boto configuration.
type: dict
requirements:
- - python >= 2.6
- - boto
+ - python >= 3.6
+ - boto3 >= 1.15.0
+ - botocore >= 1.18.0
notes:
- If parameters are not set within the module, the following
environment variables can be used in decreasing order of precedence
@@ -88,8 +89,16 @@
C(AWS_SECURITY_TOKEN) or C(EC2_SECURITY_TOKEN),
C(AWS_REGION) or C(EC2_REGION),
C(AWS_CA_BUNDLE)
- - Ansible uses the boto configuration file (typically ~/.boto) if no
- credentials are provided. See https://boto.readthedocs.io/en/latest/boto_config_tut.html
+ - When no credentials are explicitly provided the AWS SDK (boto3) that
+ Ansible uses will fall back to its configuration files (typically
+ C(~/.aws/credentials)).
+ See U(https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html)
+ for more information.
+ - Modules based on the original AWS SDK (boto) may read their default
+ configuration from different files.
+ See U(https://boto.readthedocs.io/en/latest/boto_config_tut.html) for more
+ information.
- C(AWS_REGION) or C(EC2_REGION) can be typically be used to specify the
- AWS region, when required, but this can also be configured in the boto config file
+ AWS region, when required, but this can also be defined in the
+ configuration files.
'''
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/inventory/aws_ec2.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/inventory/aws_ec2.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/inventory/aws_ec2.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/inventory/aws_ec2.py 2021-11-12 18:13:53.000000000 +0000
@@ -188,6 +188,23 @@
exclude_filters:
- tag:Name:
- 'my_first_tag'
+
+# Example using groups to assign the running hosts to a group based on vpc_id
+plugin: aws_ec2
+boto_profile: aws_profile
+# Populate inventory with instances in these regions
+regions:
+ - us-east-2
+filters:
+ # All instances with their state as `running`
+ instance-state-name: running
+keyed_groups:
+ - prefix: tag
+ key: tags
+compose:
+ ansible_host: public_dns_name
+groups:
+ libvpc: vpc_id == 'vpc-####'
'''
import re
@@ -205,7 +222,6 @@
from ansible.plugins.inventory import Cacheable
from ansible.plugins.inventory import Constructable
from ansible.template import Templar
-from ansible.utils.display import Display
from ansible_collections.amazon.aws.plugins.module_utils.ec2 import HAS_BOTO3
from ansible_collections.amazon.aws.plugins.module_utils.ec2 import ansible_dict_to_boto3_filter_list
@@ -471,7 +487,7 @@
for connection, region in self._boto3_conn(regions):
try:
# By default find non-terminated/terminating instances
- if not any([f['Name'] == 'instance-state-name' for f in filters]):
+ if not any(f['Name'] == 'instance-state-name' for f in filters):
filters.append({'Name': 'instance-state-name', 'Values': ['running', 'pending', 'stopping', 'stopped']})
paginator = connection.get_paginator('describe_instances')
reservations = paginator.paginate(Filters=filters).build_full_result().get('Reservations')
@@ -694,6 +710,14 @@
self.display.debug("aws_ec2 inventory filename must end with 'aws_ec2.yml' or 'aws_ec2.yaml'")
return False
+ def build_include_filters(self):
+ if self.get_option('filters'):
+ return [self.get_option('filters')] + self.get_option('include_filters')
+ elif self.get_option('include_filters'):
+ return self.get_option('include_filters')
+ else: # no filter
+ return [{}]
+
def parse(self, inventory, loader, path, cache=True):
super(InventoryModule, self).parse(inventory, loader, path)
@@ -710,7 +734,7 @@
# get user specifications
regions = self.get_option('regions')
- include_filters = [self.get_option('filters')] + self.get_option('include_filters')
+ include_filters = self.build_include_filters()
exclude_filters = self.get_option('exclude_filters')
hostnames = self.get_option('hostnames')
strict_permissions = self.get_option('strict_permissions')
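The merge behaviour of the new `build_include_filters` helper above can be sketched in isolation (the free-function signature is an illustration; in the plugin it is a method reading `self.get_option`):

```python
def build_include_filters(filters, include_filters):
    """Merge the legacy ``filters`` option with the newer ``include_filters``
    list, mirroring the aws_ec2 inventory-plugin helper."""
    if filters:
        # Legacy filters are applied first, then each include filter in turn.
        return [filters] + include_filters
    if include_filters:
        return include_filters
    # Neither option set: a single empty dict means "no filtering at all".
    return [{}]
```

Compared with the old inline expression, the helper avoids ever emitting an empty leading dict when `filters` is unset, which is why the `parse()` hunk below it switches to calling it.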
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/inventory/aws_rds.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/inventory/aws_rds.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/inventory/aws_rds.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/inventory/aws_rds.py 2021-11-12 18:13:53.000000000 +0000
@@ -39,6 +39,8 @@
iam_role_arn:
description: The ARN of the IAM role to assume to perform the inventory lookup. You should still provide
AWS credentials with enough privilege to perform the AssumeRole action.
+ note:
+ Ansible versions prior to 2.10 should use the fully qualified plugin name 'amazon.aws.aws_rds'.
extends_documentation_fragment:
- inventory_cache
- constructed
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/lookup/aws_account_attribute.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/lookup/aws_account_attribute.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/lookup/aws_account_attribute.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/lookup/aws_account_attribute.py 2021-11-12 18:13:53.000000000 +0000
@@ -8,8 +8,9 @@
author:
- Sloane Hertel
requirements:
+ - python >= 3.6
- boto3
- - botocore
+ - botocore >= 1.18.0
extends_documentation_fragment:
- amazon.aws.aws_credentials
- amazon.aws.aws_region
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/lookup/aws_secret.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/lookup/aws_secret.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/lookup/aws_secret.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/lookup/aws_secret.py 2021-11-12 18:13:53.000000000 +0000
@@ -9,8 +9,9 @@
author:
- Aaron Smith
requirements:
+ - python >= 3.6
- boto3
- - botocore>=1.10.0
+ - botocore >= 1.18.0
extends_documentation_fragment:
- amazon.aws.aws_credentials
- amazon.aws.aws_region
@@ -48,6 +49,16 @@
- No effect when used with I(bypath).
type: boolean
default: false
+ on_deleted:
+ description:
+ - Action to take if the secret has been marked for deletion.
+ - C(error) will raise a fatal error when the secret has been marked for deletion.
+ - C(skip) will silently ignore the deleted secret.
+ - C(warn) will skip over the deleted secret but issue a warning.
+ default: error
+ type: string
+ choices: ['error', 'skip', 'warn']
+ version_added: 2.0.0
on_missing:
description:
- Action to take if the secret is missing.
@@ -94,6 +105,14 @@
debug: msg="{{ lookup('amazon.aws.aws_secret', 'secrets.environments.production.password', nested=true) }}"
# The secret can be queried using the following syntax: `aws_secret_object_name.key1.key2.key3`.
# If an object is of the form `{"key1":{"key2":{"key3":1}}}` the query would return the value `1`.
+ - name: lookup secretsmanager secret in a specific region using a specified profile and the nested feature
+ debug: >
+ msg="{{ lookup('amazon.aws.aws_secret', 'secrets.environments.production.password', region=region, aws_profile=aws_profile,
+ aws_access_key=aws_access_key, aws_secret_key=aws_secret_key, nested=true) }}"
+ # The secret can be queried using the following syntax: `aws_secret_object_name.key1.key2.key3`.
+ # If an object is of the form `{"key1":{"key2":{"key3":1}}}` the query would return the value `1`.
+ # Region is the AWS region where the AWS secret is stored.
+ # aws_profile is the AWS profile to use; it must have access to the AWS secret.
"""
RETURN = r"""
@@ -116,6 +135,7 @@
from ansible.plugins.lookup import LookupBase
from ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_code
+from ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_message
from ansible_collections.amazon.aws.plugins.module_utils.ec2 import HAS_BOTO3
@@ -139,7 +159,7 @@
def run(self, terms, variables=None, boto_profile=None, aws_profile=None,
aws_secret_key=None, aws_access_key=None, aws_security_token=None, region=None,
bypath=False, nested=False, join=False, version_stage=None, version_id=None, on_missing='error',
- on_denied='error'):
+ on_denied='error', on_deleted='error'):
'''
:arg terms: a list of lookups to run.
e.g. ['parameter_name', 'parameter_name_too' ]
@@ -155,12 +175,17 @@
:kwarg version_stage: Stage of the secret version
:kwarg version_id: Version of the secret(s)
:kwarg on_missing: Action to take if the secret is missing
+ :kwarg on_deleted: Action to take if the secret is marked for deletion
:kwarg on_denied: Action to take if access to the secret is denied
:returns: A list of parameter values or a list of dictionaries if bypath=True.
'''
if not HAS_BOTO3:
raise AnsibleError('botocore and boto3 are required for aws_ssm lookup.')
+ deleted = on_deleted.lower()
+ if not isinstance(deleted, string_types) or deleted not in ['error', 'warn', 'skip']:
+ raise AnsibleError('"on_deleted" must be a string and one of "error", "warn" or "skip", not %s' % deleted)
+
missing = on_missing.lower()
if not isinstance(missing, string_types) or missing not in ['error', 'warn', 'skip']:
raise AnsibleError('"on_missing" must be a string and one of "error", "warn" or "skip", not %s' % missing)
@@ -208,7 +233,8 @@
for term in terms:
value = self.get_secret_value(term, client,
version_stage=version_stage, version_id=version_id,
- on_missing=missing, on_denied=denied, nested=nested)
+ on_missing=missing, on_denied=denied, on_deleted=deleted,
+ nested=nested)
if value:
secrets.append(value)
if join:
@@ -218,7 +244,7 @@
return secrets
- def get_secret_value(self, term, client, version_stage=None, version_id=None, on_missing=None, on_denied=None, nested=False):
+ def get_secret_value(self, term, client, version_stage=None, version_id=None, on_missing=None, on_denied=None, on_deleted=None, nested=False):
params = {}
params['SecretId'] = term
if version_id:
@@ -249,7 +275,12 @@
return str(ret_val)
else:
return response['SecretString']
- except is_boto3_error_code('ResourceNotFoundException'):
+ except is_boto3_error_message('marked for deletion'):
+ if on_deleted == 'error':
+ raise AnsibleError("Failed to find secret %s (marked for deletion)" % term)
+ elif on_deleted == 'warn':
+ self._display.warning('Skipping, did not find secret (marked for deletion) %s' % term)
+ except is_boto3_error_code('ResourceNotFoundException'): # pylint: disable=duplicate-except
if on_missing == 'error':
raise AnsibleError("Failed to find secret %s (ResourceNotFound)" % term)
elif on_missing == 'warn':
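The `except is_boto3_error_message('marked for deletion'):` pattern above works because the helper returns an exception class to match against. A minimal self-contained sketch of that idea, with a hypothetical stand-in for botocore's `ClientError` (the real helper lives in `module_utils.core`):

```python
import sys

class ClientError(Exception):
    """Hypothetical stand-in for botocore.exceptions.ClientError."""
    def __init__(self, response):
        super().__init__(response['Error']['Message'])
        self.response = response

class _NeverRaised(Exception):
    """No code ever raises this, so an except clause using it matches nothing."""

def is_boto3_error_message(msg, e=None):
    # Called inside an ``except`` expression with no explicit exception,
    # so pick up the in-flight exception from the interpreter state.
    if e is None:
        e = sys.exc_info()[1]
    if isinstance(e, ClientError) and msg in e.response['Error']['Message']:
        return ClientError
    return _NeverRaised

def classify(message):
    """Demonstrate the lookup's dispatch: match on the message first,
    then fall through to a generic ClientError handler."""
    try:
        raise ClientError({'Error': {'Code': 'InvalidRequestException',
                                     'Message': message}})
    except is_boto3_error_message('marked for deletion'):
        return 'deleted'
    except ClientError:
        return 'other'
```

This is why the following `except is_boto3_error_code(...)` clause needs the `# pylint: disable=duplicate-except` marker: both clauses name `ClientError`-derived matchers, and only the returned class decides which one fires.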
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/lookup/aws_service_ip_ranges.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/lookup/aws_service_ip_ranges.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/lookup/aws_service_ip_ranges.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/lookup/aws_service_ip_ranges.py 2021-11-12 18:13:53.000000000 +0000
@@ -19,6 +19,9 @@
description: 'The service to filter ranges by. Options: EC2, S3, CLOUDFRONT, CODEBUILD, ROUTE53, ROUTE53_HEALTHCHECKS'
region:
description: 'The AWS region to narrow the ranges to. Examples: us-east-1, eu-west-2, ap-southeast-1'
+ ipv6_prefixes:
+ description: 'When I(ipv6_prefixes=True) the lookup will return IPv6 prefixes instead of IPv4 prefixes.'
+ version_added: 2.1.0
'''
EXAMPLES = """
@@ -40,7 +43,6 @@
description: comma-separated list of CIDR ranges
"""
-
import json
from ansible.errors import AnsibleError
@@ -55,9 +57,16 @@
class LookupModule(LookupBase):
def run(self, terms, variables, **kwargs):
+ if "ipv6_prefixes" in kwargs and kwargs["ipv6_prefixes"]:
+ prefixes_label = "ipv6_prefixes"
+ ip_prefix_label = "ipv6_prefix"
+ else:
+ prefixes_label = "prefixes"
+ ip_prefix_label = "ip_prefix"
+
try:
resp = open_url('https://ip-ranges.amazonaws.com/ip-ranges.json')
- amazon_response = json.load(resp)['prefixes']
+ amazon_response = json.load(resp)[prefixes_label]
except getattr(json.decoder, 'JSONDecodeError', ValueError) as e:
# on Python 3+, json.decoder.JSONDecodeError is raised for bad
# JSON. On 2.x it's a ValueError
@@ -77,5 +86,5 @@
if 'service' in kwargs:
service = str.upper(kwargs['service'])
amazon_response = (item for item in amazon_response if item['service'] == service)
-
- return [item['ip_prefix'] for item in amazon_response]
+ iprange = [item[ip_prefix_label] for item in amazon_response]
+ return iprange
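The label switch above selects between the `prefixes`/`ip_prefix` and `ipv6_prefixes`/`ipv6_prefix` keys of Amazon's published `ip-ranges.json`. A standalone sketch of the filtering, using a small inline fixture instead of fetching the real document (the fixture values are illustrative):

```python
def service_ip_ranges(data, service=None, region=None, ipv6=False):
    """Filter a parsed ip-ranges.json dict the way the lookup does."""
    prefixes_label = 'ipv6_prefixes' if ipv6 else 'prefixes'
    ip_prefix_label = 'ipv6_prefix' if ipv6 else 'ip_prefix'
    items = data[prefixes_label]
    if region:
        items = (i for i in items if i['region'] == region)
    if service:
        # The lookup upper-cases the service name before comparing.
        items = (i for i in items if i['service'] == service.upper())
    return [i[ip_prefix_label] for i in items]

# Illustrative fixture mimicking the shape of ip-ranges.json:
data = {
    'prefixes': [
        {'ip_prefix': '3.5.140.0/22', 'region': 'ap-northeast-2', 'service': 'S3'},
        {'ip_prefix': '13.34.37.64/27', 'region': 'us-east-1', 'service': 'EC2'},
    ],
    'ipv6_prefixes': [
        {'ipv6_prefix': '2600:1f14::/35', 'region': 'us-west-2', 'service': 'EC2'},
    ],
}
```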
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/lookup/aws_ssm.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/lookup/aws_ssm.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/lookup/aws_ssm.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/lookup/aws_ssm.py 2021-11-12 18:13:53.000000000 +0000
@@ -14,8 +14,9 @@
- Marat Bakeev
- Michael De La Rue
requirements:
+ - python >= 3.6
- boto3
- - botocore
+ - botocore >= 1.18.0
short_description: Get the value for a SSM parameter or all parameters under a path.
description:
- Get the value for an Amazon Simple Systems Manager parameter or a hierarchy of parameters.
@@ -53,6 +54,26 @@
description: Indicates whether to return the name only without path if using a parameter hierarchy.
default: false
type: boolean
+ on_missing:
+ description:
+ - Action to take if the SSM parameter is missing.
+ - C(error) will raise a fatal error when the SSM parameter is missing.
+ - C(skip) will silently ignore the missing SSM parameter.
+ - C(warn) will skip over the missing SSM parameter but issue a warning.
+ default: error
+ type: string
+ choices: ['error', 'skip', 'warn']
+ version_added: 2.0.0
+ on_denied:
+ description:
+ - Action to take if access to the SSM parameter is denied.
+ - C(error) will raise a fatal error when access to the SSM parameter is denied.
+ - C(skip) will silently ignore the denied SSM parameter.
+ - C(warn) will skip over the denied SSM parameter but issue a warning.
+ default: error
+ type: string
+ choices: ['error', 'skip', 'warn']
+ version_added: 2.0.0
'''
EXAMPLES = '''
@@ -103,6 +124,11 @@
debug: msg='Path contains {{ item }}'
loop: '{{ lookup("aws_ssm", "/demo/", "/demo1/", bypath=True)}}'
+- name: lookup ssm parameter and fail if missing
+  debug: msg="{{ lookup('aws_ssm', 'missing-parameter', on_missing='error') }}"
+
+- name: lookup ssm parameter warn if access is denied
+  debug: msg="{{ lookup('aws_ssm', 'missing-parameter', on_denied='warn') }}"
'''
try:
@@ -115,9 +141,11 @@
from ansible.module_utils._text import to_native
from ansible.plugins.lookup import LookupBase
from ansible.utils.display import Display
+from ansible.module_utils.six import string_types
from ansible_collections.amazon.aws.plugins.module_utils.ec2 import HAS_BOTO3
from ansible_collections.amazon.aws.plugins.module_utils.ec2 import boto3_tag_list_to_ansible_dict
+from ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_code
display = Display()
@@ -145,7 +173,8 @@
class LookupModule(LookupBase):
def run(self, terms, variables=None, boto_profile=None, aws_profile=None,
aws_secret_key=None, aws_access_key=None, aws_security_token=None, region=None,
- bypath=False, shortnames=False, recursive=False, decrypt=True):
+ bypath=False, shortnames=False, recursive=False, decrypt=True, on_missing="skip",
+ on_denied="skip"):
'''
:arg terms: a list of lookups to run.
e.g. ['parameter_name', 'parameter_name_too' ]
@@ -157,14 +186,21 @@
:kwarg region: AWS region in which to do the lookup
:kwarg bypath: Set to True to do a lookup of variables under a path
:kwarg recursive: Set to True to recurse below the path (requires bypath=True)
+ :kwarg on_missing: Action to take if the SSM parameter is missing
+ :kwarg on_denied: Action to take if access to the SSM parameter is denied
:returns: A list of parameter values or a list of dictionaries if bypath=True.
'''
if not HAS_BOTO3:
raise AnsibleError('botocore and boto3 are required for aws_ssm lookup.')
+ # validate arguments 'on_missing' and 'on_denied'
+ if on_missing is not None and (not isinstance(on_missing, string_types) or on_missing.lower() not in ['error', 'warn', 'skip']):
+ raise AnsibleError('"on_missing" must be a string and one of "error", "warn" or "skip", not %s' % on_missing)
+ if on_denied is not None and (not isinstance(on_denied, string_types) or on_denied.lower() not in ['error', 'warn', 'skip']):
+ raise AnsibleError('"on_denied" must be a string and one of "error", "warn" or "skip", not %s' % on_denied)
+
ret = []
- response = {}
ssm_dict = {}
credentials = {}
@@ -213,21 +249,26 @@
# Lookup by parameter name - always returns a list with one or no entry.
else:
display.vvv("AWS_ssm name lookup term: %s" % terms)
- ssm_dict["Names"] = terms
- try:
- response = client.get_parameters(**ssm_dict)
- except botocore.exceptions.ClientError as e:
- raise AnsibleError("SSM lookup exception: {0}".format(to_native(e)))
- params = boto3_tag_list_to_ansible_dict(response['Parameters'], tag_name_key_name="Name",
- tag_value_key_name="Value")
- for i in terms:
- if i.split(':', 1)[0] in params:
- ret.append(params[i])
- elif i in response['InvalidParameters']:
- ret.append(None)
- else:
- raise AnsibleError("Ansible internal error: aws_ssm lookup failed to understand boto3 return value: {0}".format(str(response)))
- return ret
-
+ for term in terms:
+ ret.append(self.get_parameter_value(client, ssm_dict, term, on_missing.lower(), on_denied.lower()))
display.vvvv("AWS_ssm path lookup returning: %s " % str(ret))
return ret
+
+ def get_parameter_value(self, client, ssm_dict, term, on_missing, on_denied):
+ ssm_dict["Name"] = term
+ try:
+ response = client.get_parameter(**ssm_dict)
+ return response['Parameter']['Value']
+ except is_boto3_error_code('ParameterNotFound'):
+ if on_missing == 'error':
+ raise AnsibleError("Failed to find SSM parameter %s (ParameterNotFound)" % term)
+ elif on_missing == 'warn':
+ self._display.warning('Skipping, did not find SSM parameter %s' % term)
+ except is_boto3_error_code('AccessDeniedException'): # pylint: disable=duplicate-except
+ if on_denied == 'error':
+ raise AnsibleError("Failed to access SSM parameter %s (AccessDenied)" % term)
+ elif on_denied == 'warn':
+ self._display.warning('Skipping, access denied for SSM parameter %s' % term)
+ except botocore.exceptions.ClientError as e: # pylint: disable=duplicate-except
+ raise AnsibleError("SSM lookup exception: {0}".format(to_native(e)))
+ return None
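The identical `on_missing`/`on_denied` argument checks in `run()` above could be factored as one small helper; the standalone function below is an illustration of that validation, not the collection's API:

```python
def check_on_option(name, value):
    """Validate an on_missing/on_denied-style option and normalise its case,
    mirroring the checks added to the aws_ssm lookup."""
    if not isinstance(value, str) or value.lower() not in ('error', 'warn', 'skip'):
        raise ValueError(
            '"%s" must be a string and one of "error", "warn" or "skip", not %s'
            % (name, value))
    return value.lower()
```

Note the lookup defaults these keyword arguments to `"skip"` while the documented option default is `error`; the documented default applies when the option is set via the plugin configuration rather than passed positionally.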
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/aws_az_facts.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/aws_az_facts.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/aws_az_facts.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/aws_az_facts.py 2021-11-12 18:13:53.000000000 +0000
@@ -29,8 +29,6 @@
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
-
-requirements: [botocore, boto3]
'''
EXAMPLES = '''
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/aws_az_info.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/aws_az_info.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/aws_az_info.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/aws_az_info.py 2021-11-12 18:13:53.000000000 +0000
@@ -29,8 +29,6 @@
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
-
-requirements: [botocore, boto3]
'''
EXAMPLES = '''
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/aws_caller_facts.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/aws_caller_facts.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/aws_caller_facts.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/aws_caller_facts.py 2021-11-12 18:13:53.000000000 +0000
@@ -20,7 +20,6 @@
- Ed Costello (@orthanc)
- Stijn Dubrul (@sdubrul)
-requirements: [ 'botocore', 'boto3' ]
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/aws_caller_info.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/aws_caller_info.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/aws_caller_info.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/aws_caller_info.py 2021-11-12 18:13:53.000000000 +0000
@@ -20,7 +20,6 @@
- Ed Costello (@orthanc)
- Stijn Dubrul (@sdubrul)
-requirements: [ 'botocore', 'boto3' ]
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/aws_s3.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/aws_s3.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/aws_s3.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/aws_s3.py 2021-11-12 18:13:53.000000000 +0000
@@ -13,8 +13,8 @@
short_description: manage objects in S3.
description:
- This module allows the user to manage S3 buckets and the objects within them. Includes support for creating and
- deleting both objects and buckets, retrieving objects as files or strings and generating download links.
- This module has a dependency on boto3 and botocore.
+ deleting both objects and buckets, retrieving objects as files or strings, generating download links and
+ copying objects that are already stored in Amazon S3.
options:
bucket:
description:
@@ -23,11 +23,11 @@
type: str
dest:
description:
- - The destination file path when downloading an object/key with a GET operation.
+ - The destination file path when downloading an object/key with a C(GET) operation.
type: path
encrypt:
description:
- - When set for PUT mode, asks for server-side encryption.
+ - When set for PUT/COPY mode, asks for server-side encryption.
default: true
type: bool
encryption_mode:
@@ -46,7 +46,7 @@
type: int
headers:
description:
- - Custom headers for PUT operation, as a dictionary of C(key=value) and C(key=value,key=value).
+ - Custom headers for C(PUT) operation, as a dictionary of C(key=value) and C(key=value,key=value).
type: dict
marker:
description:
@@ -59,15 +59,22 @@
type: int
metadata:
description:
- - Metadata for PUT operation, as a dictionary of C(key=value) and C(key=value,key=value).
+ - Metadata for PUT/COPY operation, as a dictionary of C(key=value) and C(key=value,key=value).
type: dict
mode:
description:
- - Switches the module behaviour between C(put) (upload), C(get) (download), C(geturl) (return download url, Ansible 1.3+),
- C(getstr) (download object as string (1.3+)), C(list) (list keys, Ansible 2.0+), C(create) (bucket), C(delete) (bucket),
- and delobj (delete object, Ansible 2.0+).
+ - Switches the module behaviour between
+ - 'C(PUT): upload'
+ - 'C(GET): download'
+ - 'C(geturl): return download URL'
+ - 'C(getstr): download object as string'
+ - 'C(list): list keys'
+ - 'C(create): create bucket'
+ - 'C(delete): delete bucket'
+ - 'C(delobj): delete object'
+ - 'C(copy): copy object that is already stored in another bucket'
required: true
- choices: ['get', 'put', 'delete', 'create', 'geturl', 'getstr', 'delobj', 'list']
+ choices: ['get', 'put', 'delete', 'create', 'geturl', 'getstr', 'delobj', 'list', 'copy']
type: str
object:
description:
@@ -78,7 +85,8 @@
- This option lets the user set the canned permissions on the object/bucket that are created.
The permissions that can be set are C(private), C(public-read), C(public-read-write), C(authenticated-read) for a bucket or
C(private), C(public-read), C(public-read-write), C(aws-exec-read), C(authenticated-read), C(bucket-owner-read),
- C(bucket-owner-full-control) for an object. Multiple permissions can be specified as a list.
+ C(bucket-owner-full-control) for an object. Multiple permissions can be specified as a list, although only the first one
+ will be used during the initial upload of the file.
default: ['private']
type: list
elements: str
@@ -93,7 +101,7 @@
type: str
overwrite:
description:
- - Force overwrite either locally on the filesystem or remotely with the object/key. Used with PUT and GET operations.
+ - Force overwrite either locally on the filesystem or remotely with the object/key. Used with C(PUT) and C(GET) operations.
- Must be a Boolean, C(always), C(never) or C(different).
- C(true) is the same as C(always).
- C(false) is equal to C(never).
@@ -117,7 +125,6 @@
dualstack:
description:
- Enables Amazon S3 Dual-Stack Endpoints, allowing S3 communications using both IPv4 and IPv6.
- - Requires at least botocore version 1.4.45.
type: bool
default: false
rgw:
@@ -127,28 +134,28 @@
type: bool
src:
description:
- - The source file path when performing a PUT operation.
- - Either I(content), I(content_base64) or I(src) must be specified for a PUT operation. Ignored otherwise.
+ - The source file path when performing a C(PUT) operation.
+ - Either I(content), I(content_base64) or I(src) must be specified for a C(PUT) operation. Ignored otherwise.
type: path
content:
description:
- - The content to PUT into an object.
+ - The content to C(PUT) into an object.
- The parameter value will be treated as a string and converted to UTF-8 before sending it to S3.
To send binary data, use the I(content_base64) parameter instead.
- - Either I(content), I(content_base64) or I(src) must be specified for a PUT operation. Ignored otherwise.
+ - Either I(content), I(content_base64) or I(src) must be specified for a C(PUT) operation. Ignored otherwise.
version_added: "1.3.0"
type: str
content_base64:
description:
- - The base64-encoded binary data to PUT into an object.
+ - The base64-encoded binary data to C(PUT) into an object.
- Use this if you need to put raw binary data, and don't forget to encode in base64.
- - Either I(content), I(content_base64) or I(src) must be specified for a PUT operation. Ignored otherwise.
+ - Either I(content), I(content_base64) or I(src) must be specified for a C(PUT) operation. Ignored otherwise.
version_added: "1.3.0"
type: str
ignore_nonexistent_bucket:
description:
- "Overrides initial bucket lookups in case bucket or iam policies are restrictive. Example: a user may have the
- GetObject permission but no other permissions. In this case using the option mode: get will fail without specifying
+ C(GetObject) permission but no other permissions. In this case using the option mode: get will fail without specifying
I(ignore_nonexistent_bucket=true)."
type: bool
default: false
@@ -156,10 +163,43 @@
description:
- KMS key id to use when encrypting objects using I(encrypting=aws:kms). Ignored if I(encryption) is not C(aws:kms).
type: str
-requirements: [ "boto3", "botocore" ]
+ tags:
+ description:
+ - Tags dict to apply to the S3 object.
+ type: dict
+ version_added: 2.0.0
+ purge_tags:
+ description:
+ - Whether or not to remove tags assigned to the S3 object if not specified in the playbook.
+ - To remove all tags set I(tags) to an empty dictionary in conjunction with this.
+ type: bool
+ default: True
+ version_added: 2.0.0
+ copy_src:
+ description:
+ - The source details of the object to copy.
+ - Required if I(mode) is C(copy).
+ type: dict
+ version_added: 2.0.0
+ suboptions:
+ bucket:
+ type: str
+ description:
+ - The name of the source bucket.
+ required: true
+ object:
+ type: str
+ description:
+ - Key name of the source object.
+ required: true
+ version_id:
+ type: str
+ description:
+ - Version ID of the source object.
author:
- "Lester Wade (@lwade)"
- "Sloane Hertel (@s-hertel)"
+ - "Alina Buzachis (@linabuzachis)"
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
@@ -265,6 +305,15 @@
bucket: mybucket
object: /my/desired/key.txt
mode: delobj
+
+- name: Copy an object already stored in another bucket
+ amazon.aws.aws_s3:
+ bucket: mybucket
+ object: /my/desired/key.txt
+ mode: copy
+ copy_src:
+ bucket: srcbucket
+ object: /source/key.txt
'''
RETURN = '''
@@ -304,6 +353,7 @@
import io
from ssl import SSLError
import base64
+import time
try:
import botocore
@@ -320,9 +370,12 @@
from ..module_utils.ec2 import AWSRetry
from ..module_utils.ec2 import boto3_conn
from ..module_utils.ec2 import get_aws_connection_info
+from ..module_utils.ec2 import ansible_dict_to_boto3_tag_list
+from ..module_utils.ec2 import boto3_tag_list_to_ansible_dict
from ..module_utils.s3 import HAS_MD5
from ..module_utils.s3 import calculate_etag
from ..module_utils.s3 import calculate_etag_content
+from ..module_utils.s3 import validate_bucket_name
IGNORE_S3_DROP_IN_EXCEPTIONS = ['XNotImplemented', 'NotImplemented']
@@ -469,7 +522,7 @@
module.fail_json_aws(e, msg="Failed while trying to delete %s." % obj)
-def create_dirkey(module, s3, bucket, obj, encrypt):
+def create_dirkey(module, s3, bucket, obj, encrypt, expiry):
if module.check_mode:
module.exit_json(msg="PUT operation skipped - running in check mode", changed=True)
try:
@@ -486,7 +539,20 @@
module.warn("PutObjectAcl is not implemented by your storage provider. Set the permissions parameters to the empty list to avoid this warning")
except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e: # pylint: disable=duplicate-except
module.fail_json_aws(e, msg="Failed while creating object %s." % obj)
- module.exit_json(msg="Virtual directory %s created in bucket %s" % (obj, bucket), changed=True)
+
+ # Tags
+ tags, changed = ensure_tags(s3, module, bucket, obj)
+
+ try:
+ url = s3.generate_presigned_url(ClientMethod='put_object',
+ Params={'Bucket': bucket, 'Key': obj},
+ ExpiresIn=expiry)
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg="Unable to generate presigned URL")
+
+ url = put_download_url(module, s3, bucket, obj, expiry)
+
+ module.exit_json(msg="Virtual directory %s created in bucket %s" % (obj, bucket), url=url, tags=tags, changed=True)
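Both `create_dirkey` and the PUT path now return a presigned URL via `put_download_url`. The call shape can be sketched as a pure parameter builder (bucket and key names here are illustrative, not from the module), with the actual boto3 call left as a comment:

```python
def presign_put_params(bucket, obj, expiry):
    """Arguments handed to generate_presigned_url for a PUT, as in put_download_url."""
    return {
        "ClientMethod": "put_object",
        "Params": {"Bucket": bucket, "Key": obj},
        "ExpiresIn": expiry,
    }

# With a real client: url = s3.generate_presigned_url(**presign_put_params("mybucket", "my/key.txt", 600))
```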
def path_check(path):
@@ -531,6 +597,13 @@
else:
extra['Metadata'][option] = metadata[option]
+ if module.params.get('permission'):
+ permissions = module.params['permission']
+ if isinstance(permissions, str):
+ extra['ACL'] = permissions
+ elif isinstance(permissions, list):
+ extra['ACL'] = permissions[0]
+
if 'ContentType' not in extra:
content_type = None
if src is not None:
@@ -554,13 +627,13 @@
module.warn("PutObjectAcl is not implemented by your storage provider. Set the permission parameters to the empty list to avoid this warning")
except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e: # pylint: disable=duplicate-except
module.fail_json_aws(e, msg="Unable to set object ACL")
- try:
- url = s3.generate_presigned_url(ClientMethod='put_object',
- Params={'Bucket': bucket, 'Key': obj},
- ExpiresIn=expiry)
- except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
- module.fail_json_aws(e, msg="Unable to generate presigned URL")
- module.exit_json(msg="PUT operation complete", url=url, changed=True)
+
+ # Tags
+ tags, changed = ensure_tags(s3, module, bucket, obj)
+
+ url = put_download_url(module, s3, bucket, obj, expiry)
+
+ module.exit_json(msg="PUT operation complete", url=url, tags=tags, changed=True)
def download_s3file(module, s3, bucket, obj, dest, retries, version=None):
@@ -618,16 +691,72 @@
module.fail_json_aws(e, msg="Failed while getting contents of object %s as a string." % obj)
-def get_download_url(module, s3, bucket, obj, expiry, changed=True):
+def get_download_url(module, s3, bucket, obj, expiry, tags=None, changed=True):
try:
url = s3.generate_presigned_url(ClientMethod='get_object',
Params={'Bucket': bucket, 'Key': obj},
ExpiresIn=expiry)
- module.exit_json(msg="Download url:", url=url, expiry=expiry, changed=changed)
+ module.exit_json(msg="Download url:", url=url, tags=tags, expiry=expiry, changed=changed)
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json_aws(e, msg="Failed while getting download url.")
+def put_download_url(module, s3, bucket, obj, expiry):
+ try:
+ url = s3.generate_presigned_url(ClientMethod='put_object',
+ Params={'Bucket': bucket, 'Key': obj},
+ ExpiresIn=expiry)
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg="Unable to generate presigned URL")
+ return url
+
+
+def copy_object_to_bucket(module, s3, bucket, obj, encrypt, metadata, validate, d_etag):
+ if module.check_mode:
+ module.exit_json(msg="COPY operation skipped - running in check mode", changed=True)
+ try:
+ params = {'Bucket': bucket, 'Key': obj}
+ bucketsrc = {'Bucket': module.params['copy_src'].get('bucket'), 'Key': module.params['copy_src'].get('object')}
+ version = None
+ if module.params['copy_src'].get('version_id') is not None:
+ version = module.params['copy_src'].get('version_id')
+ bucketsrc.update({'VersionId': version})
+ keyrtn = key_check(module, s3, bucketsrc['Bucket'], bucketsrc['Key'], version=version, validate=validate)
+ if keyrtn:
+ s_etag = get_etag(s3, bucketsrc['Bucket'], bucketsrc['Key'], version=version)
+ if s_etag == d_etag:
+ # Tags
+ tags, changed = ensure_tags(s3, module, bucket, obj)
+ if not changed:
+ module.exit_json(msg="ETag from source and destination are the same", changed=False)
+ else:
+ params.update({'CopySource': bucketsrc})
+ if encrypt:
+ params['ServerSideEncryption'] = module.params['encryption_mode']
+ if module.params['encryption_kms_key_id'] and module.params['encryption_mode'] == 'aws:kms':
+ params['SSEKMSKeyId'] = module.params['encryption_kms_key_id']
+ if metadata:
+ params['Metadata'] = {}
+ # determine object metadata and extra arguments
+ for option in metadata:
+ extra_args_option = option_in_extra_args(option)
+ if extra_args_option is not None:
+ params[extra_args_option] = metadata[option]
+ else:
+ params['Metadata'][option] = metadata[option]
+
+ copy_result = s3.copy_object(**params)
+ for acl in module.params.get('permission'):
+ s3.put_object_acl(ACL=acl, Bucket=bucket, Key=obj)
+ # Tags
+ tags, changed = ensure_tags(s3, module, bucket, obj)
+ except is_boto3_error_code(IGNORE_S3_DROP_IN_EXCEPTIONS):
+ module.warn("PutObjectAcl is not implemented by your storage provider. Set the permissions parameters to the empty list to avoid this warning")
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e: # pylint: disable=duplicate-except
+ module.fail_json_aws(e, msg="Failed while copying object %s from bucket %s." % (obj, module.params['copy_src'].get('Bucket')))
+ module.exit_json(msg="Object copied from bucket %s to bucket %s." % (bucketsrc['Bucket'], bucket), tags=tags, changed=True)
+
+
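`copy_object_to_bucket` assembles a `CopySource` dict (optionally pinned to a `VersionId`) plus encryption settings before calling `s3.copy_object(**params)`. A minimal sketch of that parameter assembly, with the helper name and example values being illustrative only:

```python
def build_copy_params(dest_bucket, dest_key, src_bucket, src_key, version_id=None, sse=None):
    """Build the kwargs for s3.copy_object, mirroring copy_object_to_bucket's assembly."""
    source = {"Bucket": src_bucket, "Key": src_key}
    if version_id:
        source["VersionId"] = version_id  # copy a specific version of the source object
    params = {"Bucket": dest_bucket, "Key": dest_key, "CopySource": source}
    if sse:
        params["ServerSideEncryption"] = sse  # e.g. 'aws:kms' when encrypt is requested
    return params

# With a real client: s3.copy_object(**build_copy_params("dst", "key.txt", "src", "key.txt"))
```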
def is_fakes3(s3_url):
""" Return True if s3_url has scheme fakes3:// """
if s3_url is not None:
@@ -669,6 +798,80 @@
return boto3_conn(**params)
+def get_current_object_tags_dict(s3, bucket, obj, version=None):
+ try:
+ if version:
+ current_tags = s3.get_object_tagging(Bucket=bucket, Key=obj, VersionId=version).get('TagSet')
+ else:
+ current_tags = s3.get_object_tagging(Bucket=bucket, Key=obj).get('TagSet')
+ except is_boto3_error_code('NoSuchTagSet'):
+ return {}
+ except is_boto3_error_code('NoSuchTagSetError'): # pylint: disable=duplicate-except
+ return {}
+
+ return boto3_tag_list_to_ansible_dict(current_tags)
+
+
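`get_current_object_tags_dict` converts S3's `TagSet` (a list of Key/Value pairs) into the plain dict the module works with, via `boto3_tag_list_to_ansible_dict`; `ansible_dict_to_boto3_tag_list` goes the other way. A sketch of what that round trip plausibly looks like (simplified relative to the real module_utils helpers):

```python
def tag_list_to_dict(tag_list):
    """S3 returns tags as [{'Key': ..., 'Value': ...}]; modules work with plain dicts."""
    return {t["Key"]: t["Value"] for t in tag_list or []}

def dict_to_tag_list(tags):
    """Inverse conversion, used when sending tags back in put_object_tagging."""
    return [{"Key": k, "Value": v} for k, v in tags.items()]
```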
+@AWSRetry.jittered_backoff(max_delay=120, catch_extra_error_codes=['NoSuchBucket', 'OperationAborted'])
+def put_object_tagging(s3, bucket, obj, tags):
+ s3.put_object_tagging(Bucket=bucket, Key=obj, Tagging={'TagSet': ansible_dict_to_boto3_tag_list(tags)})
+
+
+@AWSRetry.jittered_backoff(max_delay=120, catch_extra_error_codes=['NoSuchBucket', 'OperationAborted'])
+def delete_object_tagging(s3, bucket, obj):
+ s3.delete_object_tagging(Bucket=bucket, Key=obj)
+
+
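The `@AWSRetry.jittered_backoff` decorators above retry throttled calls with capped, randomized delays. A simplified stand-alone model of that pattern (the real `AWSRetry` additionally matches boto3 error codes such as the `catch_extra_error_codes` listed above; the exception type and timings here are illustrative):

```python
import functools
import random
import time

def jittered_backoff(retries=4, base=0.1, max_delay=2.0, retryable=(RuntimeError,)):
    """Retry a function on selected exceptions, sleeping an exponentially
    growing but jittered and capped delay between attempts."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(retries):
                try:
                    return func(*args, **kwargs)
                except retryable:
                    if attempt == retries - 1:
                        raise  # out of attempts: surface the last error
                    time.sleep(min(max_delay, base * 2 ** attempt) * random.random())
        return wrapper
    return decorator
```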
+def wait_tags_are_applied(module, s3, bucket, obj, expected_tags_dict, version=None):
+ for dummy in range(0, 12):
+ try:
+ current_tags_dict = get_current_object_tags_dict(s3, bucket, obj, version)
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg="Failed to get object tags.")
+ if current_tags_dict != expected_tags_dict:
+ time.sleep(5)
+ else:
+ return current_tags_dict
+
+ module.fail_json(msg="Object tags failed to apply in the expected time.",
+ requested_tags=expected_tags_dict, live_tags=current_tags_dict)
+
+
+def ensure_tags(client, module, bucket, obj):
+ tags = module.params.get("tags")
+ purge_tags = module.params.get("purge_tags")
+ changed = False
+
+ try:
+ current_tags_dict = get_current_object_tags_dict(client, bucket, obj)
+ except is_boto3_error_code(IGNORE_S3_DROP_IN_EXCEPTIONS):
+ module.warn("GetObjectTagging is not implemented by your storage provider. Set the permission parameters to the empty list to avoid this warning.")
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e: # pylint: disable=duplicate-except
+ module.fail_json_aws(e, msg="Failed to get object tags.")
+ else:
+ if tags is not None:
+ if not purge_tags:
+ # Ensure existing tags that aren't updated by desired tags remain
+ current_copy = current_tags_dict.copy()
+ current_copy.update(tags)
+ tags = current_copy
+ if current_tags_dict != tags:
+ if tags:
+ try:
+ put_object_tagging(client, bucket, obj, tags)
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Failed to update object tags.")
+ else:
+ if purge_tags:
+ try:
+ delete_object_tagging(client, bucket, obj)
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Failed to delete object tags.")
+ current_tags_dict = wait_tags_are_applied(module, client, bucket, obj, tags)
+ changed = True
+ return current_tags_dict, changed
+
+
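The merge rules in `ensure_tags` can be isolated as a pure function: when `tags` is unset the object is left alone; with `purge_tags=False` existing tags are kept and only overridden keys change; with the default `purge_tags=True` the requested dict replaces the current one. A sketch (the real function also performs the API calls and waiting):

```python
def desired_tags(current, requested, purge_tags=True):
    """Compute (final_tags, changed) the way ensure_tags merges tag dicts."""
    if requested is None:        # tags not specified in the playbook: no-op
        return current, False
    if not purge_tags:           # keep existing tags the playbook doesn't override
        merged = dict(current)
        merged.update(requested)
        requested = merged
    return requested, requested != current
```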
def main():
argument_spec = dict(
bucket=dict(required=True),
@@ -680,7 +883,7 @@
marker=dict(default=""),
max_keys=dict(default=1000, type='int', no_log=False),
metadata=dict(type='dict'),
- mode=dict(choices=['get', 'put', 'delete', 'create', 'geturl', 'getstr', 'delobj', 'list'], required=True),
+ mode=dict(choices=['get', 'put', 'delete', 'create', 'geturl', 'getstr', 'delobj', 'list', 'copy'], required=True),
object=dict(),
permission=dict(type='list', elements='str', default=['private']),
version=dict(default=None),
@@ -694,7 +897,10 @@
content=dict(),
content_base64=dict(),
ignore_nonexistent_bucket=dict(default=False, type='bool'),
- encryption_kms_key_id=dict()
+ encryption_kms_key_id=dict(),
+ tags=dict(type='dict'),
+ purge_tags=dict(type='bool', default=True),
+ copy_src=dict(type='dict', options=dict(bucket=dict(required=True), object=dict(required=True), version_id=dict())),
)
module = AnsibleAWSModule(
argument_spec=argument_spec,
@@ -702,7 +908,8 @@
required_if=[['mode', 'put', ['object']],
['mode', 'get', ['dest', 'object']],
['mode', 'getstr', ['object']],
- ['mode', 'geturl', ['object']]],
+ ['mode', 'geturl', ['object']],
+ ['mode', 'copy', ['copy_src']]],
mutually_exclusive=[['content', 'content_base64', 'src']],
)
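The new `['mode', 'copy', ['copy_src']]` rule makes `copy_src` mandatory only when `mode=copy`. A minimal model of how such `required_if` tuples evaluate (simplified relative to `AnsibleModule`'s real implementation, which also fails the module with a message):

```python
def check_required_if(rules, params):
    """Each rule is (key, value, required_names); it fires when params[key] == value.
    Returns the names that are required but missing."""
    missing = []
    for key, value, required in rules:
        if params.get(key) == value:
            missing.extend(name for name in required if params.get(name) is None)
    return missing
```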
@@ -731,6 +938,8 @@
object_canned_acl = ["private", "public-read", "public-read-write", "aws-exec-read", "authenticated-read", "bucket-owner-read", "bucket-owner-full-control"]
bucket_canned_acl = ["private", "public-read", "public-read-write", "authenticated-read"]
+ validate_bucket_name(module, bucket)
+
if overwrite not in ['always', 'never', 'different']:
if module.boolean(overwrite):
overwrite = 'always'
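`validate_bucket_name` now rejects bad names before any API traffic. The core S3 naming rules such a check plausibly enforces, as a simplified sketch (the real module_utils helper may check more or fewer rules):

```python
import re

def looks_like_valid_bucket_name(name):
    """Core S3 rules: 3-63 characters; lowercase letters, digits, dots and
    hyphens; must start and end with a letter or digit."""
    if not 3 <= len(name) <= 63:
        return False
    return re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name) is not None
```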
@@ -768,9 +977,6 @@
if dualstack and s3_url is not None and 'amazonaws.com' not in s3_url:
module.fail_json(msg='dualstack only applies to AWS S3')
- if dualstack and not module.botocore_at_least('1.4.45'):
- module.fail_json(msg='dualstack requires botocore >= 1.4.45')
-
# rgw requires an explicit url
if rgw and not s3_url:
module.fail_json(msg='rgw flavour requires s3_url')
@@ -794,7 +1000,7 @@
# First, we check to see if the bucket exists, we get "bucket" returned.
bucketrtn = bucket_check(module, s3, bucket, validate=validate)
- if validate and mode not in ('create', 'put', 'delete') and not bucketrtn:
+ if validate and mode not in ('create', 'put', 'delete', 'copy') and not bucketrtn:
module.fail_json(msg="Source bucket cannot be found.")
if mode == 'get':
@@ -823,9 +1029,9 @@
# these were separated into the variables bucket_acl and object_acl above
if content is None and content_base64 is None and src is None:
- module.fail_json('Either content, content_base64 or src must be specified for PUT operations')
+ module.fail_json(msg='Either content, content_base64 or src must be specified for PUT operations')
if src is not None and not path_check(src):
- module.fail_json('Local object "%s" does not exist for PUT operation' % (src))
+ module.fail_json(msg='Local object "%s" does not exist for PUT operation' % (src))
keyrtn = None
if bucketrtn:
@@ -845,8 +1051,9 @@
if keyrtn and overwrite != 'always':
if overwrite == 'never' or etag_compare(module, s3, bucket, obj, version=version, local_file=src, content=bincontent):
- # Return the download URL for the existing object
- get_download_url(module, s3, bucket, obj, expiry, changed=False)
+ # Return the download URL for the existing object and ensure tags are updated
+ tags, tags_update = ensure_tags(s3, module, bucket, obj)
+ get_download_url(module, s3, bucket, obj, expiry, tags, changed=tags_update)
# only use valid object acls for the upload_s3file function
module.params['permission'] = object_acl
@@ -907,14 +1114,14 @@
else:
# setting valid object acls for the create_dirkey function
module.params['permission'] = object_acl
- create_dirkey(module, s3, bucket, dirobj, encrypt)
+ create_dirkey(module, s3, bucket, dirobj, encrypt, expiry)
else:
# only use valid bucket acls for the create_bucket function
module.params['permission'] = bucket_acl
created = create_bucket(module, s3, bucket, location)
# only use valid object acls for the create_dirkey function
module.params['permission'] = object_acl
- create_dirkey(module, s3, bucket, dirobj, encrypt)
+ create_dirkey(module, s3, bucket, dirobj, encrypt, expiry)
# Support for grabbing the time-expired URL for an object in S3/Walrus.
if mode == 'geturl':
@@ -923,7 +1130,8 @@
keyrtn = key_check(module, s3, bucket, obj, version=version, validate=validate)
if keyrtn:
- get_download_url(module, s3, bucket, obj, expiry)
+ tags = get_current_object_tags_dict(s3, bucket, obj, version=version)
+ get_download_url(module, s3, bucket, obj, expiry, tags)
else:
module.fail_json(msg="Key %s does not exist." % obj)
@@ -941,6 +1149,21 @@
else:
module.fail_json(msg="Key %s does not exist." % obj)
+ if mode == 'copy':
+ # if copying an object in a bucket yet to be created, acls for the bucket and/or the object may be specified
+ # these were separated into the variables bucket_acl and object_acl above
+ d_etag = None
+ if bucketrtn:
+ d_etag = get_etag(s3, bucket, obj)
+ else:
+ # If the bucket doesn't exist we should create it.
+ # only use valid bucket acls for create_bucket function
+ module.params['permission'] = bucket_acl
+ create_bucket(module, s3, bucket, location)
+ # only use valid object acls for the copy operation
+ module.params['permission'] = object_acl
+ copy_object_to_bucket(module, s3, bucket, obj, encrypt, metadata, validate, d_etag)
+
module.exit_json(failed=False)
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/cloudformation_facts.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/cloudformation_facts.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/cloudformation_facts.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/cloudformation_facts.py 2021-11-12 18:13:53.000000000 +0000
@@ -15,9 +15,6 @@
- Gets information about an AWS CloudFormation stack.
- This module was called C(amazon.aws.cloudformation_facts) before Ansible 2.9, returning C(ansible_facts).
Note that the M(amazon.aws.cloudformation_info) module no longer returns C(ansible_facts)!
-requirements:
- - boto3 >= 1.0.0
- - python >= 2.6
author:
- Justin Menga (@jmenga)
- Kevin Coming (@waffie1)
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/cloudformation_info.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/cloudformation_info.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/cloudformation_info.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/cloudformation_info.py 2021-11-12 18:13:53.000000000 +0000
@@ -15,9 +15,6 @@
- Gets information about an AWS CloudFormation stack.
- This module was called C(amazon.aws.cloudformation_facts) before Ansible 2.9, returning C(ansible_facts).
Note that the M(amazon.aws.cloudformation_info) module no longer returns C(ansible_facts)!
-requirements:
- - boto3 >= 1.0.0
- - python >= 2.6
author:
- Justin Menga (@jmenga)
- Kevin Coming (@waffie1)
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/cloudformation.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/cloudformation.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/cloudformation.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/cloudformation.py 2021-11-12 18:13:53.000000000 +0000
@@ -128,7 +128,7 @@
type: str
termination_protection:
description:
- - Enable or disable termination protection on the stack. Only works with botocore >= 1.7.18.
+ - Enable or disable termination protection on the stack.
type: bool
template_body:
description:
@@ -174,8 +174,6 @@
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
-
-requirements: [ boto3, botocore>=1.5.45 ]
'''
EXAMPLES = '''
@@ -344,10 +342,15 @@
from ansible.module_utils._text import to_native
from ..module_utils.core import AnsibleAWSModule
+from ..module_utils.core import is_boto3_error_message
from ..module_utils.ec2 import AWSRetry
from ..module_utils.ec2 import ansible_dict_to_boto3_tag_list
from ..module_utils.ec2 import boto_exception
+# Set a default, mostly for our integration tests. This will be overridden in
+# the main() loop to match the parameters we're passed
+retry_decorator = AWSRetry.jittered_backoff()
+
def get_stack_events(cfn, stack_name, events_limit, token_filter=None):
'''This event data was never correct, it worked as a side effect. So the v2.3 format is different.'''
@@ -361,17 +364,16 @@
PaginationConfig={'MaxItems': events_limit}
)
if token_filter is not None:
- events = list(pg.search(
+ events = list(retry_decorator(pg.search)(
"StackEvents[?ClientRequestToken == '{0}']".format(token_filter)
))
else:
events = list(pg.search("StackEvents[*]"))
- except (botocore.exceptions.ValidationError, botocore.exceptions.ClientError) as err:
+ except is_boto3_error_message('does not exist'):
+ ret['log'].append('Stack does not exist.')
+ return ret
+ except (botocore.exceptions.ValidationError, botocore.exceptions.ClientError) as err: # pylint: disable=duplicate-except
error_msg = boto_exception(err)
- if 'does not exist' in error_msg:
- # missing stack, don't bail.
- ret['log'].append('Stack does not exist.')
- return ret
ret['log'].append('Unknown error: ' + str(error_msg))
return ret
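`is_boto3_error_message('does not exist')` replaces the manual substring check on `boto_exception(err)`. One common way such a helper works, shown here as a simplified model with a stand-in exception class (the real helper inspects botocore's `ClientError`): the except-clause expression is evaluated while the exception is in flight, so it can return either a class that matches or a never-raised dummy class:

```python
import sys

class FakeClientError(Exception):
    """Stand-in for botocore.exceptions.ClientError (message kept on .response)."""
    def __init__(self, message):
        super().__init__(message)
        self.response = {"Error": {"Message": message}}

def is_error_message(substring):
    """Usable as `except is_error_message('...'):` - matches only when the
    in-flight exception's message contains the substring."""
    exc = sys.exc_info()[1]
    if isinstance(exc, FakeClientError) and substring in exc.response["Error"]["Message"]:
        return FakeClientError                          # handler runs
    return type("NeverRaised", (Exception,), {})        # handler is skipped

def describe(name, stacks):
    try:
        if name not in stacks:
            raise FakeClientError("Stack %s does not exist" % name)
        return stacks[name]
    except is_error_message("does not exist"):
        return None  # missing stack: don't bail, mirror get_stack_facts
```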
@@ -400,15 +402,12 @@
if module.params.get('create_timeout') is not None:
stack_params['TimeoutInMinutes'] = module.params['create_timeout']
if module.params.get('termination_protection') is not None:
- if boto_supports_termination_protection(cfn):
- stack_params['EnableTerminationProtection'] = bool(module.params.get('termination_protection'))
- else:
- module.fail_json(msg="termination_protection parameter requires botocore >= 1.7.18")
+ stack_params['EnableTerminationProtection'] = bool(module.params.get('termination_protection'))
try:
- response = cfn.create_stack(**stack_params)
+ response = cfn.create_stack(aws_retry=True, **stack_params)
# Use stack ID to follow stack state in case of on_create_failure = DELETE
- result = stack_operation(cfn, response['StackId'], 'CREATE', events_limit, stack_params.get('ClientRequestToken', None))
+ result = stack_operation(module, cfn, response['StackId'], 'CREATE', events_limit, stack_params.get('ClientRequestToken', None))
except Exception as err:
module.fail_json_aws(err, msg="Failed to create stack {0}".format(stack_params.get('StackName')))
if not result:
@@ -417,7 +416,7 @@
def list_changesets(cfn, stack_name):
- res = cfn.list_change_sets(StackName=stack_name)
+ res = cfn.list_change_sets(aws_retry=True, StackName=stack_name)
return [cs['ChangeSetName'] for cs in res['Summaries']]
@@ -440,18 +439,19 @@
warning = 'WARNING: %d pending changeset(s) exist(s) for this stack!' % len(pending_changesets)
result = dict(changed=False, output='ChangeSet %s already exists.' % changeset_name, warnings=[warning])
else:
- cs = cfn.create_change_set(**stack_params)
+ cs = cfn.create_change_set(aws_retry=True, **stack_params)
# Make sure we don't enter an infinite loop
time_end = time.time() + 600
while time.time() < time_end:
try:
- newcs = cfn.describe_change_set(ChangeSetName=cs['Id'])
+ newcs = cfn.describe_change_set(aws_retry=True, ChangeSetName=cs['Id'])
except botocore.exceptions.BotoCoreError as err:
module.fail_json_aws(err)
if newcs['Status'] == 'CREATE_PENDING' or newcs['Status'] == 'CREATE_IN_PROGRESS':
time.sleep(1)
- elif newcs['Status'] == 'FAILED' and "The submitted information didn't contain changes" in newcs['StatusReason']:
- cfn.delete_change_set(ChangeSetName=cs['Id'])
+ elif newcs['Status'] == 'FAILED' and ("The submitted information didn't contain changes" in newcs['StatusReason']
+ or "No updates are to be performed" in newcs['StatusReason']):
+ cfn.delete_change_set(aws_retry=True, ChangeSetName=cs['Id'])
result = dict(changed=False,
output='The created Change Set did not contain any changes to this stack and was deleted.')
# a failed change set does not trigger any stack events so we just want to
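The changeset wait above uses a `time_end` deadline with short sleeps between `describe_change_set` calls. The same bounded-polling pattern as a reusable sketch (function and parameter names are illustrative, not from the module):

```python
import time

def wait_until(check, timeout=600, interval=1, clock=time.monotonic, sleep=time.sleep):
    """Poll `check` until it returns a truthy result or the deadline passes.
    Returns the truthy result, or None on timeout."""
    deadline = clock() + timeout
    while clock() < deadline:
        result = check()
        if result:
            return result
        sleep(interval)  # don't hog the CPU / spam the AWS API
    return None
```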
@@ -461,17 +461,15 @@
break
# Lets not hog the cpu/spam the AWS API
time.sleep(1)
- result = stack_operation(cfn, stack_params['StackName'], 'CREATE_CHANGESET', events_limit)
+ result = stack_operation(module, cfn, stack_params['StackName'], 'CREATE_CHANGESET', events_limit)
result['change_set_id'] = cs['Id']
result['warnings'] = ['Created changeset named %s for stack %s' % (changeset_name, stack_params['StackName']),
'You can execute it using: aws cloudformation execute-change-set --change-set-name %s' % cs['Id'],
'NOTE that dependencies on this stack might fail due to pending changes!']
+ except is_boto3_error_message('No updates are to be performed.'):
+ result = dict(changed=False, output='Stack is already up-to-date.')
except Exception as err:
- error_msg = boto_exception(err)
- if 'No updates are to be performed.' in error_msg:
- result = dict(changed=False, output='Stack is already up-to-date.')
- else:
- module.fail_json_aws(err, msg='Failed to create change set')
+ module.fail_json_aws(err, msg='Failed to create change set')
if not result:
module.fail_json(msg="empty result")
@@ -489,14 +487,12 @@
# AWS will tell us if the stack template and parameters are the same and
# don't need to be updated.
try:
- cfn.update_stack(**stack_params)
- result = stack_operation(cfn, stack_params['StackName'], 'UPDATE', events_limit, stack_params.get('ClientRequestToken', None))
+ cfn.update_stack(aws_retry=True, **stack_params)
+ result = stack_operation(module, cfn, stack_params['StackName'], 'UPDATE', events_limit, stack_params.get('ClientRequestToken', None))
+ except is_boto3_error_message('No updates are to be performed.'):
+ result = dict(changed=False, output='Stack is already up-to-date.')
except Exception as err:
- error_msg = boto_exception(err)
- if 'No updates are to be performed.' in error_msg:
- result = dict(changed=False, output='Stack is already up-to-date.')
- else:
- module.fail_json_aws(err, msg="Failed to update stack {0}".format(stack_params.get('StackName')))
+ module.fail_json_aws(err, msg="Failed to update stack {0}".format(stack_params.get('StackName')))
if not result:
module.fail_json(msg="empty result")
return result
@@ -504,30 +500,24 @@
def update_termination_protection(module, cfn, stack_name, desired_termination_protection_state):
'''updates termination protection of a stack'''
- if not boto_supports_termination_protection(cfn):
- module.fail_json(msg="termination_protection parameter requires botocore >= 1.7.18")
- stack = get_stack_facts(cfn, stack_name)
+ stack = get_stack_facts(module, cfn, stack_name)
if stack:
if stack['EnableTerminationProtection'] is not desired_termination_protection_state:
try:
cfn.update_termination_protection(
+ aws_retry=True,
EnableTerminationProtection=desired_termination_protection_state,
StackName=stack_name)
except botocore.exceptions.ClientError as e:
module.fail_json_aws(e)
-def boto_supports_termination_protection(cfn):
- '''termination protection was added in botocore 1.7.18'''
- return hasattr(cfn, "update_termination_protection")
-
-
-def stack_operation(cfn, stack_name, operation, events_limit, op_token=None):
+def stack_operation(module, cfn, stack_name, operation, events_limit, op_token=None):
'''gets the status of a stack while it is created/updated/deleted'''
existed = []
while True:
try:
- stack = get_stack_facts(cfn, stack_name)
+ stack = get_stack_facts(module, cfn, stack_name, raise_errors=True)
existed.append('yes')
except Exception:
# If the stack previously existed, and now can't be found then it's
@@ -591,9 +581,9 @@
stack_params.pop('ClientRequestToken', None)
try:
- change_set = cfn.create_change_set(**stack_params)
+ change_set = cfn.create_change_set(aws_retry=True, **stack_params)
for i in range(60): # total time 5 min
- description = cfn.describe_change_set(ChangeSetName=change_set['Id'])
+ description = cfn.describe_change_set(aws_retry=True, ChangeSetName=change_set['Id'])
if description['Status'] in ('CREATE_COMPLETE', 'FAILED'):
break
time.sleep(5)
@@ -601,30 +591,28 @@
# if the changeset doesn't finish in 5 mins, this `else` will trigger and fail
module.fail_json(msg="Failed to create change set %s" % stack_params['ChangeSetName'])
- cfn.delete_change_set(ChangeSetName=change_set['Id'])
+ cfn.delete_change_set(aws_retry=True, ChangeSetName=change_set['Id'])
reason = description.get('StatusReason')
- if description['Status'] == 'FAILED' and "didn't contain changes" in description['StatusReason']:
- return {'changed': False, 'msg': reason, 'meta': description['StatusReason']}
+ if description['Status'] == 'FAILED' and ("didn't contain changes" in reason or "No updates are to be performed" in reason):
+ return {'changed': False, 'msg': reason, 'meta': reason}
return {'changed': True, 'msg': reason, 'meta': description['Changes']}
except (botocore.exceptions.ValidationError, botocore.exceptions.ClientError) as err:
module.fail_json_aws(err)
-def get_stack_facts(cfn, stack_name):
+def get_stack_facts(module, cfn, stack_name, raise_errors=False):
try:
- stack_response = cfn.describe_stacks(StackName=stack_name)
+ stack_response = cfn.describe_stacks(aws_retry=True, StackName=stack_name)
stack_info = stack_response['Stacks'][0]
- except (botocore.exceptions.ValidationError, botocore.exceptions.ClientError) as err:
- error_msg = boto_exception(err)
- if 'does not exist' in error_msg:
- # missing stack, don't bail.
- return None
-
- # other error, bail.
- raise err
+ except is_boto3_error_message('does not exist'):
+ return None
+ except (botocore.exceptions.ValidationError, botocore.exceptions.ClientError) as err: # pylint: disable=duplicate-except
+ if raise_errors:
+ raise err
+ module.fail_json_aws(err, msg="Failed to describe stack")
if stack_response and stack_response.get('Stacks', None):
stacks = stack_response['Stacks']
@@ -735,27 +723,16 @@
result = {}
- cfn = module.client('cloudformation')
-
# Wrap the cloudformation client methods that this module uses with
# automatic backoff / retry for throttling error codes
- backoff_wrapper = AWSRetry.jittered_backoff(
+ retry_decorator = AWSRetry.jittered_backoff(
retries=module.params.get('backoff_retries'),
delay=module.params.get('backoff_delay'),
max_delay=module.params.get('backoff_max_delay')
)
- cfn.describe_stack_events = backoff_wrapper(cfn.describe_stack_events)
- cfn.create_stack = backoff_wrapper(cfn.create_stack)
- cfn.list_change_sets = backoff_wrapper(cfn.list_change_sets)
- cfn.create_change_set = backoff_wrapper(cfn.create_change_set)
- cfn.update_stack = backoff_wrapper(cfn.update_stack)
- cfn.describe_stacks = backoff_wrapper(cfn.describe_stacks)
- cfn.list_stack_resources = backoff_wrapper(cfn.list_stack_resources)
- cfn.delete_stack = backoff_wrapper(cfn.delete_stack)
- if boto_supports_termination_protection(cfn):
- cfn.update_termination_protection = backoff_wrapper(cfn.update_termination_protection)
+ cfn = module.client('cloudformation', retry_decorator=retry_decorator)
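The deleted lines rebound each CloudFormation method through `backoff_wrapper` by hand; passing `retry_decorator` to `module.client()` does the equivalent in one place. The old per-method rebinding can be modelled generically (the `Dummy`/`tracer` names in the test are illustrative only):

```python
def wrap_methods(client, decorator, method_names):
    """Rebind selected methods of a client object through a decorator -
    the pattern the removed backoff_wrapper block implemented by hand."""
    for name in method_names:
        setattr(client, name, decorator(getattr(client, name)))
    return client
```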
- stack_info = get_stack_facts(cfn, stack_params['StackName'])
+ stack_info = get_stack_facts(module, cfn, stack_params['StackName'])
if module.check_mode:
if state == 'absent' and stack_info:
@@ -780,7 +757,7 @@
# format the stack output
- stack = get_stack_facts(cfn, stack_params['StackName'])
+ stack = get_stack_facts(module, cfn, stack_params['StackName'])
if stack is not None:
if result.get('stack_outputs') is None:
# always define stack_outputs, but it may be empty
@@ -788,7 +765,7 @@
for output in stack.get('Outputs', []):
result['stack_outputs'][output['OutputKey']] = output['OutputValue']
stack_resources = []
- reslist = cfn.list_stack_resources(StackName=stack_params['StackName'])
+ reslist = cfn.list_stack_resources(aws_retry=True, StackName=stack_params['StackName'])
for res in reslist.get('StackResourceSummaries', []):
stack_resources.append({
"logical_resource_id": res['LogicalResourceId'],
@@ -806,15 +783,15 @@
# so must describe the stack first
try:
- stack = get_stack_facts(cfn, stack_params['StackName'])
+ stack = get_stack_facts(module, cfn, stack_params['StackName'])
if not stack:
result = {'changed': False, 'output': 'Stack not found.'}
else:
if stack_params.get('RoleARN') is None:
- cfn.delete_stack(StackName=stack_params['StackName'])
+ cfn.delete_stack(aws_retry=True, StackName=stack_params['StackName'])
else:
- cfn.delete_stack(StackName=stack_params['StackName'], RoleARN=stack_params['RoleARN'])
- result = stack_operation(cfn, stack_params['StackName'], 'DELETE', module.params.get('events_limit'),
+ cfn.delete_stack(aws_retry=True, StackName=stack_params['StackName'], RoleARN=stack_params['RoleARN'])
+ result = stack_operation(module, cfn, stack_params['StackName'], 'DELETE', module.params.get('events_limit'),
stack_params.get('ClientRequestToken', None))
except Exception as err:
module.fail_json_aws(err)
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_ami_facts.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_ami_facts.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_ami_facts.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_ami_facts.py 2021-11-12 18:13:53.000000000 +0000
@@ -16,7 +16,6 @@
- This module was called C(amazon.aws.ec2_ami_facts) before Ansible 2.9. The usage did not change.
author:
- Prasad Katti (@prasadkatti)
-requirements: [ boto3 ]
options:
image_ids:
description: One or more image IDs.
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_ami_info.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_ami_info.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_ami_info.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_ami_info.py 2021-11-12 18:13:53.000000000 +0000
@@ -16,7 +16,6 @@
- This module was called C(amazon.aws.ec2_ami_facts) before Ansible 2.9. The usage did not change.
author:
- Prasad Katti (@prasadkatti)
-requirements: [ boto3 ]
options:
image_ids:
description: One or more image IDs.
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_ami.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_ami.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_ami.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_ami.py 2021-11-12 18:13:53.000000000 +0000
@@ -372,9 +372,10 @@
from ..module_utils.core import AnsibleAWSModule
from ..module_utils.core import is_boto3_error_code
from ..module_utils.ec2 import AWSRetry
-from ..module_utils.ec2 import ansible_dict_to_boto3_tag_list
-from ..module_utils.ec2 import boto3_tag_list_to_ansible_dict
-from ..module_utils.ec2 import compare_aws_tags
+from ..module_utils.ec2 import ensure_ec2_tags
+from ..module_utils.ec2 import add_ec2_tags
+from ..module_utils.tagging import boto3_tag_list_to_ansible_dict
+from ..module_utils.tagging import boto3_tag_specifications
from ..module_utils.waiters import get_waiter
@@ -450,6 +451,13 @@
ramdisk_id = module.params.get('ramdisk_id')
sriov_net_support = module.params.get('sriov_net_support')
+ if module.check_mode:
+ image = connection.describe_images(Filters=[{'Name': 'name', 'Values': [str(name)]}])
+ if not image['Images']:
+ module.exit_json(changed=True, msg='Would have created an AMI if not in check mode.')
+ else:
+ module.exit_json(changed=False, msg='Error registering image: AMI name is already in use by another AMI')
+
try:
params = {
'Name': name,
@@ -457,7 +465,6 @@
}
block_device_mapping = None
-
# Remove empty values injected by using options
if device_mapping:
block_device_mapping = []
@@ -474,12 +481,22 @@
device = rename_item_if_exists(device, 'volume_size', 'VolumeSize', 'Ebs', attribute_type=int)
device = rename_item_if_exists(device, 'iops', 'Iops', 'Ebs')
device = rename_item_if_exists(device, 'encrypted', 'Encrypted', 'Ebs')
+
+ # The NoDevice parameter in Boto3 is a string. Empty string omits the device from block device mapping
+ # https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2.html#EC2.Client.create_image
+ if 'NoDevice' in device:
+ if device['NoDevice'] is True:
+ device['NoDevice'] = ""
+ else:
+ del device['NoDevice']
block_device_mapping.append(device)
if block_device_mapping:
params['BlockDeviceMappings'] = block_device_mapping
if instance_id:
params['InstanceId'] = instance_id
params['NoReboot'] = no_reboot
+ if tags and module.botocore_at_least('1.19.30'):
+ params['TagSpecifications'] = boto3_tag_specifications(tags, types=['image', 'snapshot'])
image_id = connection.create_image(aws_retry=True, **params).get('ImageId')
else:
if architecture:
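The `NoDevice` handling added in the hunk above works around boto3 expecting a string rather than a boolean: an empty string suppresses the device from the block device mapping, and anything else must be omitted. A hypothetical standalone helper mirroring that inline logic:

```python
def normalize_no_device(device):
    """Normalize an ansible-style NoDevice flag for boto3's create_image.

    Boto3 treats NoDevice as a string: "" omits the device from the block
    device mapping. So a truthy flag becomes "" and a falsy flag is dropped.
    (Illustrative helper; the diff performs this inline on each mapping.)
    """
    device = dict(device)  # avoid mutating the caller's mapping
    if 'NoDevice' in device:
        if device['NoDevice'] is True:
            device['NoDevice'] = ""
        else:
            del device['NoDevice']
    return device
```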
@@ -510,11 +527,15 @@
waiter = get_waiter(connection, 'image_available')
waiter.wait(ImageIds=[image_id], WaiterConfig=dict(Delay=delay, MaxAttempts=max_attempts))
- if tags:
- try:
- connection.create_tags(aws_retry=True, Resources=[image_id], Tags=ansible_dict_to_boto3_tag_list(tags))
- except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
- module.fail_json_aws(e, msg="Error tagging image")
+ if tags and 'TagSpecifications' not in params:
+ image_info = get_image_by_id(module, connection, image_id)
+ add_ec2_tags(connection, module, image_id, tags)
+ if image_info and image_info.get('BlockDeviceMappings'):
+ for mapping in image_info.get('BlockDeviceMappings'):
+ # We can only tag Ebs volumes
+ if 'Ebs' not in mapping:
+ continue
+ add_ec2_tags(connection, module, mapping.get('Ebs').get('SnapshotId'), tags)
if launch_permissions:
try:
@@ -552,6 +573,8 @@
# When trying to re-deregister an already deregistered image it doesn't raise an exception, it just returns an object without image attributes.
if 'ImageId' in image:
+ if module.check_mode:
+ module.exit_json(changed=True, msg='Would have deregistered AMI if not in check mode.')
try:
connection.deregister_image(aws_retry=True, ImageId=image_id)
except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
@@ -610,41 +633,30 @@
if to_add or to_remove:
try:
- connection.modify_image_attribute(aws_retry=True,
- ImageId=image_id, Attribute='launchPermission',
- LaunchPermission=dict(Add=to_add, Remove=to_remove))
+ if not module.check_mode:
+ connection.modify_image_attribute(aws_retry=True,
+ ImageId=image_id, Attribute='launchPermission',
+ LaunchPermission=dict(Add=to_add, Remove=to_remove))
changed = True
except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(e, msg="Error updating launch permissions of image %s" % image_id)
desired_tags = module.params.get('tags')
if desired_tags is not None:
- current_tags = boto3_tag_list_to_ansible_dict(image.get('Tags'))
- tags_to_add, tags_to_remove = compare_aws_tags(current_tags, desired_tags, purge_tags=module.params.get('purge_tags'))
-
- if tags_to_remove:
- try:
- connection.delete_tags(aws_retry=True, Resources=[image_id], Tags=[dict(Key=tagkey) for tagkey in tags_to_remove])
- changed = True
- except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
- module.fail_json_aws(e, msg="Error updating tags")
-
- if tags_to_add:
- try:
- connection.create_tags(aws_retry=True, Resources=[image_id], Tags=ansible_dict_to_boto3_tag_list(tags_to_add))
- changed = True
- except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
- module.fail_json_aws(e, msg="Error updating tags")
+ changed |= ensure_ec2_tags(connection, module, image_id, tags=desired_tags, purge_tags=module.params.get('purge_tags'))
description = module.params.get('description')
if description and description != image['Description']:
try:
- connection.modify_image_attribute(aws_retry=True, Attribute='Description ', ImageId=image_id, Description=dict(Value=description))
+ if not module.check_mode:
+ connection.modify_image_attribute(aws_retry=True, Attribute='Description ', ImageId=image_id, Description=dict(Value=description))
changed = True
except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(e, msg="Error setting description for image %s" % image_id)
if changed:
+ if module.check_mode:
+ module.exit_json(changed=True, msg='Would have updated AMI if not in check mode.')
module.exit_json(msg="AMI updated.", changed=True,
**get_ami_info(get_image_by_id(module, connection, image_id)))
else:
@@ -737,7 +749,8 @@
argument_spec=argument_spec,
required_if=[
['state', 'absent', ['image_id']],
- ]
+ ],
+ supports_check_mode=True,
)
# Using a required_one_of=[['name', 'image_id']] overrides the message that should be provided by
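The ec2_ami diff above replaces post-creation `create_tags` calls with a `TagSpecifications` parameter passed to `create_image`, built by `boto3_tag_specifications`. A minimal sketch of the shape that helper produces (assumed from the EC2 API; the collection's actual helper lives in `module_utils.tagging` and may differ in signature):

```python
def tag_specifications(tags, types):
    """Build an EC2 TagSpecifications list from an Ansible-style tag dict.

    Expanding one tag dict across several resource types (e.g. 'image' and
    'snapshot') lets create_image tag the AMI and its snapshots atomically
    at creation time, instead of issuing separate create_tags calls.
    """
    tag_list = [{'Key': k, 'Value': v} for k, v in sorted(tags.items())]
    return [{'ResourceType': t, 'Tags': tag_list} for t in types]
```

Tagging at creation also avoids the window where a resource exists untagged, which matters under tag-based IAM policies.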
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_elb_lb.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_elb_lb.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_elb_lb.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_elb_lb.py 1970-01-01 00:00:00.000000000 +0000
@@ -1,1333 +0,0 @@
-#!/usr/bin/python
-# Copyright: Ansible Project
-# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-
-from __future__ import absolute_import, division, print_function
-__metaclass__ = type
-
-
-DOCUMENTATION = '''
----
-module: ec2_elb_lb
-version_added: 1.0.0
-description:
- - Returns information about the load balancer.
- - Will be marked changed when called only if state is changed.
-short_description: Creates, updates or destroys an Amazon ELB.
-author:
- - "Jim Dalton (@jsdalton)"
-options:
- state:
- description:
- - Create or destroy the ELB.
- type: str
- choices: [ absent, present ]
- required: true
- name:
- description:
- - The name of the ELB.
- type: str
- required: true
- listeners:
- description:
- - List of ports/protocols for this ELB to listen on (see examples).
- type: list
- elements: dict
- purge_listeners:
- description:
- - Purge existing listeners on ELB that are not found in listeners.
- type: bool
- default: yes
- instance_ids:
- description:
- - List of instance ids to attach to this ELB.
- type: list
- elements: str
- purge_instance_ids:
- description:
- - Purge existing instance ids on ELB that are not found in instance_ids.
- type: bool
- default: no
- zones:
- description:
- - List of availability zones to enable on this ELB.
- type: list
- elements: str
- purge_zones:
- description:
- - Purge existing availability zones on ELB that are not found in zones.
- type: bool
- default: no
- security_group_ids:
- description:
- - A list of security groups to apply to the ELB.
- type: list
- elements: str
- security_group_names:
- description:
- - A list of security group names to apply to the ELB.
- type: list
- elements: str
- health_check:
- description:
- - An associative array of health check configuration settings (see examples).
- type: dict
- access_logs:
- description:
- - An associative array of access logs configuration settings (see examples).
- type: dict
- subnets:
- description:
- - A list of VPC subnets to use when creating ELB. Zones should be empty if using this.
- type: list
- elements: str
- purge_subnets:
- description:
- - Purge existing subnet on ELB that are not found in subnets.
- type: bool
- default: no
- scheme:
- description:
- - The scheme to use when creating the ELB. For a private VPC-visible ELB use C(internal).
- - If you choose to update your scheme with a different value the ELB will be destroyed and
- recreated. To update scheme you must use the option I(wait).
- type: str
- choices: ["internal", "internet-facing"]
- default: 'internet-facing'
- connection_draining_timeout:
- description:
- - Wait a specified timeout allowing connections to drain before terminating an instance.
- type: int
- idle_timeout:
- description:
- - ELB connections from clients and to servers are timed out after this amount of time.
- type: int
- cross_az_load_balancing:
- description:
- - Distribute load across all configured Availability Zones.
- - Defaults to C(false).
- type: bool
- stickiness:
- description:
- - An associative array of stickiness policy settings. Policy will be applied to all listeners (see examples).
- type: dict
- wait:
- description:
- - When specified, Ansible will check the status of the load balancer to ensure it has been successfully
- removed from AWS.
- type: bool
- default: no
- wait_timeout:
- description:
- - Used in conjunction with wait. Number of seconds to wait for the ELB to be terminated.
- - A maximum of 600 seconds (10 minutes) is allowed.
- type: int
- default: 60
- tags:
- description:
- - An associative array of tags. To delete all tags, supply an empty dict (C({})).
- type: dict
-
-extends_documentation_fragment:
-- amazon.aws.aws
-- amazon.aws.ec2
-
-'''
-
-EXAMPLES = """
-# Note: None of these examples set aws_access_key, aws_secret_key, or region.
-# It is assumed that their matching environment variables are set.
-
-# Basic provisioning example (non-VPC)
-
-- amazon.aws.ec2_elb_lb:
- name: "test-please-delete"
- state: present
- zones:
- - us-east-1a
- - us-east-1d
- listeners:
- - protocol: http # options are http, https, ssl, tcp
- load_balancer_port: 80
- instance_port: 80
- proxy_protocol: True
- - protocol: https
- load_balancer_port: 443
- instance_protocol: http # optional, defaults to value of protocol setting
- instance_port: 80
- # ssl certificate required for https or ssl
- ssl_certificate_id: "arn:aws:iam::123456789012:server-certificate/company/servercerts/ProdServerCert"
-
-# Internal ELB example
-
-- amazon.aws.ec2_elb_lb:
- name: "test-vpc"
- scheme: internal
- state: present
- instance_ids:
- - i-abcd1234
- purge_instance_ids: true
- subnets:
- - subnet-abcd1234
- - subnet-1a2b3c4d
- listeners:
- - protocol: http # options are http, https, ssl, tcp
- load_balancer_port: 80
- instance_port: 80
-
-# Configure a health check and the access logs
-- amazon.aws.ec2_elb_lb:
- name: "test-please-delete"
- state: present
- zones:
- - us-east-1d
- listeners:
- - protocol: http
- load_balancer_port: 80
- instance_port: 80
- health_check:
- ping_protocol: http # options are http, https, ssl, tcp
- ping_port: 80
- ping_path: "/index.html" # not required for tcp or ssl
- response_timeout: 5 # seconds
- interval: 30 # seconds
- unhealthy_threshold: 2
- healthy_threshold: 10
- access_logs:
- interval: 5 # minutes (defaults to 60)
- s3_location: "my-bucket" # This value is required if access_logs is set
- s3_prefix: "logs"
-
-# Ensure ELB is gone
-- amazon.aws.ec2_elb_lb:
- name: "test-please-delete"
- state: absent
-
-# Ensure ELB is gone and wait for check (for default timeout)
-- amazon.aws.ec2_elb_lb:
- name: "test-please-delete"
- state: absent
- wait: yes
-
-# Ensure ELB is gone and wait for check with timeout value
-- amazon.aws.ec2_elb_lb:
- name: "test-please-delete"
- state: absent
- wait: yes
- wait_timeout: 600
-
-# Normally, this module will purge any listeners that exist on the ELB
-# but aren't specified in the listeners parameter. If purge_listeners is
-# false it leaves them alone
-- amazon.aws.ec2_elb_lb:
- name: "test-please-delete"
- state: present
- zones:
- - us-east-1a
- - us-east-1d
- listeners:
- - protocol: http
- load_balancer_port: 80
- instance_port: 80
- purge_listeners: no
-
-# Normally, this module will leave availability zones that are enabled
-# on the ELB alone. If purge_zones is true, then any extraneous zones
-# will be removed
-- amazon.aws.ec2_elb_lb:
- name: "test-please-delete"
- state: present
- zones:
- - us-east-1a
- - us-east-1d
- listeners:
- - protocol: http
- load_balancer_port: 80
- instance_port: 80
- purge_zones: yes
-
-# Creates a ELB and assigns a list of subnets to it.
-- amazon.aws.ec2_elb_lb:
- state: present
- name: 'New ELB'
- security_group_ids: 'sg-123456, sg-67890'
- region: us-west-2
- subnets: 'subnet-123456,subnet-67890'
- purge_subnets: yes
- listeners:
- - protocol: http
- load_balancer_port: 80
- instance_port: 80
-
-# Create an ELB with connection draining, increased idle timeout and cross availability
-# zone load balancing
-- amazon.aws.ec2_elb_lb:
- name: "New ELB"
- state: present
- connection_draining_timeout: 60
- idle_timeout: 300
- cross_az_load_balancing: "yes"
- region: us-east-1
- zones:
- - us-east-1a
- - us-east-1d
- listeners:
- - protocol: http
- load_balancer_port: 80
- instance_port: 80
-
-# Create an ELB with load balancer stickiness enabled
-- amazon.aws.ec2_elb_lb:
- name: "New ELB"
- state: present
- region: us-east-1
- zones:
- - us-east-1a
- - us-east-1d
- listeners:
- - protocol: http
- load_balancer_port: 80
- instance_port: 80
- stickiness:
- type: loadbalancer
- enabled: yes
- expiration: 300
-
-# Create an ELB with application stickiness enabled
-- amazon.aws.ec2_elb_lb:
- name: "New ELB"
- state: present
- region: us-east-1
- zones:
- - us-east-1a
- - us-east-1d
- listeners:
- - protocol: http
- load_balancer_port: 80
- instance_port: 80
- stickiness:
- type: application
- enabled: yes
- cookie: SESSIONID
-
-# Create an ELB and add tags
-- amazon.aws.ec2_elb_lb:
- name: "New ELB"
- state: present
- region: us-east-1
- zones:
- - us-east-1a
- - us-east-1d
- listeners:
- - protocol: http
- load_balancer_port: 80
- instance_port: 80
- tags:
- Name: "New ELB"
- stack: "production"
- client: "Bob"
-
-# Delete all tags from an ELB
-- amazon.aws.ec2_elb_lb:
- name: "New ELB"
- state: present
- region: us-east-1
- zones:
- - us-east-1a
- - us-east-1d
- listeners:
- - protocol: http
- load_balancer_port: 80
- instance_port: 80
- tags: {}
-"""
-
-import random
-import time
-
-try:
- import boto
- import boto.ec2.elb
- import boto.ec2.elb.attributes
- import boto.vpc
- from boto.ec2.elb.healthcheck import HealthCheck
- from boto.ec2.tag import Tag
-except ImportError:
- pass # Taken care of by ec2.HAS_BOTO
-
-from ansible.module_utils.six import string_types
-from ansible.module_utils._text import to_native
-
-from ..module_utils.core import AnsibleAWSModule
-from ..module_utils.ec2 import AnsibleAWSError
-from ..module_utils.ec2 import HAS_BOTO
-from ..module_utils.ec2 import connect_to_aws
-from ..module_utils.ec2 import get_aws_connection_info
-
-
-def _throttleable_operation(max_retries):
- def _operation_wrapper(op):
- def _do_op(*args, **kwargs):
- retry = 0
- while True:
- try:
- return op(*args, **kwargs)
- except boto.exception.BotoServerError as e:
- if retry < max_retries and e.code in \
- ("Throttling", "RequestLimitExceeded"):
- retry = retry + 1
- time.sleep(min(random.random() * (2 ** retry), 300))
- continue
- else:
- raise
- return _do_op
- return _operation_wrapper
-
-
-def _get_vpc_connection(module, region, aws_connect_params):
- try:
- return connect_to_aws(boto.vpc, region, **aws_connect_params)
- except (boto.exception.NoAuthHandlerFound, AnsibleAWSError) as e:
- module.fail_json_aws(e, 'Failed to connect to AWS')
-
-
-_THROTTLING_RETRIES = 5
-
-
-class ElbManager(object):
- """Handles ELB creation and destruction"""
-
- def __init__(self, module, name, listeners=None, purge_listeners=None,
- zones=None, purge_zones=None, security_group_ids=None,
- health_check=None, subnets=None, purge_subnets=None,
- scheme="internet-facing", connection_draining_timeout=None,
- idle_timeout=None,
- cross_az_load_balancing=None, access_logs=None,
- stickiness=None, wait=None, wait_timeout=None, tags=None,
- region=None,
- instance_ids=None, purge_instance_ids=None, **aws_connect_params):
-
- self.module = module
- self.name = name
- self.listeners = listeners
- self.purge_listeners = purge_listeners
- self.instance_ids = instance_ids
- self.purge_instance_ids = purge_instance_ids
- self.zones = zones
- self.purge_zones = purge_zones
- self.security_group_ids = security_group_ids
- self.health_check = health_check
- self.subnets = subnets
- self.purge_subnets = purge_subnets
- self.scheme = scheme
- self.connection_draining_timeout = connection_draining_timeout
- self.idle_timeout = idle_timeout
- self.cross_az_load_balancing = cross_az_load_balancing
- self.access_logs = access_logs
- self.stickiness = stickiness
- self.wait = wait
- self.wait_timeout = wait_timeout
- self.tags = tags
-
- self.aws_connect_params = aws_connect_params
- self.region = region
-
- self.changed = False
- self.status = 'gone'
- self.elb_conn = self._get_elb_connection()
-
- try:
- self.elb = self._get_elb()
- except boto.exception.BotoServerError as e:
- module.fail_json_aws(e, msg='Unable to get all load balancers')
-
- self.ec2_conn = self._get_ec2_connection()
-
- @_throttleable_operation(_THROTTLING_RETRIES)
- def ensure_ok(self):
- """Create the ELB"""
- if not self.elb:
- # Zones and listeners will be added at creation
- self._create_elb()
- else:
- if self._get_scheme():
- # the only way to change the scheme is by recreating the resource
- self.ensure_gone()
- self._create_elb()
- else:
- self._set_zones()
- self._set_security_groups()
- self._set_elb_listeners()
- self._set_subnets()
- self._set_health_check()
- # boto has introduced support for some ELB attributes in
- # different versions, so we check first before trying to
- # set them to avoid errors
- if self._check_attribute_support('connection_draining'):
- self._set_connection_draining_timeout()
- if self._check_attribute_support('connecting_settings'):
- self._set_idle_timeout()
- if self._check_attribute_support('cross_zone_load_balancing'):
- self._set_cross_az_load_balancing()
- if self._check_attribute_support('access_log'):
- self._set_access_log()
- # add sticky options
- self.select_stickiness_policy()
-
- # ensure backend server policies are correct
- self._set_backend_policies()
- # set/remove instance ids
- self._set_instance_ids()
-
- self._set_tags()
-
- def ensure_gone(self):
- """Destroy the ELB"""
- if self.elb:
- self._delete_elb()
- if self.wait:
- elb_removed = self._wait_for_elb_removed()
- # Unfortunately even though the ELB itself is removed quickly
- # the interfaces take longer so reliant security groups cannot
- # be deleted until the interface has registered as removed.
- elb_interface_removed = self._wait_for_elb_interface_removed()
- if not (elb_removed and elb_interface_removed):
- self.module.fail_json(msg='Timed out waiting for removal of load balancer.')
-
- def get_info(self):
- try:
- check_elb = self.elb_conn.get_all_load_balancers(self.name)[0]
- except Exception:
- check_elb = None
-
- if not check_elb:
- info = {
- 'name': self.name,
- 'status': self.status,
- 'region': self.region
- }
- else:
- try:
- lb_cookie_policy = check_elb.policies.lb_cookie_stickiness_policies[0].__dict__['policy_name']
- except Exception:
- lb_cookie_policy = None
- try:
- app_cookie_policy = check_elb.policies.app_cookie_stickiness_policies[0].__dict__['policy_name']
- except Exception:
- app_cookie_policy = None
-
- info = {
- 'name': check_elb.name,
- 'dns_name': check_elb.dns_name,
- 'zones': check_elb.availability_zones,
- 'security_group_ids': check_elb.security_groups,
- 'status': self.status,
- 'subnets': self.subnets,
- 'scheme': check_elb.scheme,
- 'hosted_zone_name': check_elb.canonical_hosted_zone_name,
- 'hosted_zone_id': check_elb.canonical_hosted_zone_name_id,
- 'lb_cookie_policy': lb_cookie_policy,
- 'app_cookie_policy': app_cookie_policy,
- 'proxy_policy': self._get_proxy_protocol_policy(),
- 'backends': self._get_backend_policies(),
- 'instances': [instance.id for instance in check_elb.instances],
- 'out_of_service_count': 0,
- 'in_service_count': 0,
- 'unknown_instance_state_count': 0,
- 'region': self.region
- }
-
- # status of instances behind the ELB
- if info['instances']:
- info['instance_health'] = [dict(
- instance_id=instance_state.instance_id,
- reason_code=instance_state.reason_code,
- state=instance_state.state
- ) for instance_state in self.elb_conn.describe_instance_health(self.name)]
- else:
- info['instance_health'] = []
-
- # instance state counts: InService or OutOfService
- if info['instance_health']:
- for instance_state in info['instance_health']:
- if instance_state['state'] == "InService":
- info['in_service_count'] += 1
- elif instance_state['state'] == "OutOfService":
- info['out_of_service_count'] += 1
- else:
- info['unknown_instance_state_count'] += 1
-
- if check_elb.health_check:
- info['health_check'] = {
- 'target': check_elb.health_check.target,
- 'interval': check_elb.health_check.interval,
- 'timeout': check_elb.health_check.timeout,
- 'healthy_threshold': check_elb.health_check.healthy_threshold,
- 'unhealthy_threshold': check_elb.health_check.unhealthy_threshold,
- }
-
- if check_elb.listeners:
- info['listeners'] = [self._api_listener_as_tuple(l)
- for l in check_elb.listeners]
- elif self.status == 'created':
- # When creating a new ELB, listeners don't show in the
- # immediately returned result, so just include the
- # ones that were added
- info['listeners'] = [self._listener_as_tuple(l)
- for l in self.listeners]
- else:
- info['listeners'] = []
-
- if self._check_attribute_support('connection_draining'):
- info['connection_draining_timeout'] = int(self.elb_conn.get_lb_attribute(self.name, 'ConnectionDraining').timeout)
-
- if self._check_attribute_support('connecting_settings'):
- info['idle_timeout'] = self.elb_conn.get_lb_attribute(self.name, 'ConnectingSettings').idle_timeout
-
- if self._check_attribute_support('cross_zone_load_balancing'):
- is_cross_az_lb_enabled = self.elb_conn.get_lb_attribute(self.name, 'CrossZoneLoadBalancing')
- if is_cross_az_lb_enabled:
- info['cross_az_load_balancing'] = 'yes'
- else:
- info['cross_az_load_balancing'] = 'no'
-
- # return stickiness info?
-
- info['tags'] = self.tags
-
- return info
-
- @_throttleable_operation(_THROTTLING_RETRIES)
- def _wait_for_elb_removed(self):
- polling_increment_secs = 15
- max_retries = (self.wait_timeout // polling_increment_secs)
- status_achieved = False
-
- for x in range(0, max_retries):
- try:
- self.elb_conn.get_all_lb_attributes(self.name)
- except (boto.exception.BotoServerError, Exception) as e:
- if "LoadBalancerNotFound" in e.code:
- status_achieved = True
- break
- else:
- time.sleep(polling_increment_secs)
-
- return status_achieved
-
- @_throttleable_operation(_THROTTLING_RETRIES)
- def _wait_for_elb_interface_removed(self):
- polling_increment_secs = 15
- max_retries = (self.wait_timeout // polling_increment_secs)
- status_achieved = False
-
- elb_interfaces = self.ec2_conn.get_all_network_interfaces(
- filters={'attachment.instance-owner-id': 'amazon-elb',
- 'description': 'ELB {0}'.format(self.name)})
-
- for x in range(0, max_retries):
- for interface in elb_interfaces:
- try:
- result = self.ec2_conn.get_all_network_interfaces(interface.id)
- if result == []:
- status_achieved = True
- break
- else:
- time.sleep(polling_increment_secs)
- except (boto.exception.BotoServerError, Exception) as e:
- if 'InvalidNetworkInterfaceID' in e.code:
- status_achieved = True
- break
- else:
- self.module.fail_json_aws(e, 'Failure while waiting for interface to be removed')
-
- return status_achieved
-
- @_throttleable_operation(_THROTTLING_RETRIES)
- def _get_elb(self):
- elbs = self.elb_conn.get_all_load_balancers()
- for elb in elbs:
- if self.name == elb.name:
- self.status = 'ok'
- return elb
-
- def _get_elb_connection(self):
- try:
- return connect_to_aws(boto.ec2.elb, self.region,
- **self.aws_connect_params)
- except (boto.exception.NoAuthHandlerFound, AnsibleAWSError) as e:
- self.module.fail_json_aws(e, 'Failure while connecting to AWS')
-
- def _get_ec2_connection(self):
- try:
- return connect_to_aws(boto.ec2, self.region,
- **self.aws_connect_params)
- except (boto.exception.NoAuthHandlerFound, Exception) as e:
- self.module.fail_json_aws(e, 'Failure while connecting to AWS')
-
- @_throttleable_operation(_THROTTLING_RETRIES)
- def _delete_elb(self):
- # True if succeeds, exception raised if not
- result = self.elb_conn.delete_load_balancer(name=self.name)
- if result:
- self.changed = True
- self.status = 'deleted'
-
- def _create_elb(self):
- listeners = [self._listener_as_tuple(l) for l in self.listeners]
- self.elb = self.elb_conn.create_load_balancer(name=self.name,
- zones=self.zones,
- security_groups=self.security_group_ids,
- complex_listeners=listeners,
- subnets=self.subnets,
- scheme=self.scheme)
- if self.elb:
- # HACK: Work around a boto bug in which the listeners attribute is
- # always set to the listeners argument to create_load_balancer, and
- # not the complex_listeners
- # We're not doing a self.elb = self._get_elb here because there
- # might be eventual consistency issues and it doesn't necessarily
- # make sense to wait until the ELB gets returned from the EC2 API.
- # This is necessary in the event we hit the throttling errors and
- # need to retry ensure_ok
- # See https://github.com/boto/boto/issues/3526
- self.elb.listeners = self.listeners
- self.changed = True
- self.status = 'created'
-
- def _create_elb_listeners(self, listeners):
- """Takes a list of listener tuples and creates them"""
- # True if succeeds, exception raised if not
- self.changed = self.elb_conn.create_load_balancer_listeners(self.name,
- complex_listeners=listeners)
-
- def _delete_elb_listeners(self, listeners):
- """Takes a list of listener tuples and deletes them from the elb"""
- ports = [l[0] for l in listeners]
-
- # True if succeeds, exception raised if not
- self.changed = self.elb_conn.delete_load_balancer_listeners(self.name,
- ports)
-
- def _set_elb_listeners(self):
- """
- Creates listeners specified by self.listeners; overwrites existing
- listeners on these ports; removes extraneous listeners
- """
- listeners_to_add = []
- listeners_to_remove = []
- listeners_to_keep = []
-
- # Check for any listeners we need to create or overwrite
- for listener in self.listeners:
- listener_as_tuple = self._listener_as_tuple(listener)
-
- # First we loop through existing listeners to see if one is
- # already specified for this port
- existing_listener_found = None
- for existing_listener in self.elb.listeners:
- # Since ELB allows only one listener on each incoming port, a
- # single match on the incoming port is all we're looking for
- if existing_listener[0] == int(listener['load_balancer_port']):
- existing_listener_found = self._api_listener_as_tuple(existing_listener)
- break
-
- if existing_listener_found:
- # Does it match exactly?
- if listener_as_tuple != existing_listener_found:
- # The ports are the same but something else is different,
- # so we'll remove the existing one and add the new one
- listeners_to_remove.append(existing_listener_found)
- listeners_to_add.append(listener_as_tuple)
- else:
- # We already have this listener, so we're going to keep it
- listeners_to_keep.append(existing_listener_found)
- else:
- # We didn't find an existing listener, so just add the new one
- listeners_to_add.append(listener_as_tuple)
-
- # Check for any extraneous listeners we need to remove, if desired
- if self.purge_listeners:
- for existing_listener in self.elb.listeners:
- existing_listener_tuple = self._api_listener_as_tuple(existing_listener)
- if existing_listener_tuple in listeners_to_remove:
- # Already queued for removal
- continue
- if existing_listener_tuple in listeners_to_keep:
- # Keep this one around
- continue
- # Since we're not already removing it and we don't need to keep
- # it, let's get rid of it
- listeners_to_remove.append(existing_listener_tuple)
-
- if listeners_to_remove:
- self._delete_elb_listeners(listeners_to_remove)
-
- if listeners_to_add:
- self._create_elb_listeners(listeners_to_add)
-
- def _api_listener_as_tuple(self, listener):
- """Adds ssl_certificate_id to ELB API tuple if present"""
- base_tuple = listener.get_complex_tuple()
- if listener.ssl_certificate_id and len(base_tuple) < 5:
- return base_tuple + (listener.ssl_certificate_id,)
- return base_tuple
-
- def _listener_as_tuple(self, listener):
- """Formats listener as a 4- or 5-tuples, in the order specified by the
- ELB API"""
- # N.B. string manipulations on protocols below (str(), upper()) is to
- # ensure format matches output from ELB API
- listener_list = [
- int(listener['load_balancer_port']),
- int(listener['instance_port']),
- str(listener['protocol'].upper()),
- ]
-
- # Instance protocol is not required by ELB API; it defaults to match
- # load balancer protocol. We'll mimic that behavior here
- if 'instance_protocol' in listener:
- listener_list.append(str(listener['instance_protocol'].upper()))
- else:
- listener_list.append(str(listener['protocol'].upper()))
-
- if 'ssl_certificate_id' in listener:
- listener_list.append(str(listener['ssl_certificate_id']))
-
- return tuple(listener_list)
-
- def _enable_zones(self, zones):
- try:
- self.elb.enable_zones(zones)
- except boto.exception.BotoServerError as e:
- self.module.fail_json_aws(e, msg='unable to enable zones')
-
- self.changed = True
-
- def _disable_zones(self, zones):
- try:
- self.elb.disable_zones(zones)
- except boto.exception.BotoServerError as e:
- self.module.fail_json_aws(e, msg='unable to disable zones')
- self.changed = True
-
- def _attach_subnets(self, subnets):
- self.elb_conn.attach_lb_to_subnets(self.name, subnets)
- self.changed = True
-
- def _detach_subnets(self, subnets):
- self.elb_conn.detach_lb_from_subnets(self.name, subnets)
- self.changed = True
-
- def _set_subnets(self):
- """Determine which subnets need to be attached or detached on the ELB"""
- if self.subnets:
- if self.purge_subnets:
- subnets_to_detach = list(set(self.elb.subnets) - set(self.subnets))
- subnets_to_attach = list(set(self.subnets) - set(self.elb.subnets))
- else:
- subnets_to_detach = None
- subnets_to_attach = list(set(self.subnets) - set(self.elb.subnets))
-
- if subnets_to_attach:
- self._attach_subnets(subnets_to_attach)
- if subnets_to_detach:
- self._detach_subnets(subnets_to_detach)
-
- def _get_scheme(self):
- """Determine if the current scheme is different than the scheme of the ELB"""
- if self.scheme:
- if self.elb.scheme != self.scheme:
- if not self.wait:
- self.module.fail_json(msg="Unable to modify scheme without using the wait option")
- return True
- return False
-
- def _set_zones(self):
- """Determine which zones need to be enabled or disabled on the ELB"""
- if self.zones:
- if self.purge_zones:
- zones_to_disable = list(set(self.elb.availability_zones) -
- set(self.zones))
- zones_to_enable = list(set(self.zones) -
- set(self.elb.availability_zones))
- else:
- zones_to_disable = None
- zones_to_enable = list(set(self.zones) -
- set(self.elb.availability_zones))
- if zones_to_enable:
- self._enable_zones(zones_to_enable)
- # N.B. This must come second, in case it would have removed all zones
- if zones_to_disable:
- self._disable_zones(zones_to_disable)
-
- def _set_security_groups(self):
- if self.security_group_ids is not None and set(self.elb.security_groups) != set(self.security_group_ids):
- self.elb_conn.apply_security_groups_to_lb(self.name, self.security_group_ids)
- self.changed = True
-
- def _set_health_check(self):
- """Set health check values on ELB as needed"""
- if self.health_check:
- # This just makes it easier to compare each of the attributes
- # and look for changes. Keys are attributes of the current
- # health_check; values are desired values of new health_check
- health_check_config = {
- "target": self._get_health_check_target(),
- "timeout": self.health_check['response_timeout'],
- "interval": self.health_check['interval'],
- "unhealthy_threshold": self.health_check['unhealthy_threshold'],
- "healthy_threshold": self.health_check['healthy_threshold'],
- }
-
- update_health_check = False
-
- # The health_check attribute is *not* set on newly created
- # ELBs! So we have to create our own.
- if not self.elb.health_check:
- self.elb.health_check = HealthCheck()
-
- for attr, desired_value in health_check_config.items():
- if getattr(self.elb.health_check, attr) != desired_value:
- setattr(self.elb.health_check, attr, desired_value)
- update_health_check = True
-
- if update_health_check:
- self.elb.configure_health_check(self.elb.health_check)
- self.changed = True
-
- def _check_attribute_support(self, attr):
- return hasattr(boto.ec2.elb.attributes.LbAttributes(), attr)
-
- def _set_cross_az_load_balancing(self):
- attributes = self.elb.get_attributes()
- if self.cross_az_load_balancing:
- if not attributes.cross_zone_load_balancing.enabled:
- self.changed = True
- attributes.cross_zone_load_balancing.enabled = True
- else:
- if attributes.cross_zone_load_balancing.enabled:
- self.changed = True
- attributes.cross_zone_load_balancing.enabled = False
- self.elb_conn.modify_lb_attribute(self.name, 'CrossZoneLoadBalancing',
- attributes.cross_zone_load_balancing.enabled)
-
- def _set_access_log(self):
- attributes = self.elb.get_attributes()
- if self.access_logs:
- if 's3_location' not in self.access_logs:
- self.module.fail_json(msg='s3_location information required')
-
- access_logs_config = {
- "enabled": True,
- "s3_bucket_name": self.access_logs['s3_location'],
- "s3_bucket_prefix": self.access_logs.get('s3_prefix', ''),
- "emit_interval": self.access_logs.get('interval', 60),
- }
-
- update_access_logs_config = False
- for attr, desired_value in access_logs_config.items():
- if getattr(attributes.access_log, attr) != desired_value:
- setattr(attributes.access_log, attr, desired_value)
- update_access_logs_config = True
- if update_access_logs_config:
- self.elb_conn.modify_lb_attribute(self.name, 'AccessLog', attributes.access_log)
- self.changed = True
- elif attributes.access_log.enabled:
- attributes.access_log.enabled = False
- self.changed = True
- self.elb_conn.modify_lb_attribute(self.name, 'AccessLog', attributes.access_log)
-
- def _set_connection_draining_timeout(self):
- attributes = self.elb.get_attributes()
- if self.connection_draining_timeout is not None:
- if not attributes.connection_draining.enabled or \
- attributes.connection_draining.timeout != self.connection_draining_timeout:
- self.changed = True
- attributes.connection_draining.enabled = True
- attributes.connection_draining.timeout = self.connection_draining_timeout
- self.elb_conn.modify_lb_attribute(self.name, 'ConnectionDraining', attributes.connection_draining)
- else:
- if attributes.connection_draining.enabled:
- self.changed = True
- attributes.connection_draining.enabled = False
- self.elb_conn.modify_lb_attribute(self.name, 'ConnectionDraining', attributes.connection_draining)
-
- def _set_idle_timeout(self):
- attributes = self.elb.get_attributes()
- if self.idle_timeout is not None:
- if attributes.connecting_settings.idle_timeout != self.idle_timeout:
- self.changed = True
- attributes.connecting_settings.idle_timeout = self.idle_timeout
- self.elb_conn.modify_lb_attribute(self.name, 'ConnectingSettings', attributes.connecting_settings)
-
- def _policy_name(self, policy_type):
- return 'ec2-elb-lb-{0}'.format(to_native(policy_type, errors='surrogate_or_strict'))
-
- def _create_policy(self, policy_param, policy_meth, policy):
- getattr(self.elb_conn, policy_meth)(policy_param, self.elb.name, policy)
-
- def _delete_policy(self, elb_name, policy):
- self.elb_conn.delete_lb_policy(elb_name, policy)
-
- def _update_policy(self, policy_param, policy_meth, policy_attr, policy):
- self._delete_policy(self.elb.name, policy)
- self._create_policy(policy_param, policy_meth, policy)
-
- def _set_listener_policy(self, listeners_dict, policy=None):
- policy = [] if policy is None else policy
-
- for listener_port in listeners_dict:
- if listeners_dict[listener_port].startswith('HTTP'):
- self.elb_conn.set_lb_policies_of_listener(self.elb.name, listener_port, policy)
-
- def _set_stickiness_policy(self, elb_info, listeners_dict, policy, **policy_attrs):
- for p in getattr(elb_info.policies, policy_attrs['attr']):
- if str(p.__dict__['policy_name']) == str(policy[0]):
- if str(p.__dict__[policy_attrs['dict_key']]) != str(policy_attrs['param_value'] or 0):
- self._set_listener_policy(listeners_dict)
- self._update_policy(policy_attrs['param_value'], policy_attrs['method'], policy_attrs['attr'], policy[0])
- self.changed = True
- break
- else:
- self._create_policy(policy_attrs['param_value'], policy_attrs['method'], policy[0])
- self.changed = True
-
- self._set_listener_policy(listeners_dict, policy)
-
- def select_stickiness_policy(self):
- if self.stickiness:
-
- if 'cookie' in self.stickiness and 'expiration' in self.stickiness:
- self.module.fail_json(msg='\'cookie\' and \'expiration\' can not be set at the same time')
-
- elb_info = self.elb_conn.get_all_load_balancers(self.elb.name)[0]
- d = {}
- for listener in elb_info.listeners:
- d[listener[0]] = listener[2]
- listeners_dict = d
-
- if self.stickiness['type'] == 'loadbalancer':
- policy = []
- policy_type = 'LBCookieStickinessPolicyType'
-
- if self.module.boolean(self.stickiness['enabled']):
-
- if 'expiration' not in self.stickiness:
- self.module.fail_json(msg='expiration must be set when type is loadbalancer')
-
- try:
- expiration = self.stickiness['expiration'] if int(self.stickiness['expiration']) else None
- except ValueError:
- self.module.fail_json(msg='expiration must be set to an integer')
-
- policy_attrs = {
- 'type': policy_type,
- 'attr': 'lb_cookie_stickiness_policies',
- 'method': 'create_lb_cookie_stickiness_policy',
- 'dict_key': 'cookie_expiration_period',
- 'param_value': expiration
- }
- policy.append(self._policy_name(policy_attrs['type']))
-
- self._set_stickiness_policy(elb_info, listeners_dict, policy, **policy_attrs)
- elif not self.module.boolean(self.stickiness['enabled']):
- if len(elb_info.policies.lb_cookie_stickiness_policies):
- if elb_info.policies.lb_cookie_stickiness_policies[0].policy_name == self._policy_name(policy_type):
- self.changed = True
- else:
- self.changed = False
- self._set_listener_policy(listeners_dict)
- self._delete_policy(self.elb.name, self._policy_name(policy_type))
-
- elif self.stickiness['type'] == 'application':
- policy = []
- policy_type = 'AppCookieStickinessPolicyType'
- if self.module.boolean(self.stickiness['enabled']):
-
- if 'cookie' not in self.stickiness:
- self.module.fail_json(msg='cookie must be set when type is application')
-
- policy_attrs = {
- 'type': policy_type,
- 'attr': 'app_cookie_stickiness_policies',
- 'method': 'create_app_cookie_stickiness_policy',
- 'dict_key': 'cookie_name',
- 'param_value': self.stickiness['cookie']
- }
- policy.append(self._policy_name(policy_attrs['type']))
- self._set_stickiness_policy(elb_info, listeners_dict, policy, **policy_attrs)
- elif not self.module.boolean(self.stickiness['enabled']):
- if len(elb_info.policies.app_cookie_stickiness_policies):
- if elb_info.policies.app_cookie_stickiness_policies[0].policy_name == self._policy_name(policy_type):
- self.changed = True
- self._set_listener_policy(listeners_dict)
- self._delete_policy(self.elb.name, self._policy_name(policy_type))
-
- else:
- self._set_listener_policy(listeners_dict)
-
- def _get_backend_policies(self):
- """Get a list of backend policies"""
- policies = []
- if self.elb.backends is not None:
- for backend in self.elb.backends:
- if backend.policies is not None:
- for policy in backend.policies:
- policies.append(str(backend.instance_port) + ':' + policy.policy_name)
-
- return policies
-
- def _set_backend_policies(self):
- """Sets policies for all backends"""
- ensure_proxy_protocol = False
- replace = []
- backend_policies = self._get_backend_policies()
-
- # Find out what needs to be changed
- for listener in self.listeners:
- want = False
-
- if 'proxy_protocol' in listener and listener['proxy_protocol']:
- ensure_proxy_protocol = True
- want = True
-
- if str(listener['instance_port']) + ':ProxyProtocol-policy' in backend_policies:
- if not want:
- replace.append({'port': listener['instance_port'], 'policies': []})
- elif want:
- replace.append({'port': listener['instance_port'], 'policies': ['ProxyProtocol-policy']})
-
- # enable or disable proxy protocol
- if ensure_proxy_protocol:
- self._set_proxy_protocol_policy()
-
- # Make the backend policies so
- for item in replace:
- self.elb_conn.set_lb_policies_of_backend_server(self.elb.name, item['port'], item['policies'])
- self.changed = True
-
- def _get_proxy_protocol_policy(self):
- """Find out if the elb has a proxy protocol enabled"""
- if self.elb.policies is not None and self.elb.policies.other_policies is not None:
- for policy in self.elb.policies.other_policies:
- if policy.policy_name == 'ProxyProtocol-policy':
- return policy.policy_name
-
- return None
-
- def _set_proxy_protocol_policy(self):
- """Install a proxy protocol policy if needed"""
- proxy_policy = self._get_proxy_protocol_policy()
-
- if proxy_policy is None:
- self.elb_conn.create_lb_policy(
- self.elb.name, 'ProxyProtocol-policy', 'ProxyProtocolPolicyType', {'ProxyProtocol': True}
- )
- self.changed = True
-
- # TODO: remove proxy protocol policy if not needed anymore? There is no side effect to leaving it there
-
- def _diff_list(self, a, b):
- """Find the entries in list a that are not in list b"""
- b = set(b)
- return [aa for aa in a if aa not in b]
-
- def _get_instance_ids(self):
- """Get the current list of instance ids installed in the elb"""
- instances = []
- if self.elb.instances is not None:
- for instance in self.elb.instances:
- instances.append(instance.id)
-
- return instances
-
- def _set_instance_ids(self):
- """Register or deregister instances from an lb instance"""
- assert_instances = self.instance_ids or []
-
- has_instances = self._get_instance_ids()
-
- add_instances = self._diff_list(assert_instances, has_instances)
- if add_instances:
- self.elb_conn.register_instances(self.elb.name, add_instances)
- self.changed = True
-
- if self.purge_instance_ids:
- remove_instances = self._diff_list(has_instances, assert_instances)
- if remove_instances:
- self.elb_conn.deregister_instances(self.elb.name, remove_instances)
- self.changed = True
-
- def _set_tags(self):
- """Add/Delete tags"""
- if self.tags is None:
- return
-
- params = {'LoadBalancerNames.member.1': self.name}
-
- tagdict = dict()
-
- # get the current list of tags from the ELB, if ELB exists
- if self.elb:
- current_tags = self.elb_conn.get_list('DescribeTags', params,
- [('member', Tag)])
- tagdict = dict((tag.Key, tag.Value) for tag in current_tags
- if hasattr(tag, 'Key'))
-
- # Add missing tags
- dictact = dict(set(self.tags.items()) - set(tagdict.items()))
- if dictact:
- for i, key in enumerate(dictact):
- params['Tags.member.%d.Key' % (i + 1)] = key
- params['Tags.member.%d.Value' % (i + 1)] = dictact[key]
-
- self.elb_conn.make_request('AddTags', params)
- self.changed = True
-
- # Remove extra tags
- dictact = dict(set(tagdict.items()) - set(self.tags.items()))
- if dictact:
- for i, key in enumerate(dictact):
- params['Tags.member.%d.Key' % (i + 1)] = key
-
- self.elb_conn.make_request('RemoveTags', params)
- self.changed = True
-
- def _get_health_check_target(self):
- """Compose target string from healthcheck parameters"""
- protocol = self.health_check['ping_protocol'].upper()
- path = ""
-
- if protocol in ['HTTP', 'HTTPS'] and 'ping_path' in self.health_check:
- path = self.health_check['ping_path']
-
- return "%s:%s%s" % (protocol, self.health_check['ping_port'], path)
-
-
-def main():
- argument_spec = dict(
- state={'required': True, 'choices': ['present', 'absent']},
- name={'required': True},
- listeners={'default': None, 'required': False, 'type': 'list', 'elements': 'dict'},
- purge_listeners={'default': True, 'required': False, 'type': 'bool'},
- instance_ids={'default': None, 'required': False, 'type': 'list', 'elements': 'str'},
- purge_instance_ids={'default': False, 'required': False, 'type': 'bool'},
- zones={'default': None, 'required': False, 'type': 'list', 'elements': 'str'},
- purge_zones={'default': False, 'required': False, 'type': 'bool'},
- security_group_ids={'default': None, 'required': False, 'type': 'list', 'elements': 'str'},
- security_group_names={'default': None, 'required': False, 'type': 'list', 'elements': 'str'},
- health_check={'default': None, 'required': False, 'type': 'dict'},
- subnets={'default': None, 'required': False, 'type': 'list', 'elements': 'str'},
- purge_subnets={'default': False, 'required': False, 'type': 'bool'},
- scheme={'default': 'internet-facing', 'required': False, 'choices': ['internal', 'internet-facing']},
- connection_draining_timeout={'default': None, 'required': False, 'type': 'int'},
- idle_timeout={'default': None, 'type': 'int', 'required': False},
- cross_az_load_balancing={'default': None, 'type': 'bool', 'required': False},
- stickiness={'default': None, 'required': False, 'type': 'dict'},
- access_logs={'default': None, 'required': False, 'type': 'dict'},
- wait={'default': False, 'type': 'bool', 'required': False},
- wait_timeout={'default': 60, 'type': 'int', 'required': False},
- tags={'default': None, 'required': False, 'type': 'dict'}
- )
-
- module = AnsibleAWSModule(
- argument_spec=argument_spec,
- check_boto3=False,
- mutually_exclusive=[['security_group_ids', 'security_group_names']]
- )
-
- if not HAS_BOTO:
- module.fail_json(msg='boto required for this module')
-
- region, ec2_url, aws_connect_params = get_aws_connection_info(module)
- if not region:
- module.fail_json(msg="Region must be specified as a parameter, in EC2_REGION or AWS_REGION environment variables or in boto configuration file")
-
- name = module.params['name']
- state = module.params['state']
- listeners = module.params['listeners']
- purge_listeners = module.params['purge_listeners']
- instance_ids = module.params['instance_ids']
- purge_instance_ids = module.params['purge_instance_ids']
- zones = module.params['zones']
- purge_zones = module.params['purge_zones']
- security_group_ids = module.params['security_group_ids']
- security_group_names = module.params['security_group_names']
- health_check = module.params['health_check']
- access_logs = module.params['access_logs']
- subnets = module.params['subnets']
- purge_subnets = module.params['purge_subnets']
- scheme = module.params['scheme']
- connection_draining_timeout = module.params['connection_draining_timeout']
- idle_timeout = module.params['idle_timeout']
- cross_az_load_balancing = module.params['cross_az_load_balancing']
- stickiness = module.params['stickiness']
- wait = module.params['wait']
- wait_timeout = module.params['wait_timeout']
- tags = module.params['tags']
-
- if state == 'present' and not listeners:
- module.fail_json(msg="At least one listener is required for ELB creation")
-
- if state == 'present' and not (zones or subnets):
- module.fail_json(msg="At least one availability zone or subnet is required for ELB creation")
-
- if wait_timeout > 600:
- module.fail_json(msg='wait_timeout maximum is 600 seconds')
-
- if security_group_names:
- security_group_ids = []
- try:
- ec2 = connect_to_aws(boto.ec2, region, **aws_connect_params)
- if subnets: # We have at least one subnet, ergo this is a VPC
- vpc_conn = _get_vpc_connection(module=module, region=region, aws_connect_params=aws_connect_params)
- vpc_id = vpc_conn.get_all_subnets([subnets[0]])[0].vpc_id
- filters = {'vpc_id': vpc_id}
- else:
- filters = None
- grp_details = ec2.get_all_security_groups(filters=filters)
-
- for group_name in security_group_names:
- if isinstance(group_name, string_types):
- group_name = [group_name]
-
- group_id = [str(grp.id) for grp in grp_details if str(grp.name) in group_name]
- security_group_ids.extend(group_id)
- except boto.exception.NoAuthHandlerFound as e:
- module.fail_json_aws(e)
-
- elb_man = ElbManager(module, name, listeners, purge_listeners, zones,
- purge_zones, security_group_ids, health_check,
- subnets, purge_subnets, scheme,
- connection_draining_timeout, idle_timeout,
- cross_az_load_balancing,
- access_logs, stickiness, wait, wait_timeout, tags,
- region=region, instance_ids=instance_ids, purge_instance_ids=purge_instance_ids,
- **aws_connect_params)
-
- # check for unsupported attributes for this version of boto
- if cross_az_load_balancing and not elb_man._check_attribute_support('cross_zone_load_balancing'):
- module.fail_json(msg="You must install boto >= 2.18.0 to use the cross_az_load_balancing attribute")
-
- if connection_draining_timeout and not elb_man._check_attribute_support('connection_draining'):
- module.fail_json(msg="You must install boto >= 2.28.0 to use the connection_draining_timeout attribute")
-
- if idle_timeout and not elb_man._check_attribute_support('connecting_settings'):
- module.fail_json(msg="You must install boto >= 2.33.0 to use the idle_timeout attribute")
-
- if state == 'present':
- elb_man.ensure_ok()
- elif state == 'absent':
- elb_man.ensure_gone()
-
- ansible_facts = {'ec2_elb': 'info'}
- ec2_facts_result = dict(changed=elb_man.changed,
- elb=elb_man.get_info(),
- ansible_facts=ansible_facts)
-
- module.exit_json(**ec2_facts_result)
-
-
-if __name__ == '__main__':
- main()
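The removed boto2-based module above reconciled availability zones and subnets using set differences gated by `purge_zones`/`purge_subnets`. A standalone sketch of that planning logic (hypothetical helper name, not part of the collection):

```python
def plan_changes(current, desired, purge=False):
    """Return (to_add, to_remove) for zones or subnets.

    Mirrors the removed _set_zones/_set_subnets logic: entries in the
    desired set are always attached; entries only in the current set
    are detached only when purging is requested.
    """
    to_add = sorted(set(desired) - set(current))
    to_remove = sorted(set(current) - set(desired)) if purge else []
    return to_add, to_remove
```

Note the ordering constraint in the removed `_set_zones`: enabling happens before disabling, so the load balancer is never momentarily left with zero zones.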
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_eni_facts.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_eni_facts.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_eni_facts.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_eni_facts.py 2021-11-12 18:13:53.000000000 +0000
@@ -15,7 +15,6 @@
- Gather information about ec2 ENI interfaces in AWS.
- This module was called C(ec2_eni_facts) before Ansible 2.9. The usage did not change.
author: "Rob White (@wimnat)"
-requirements: [ boto3 ]
options:
eni_id:
description:
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_eni_info.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_eni_info.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_eni_info.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_eni_info.py 2021-11-12 18:13:53.000000000 +0000
@@ -15,7 +15,6 @@
- Gather information about ec2 ENI interfaces in AWS.
- This module was called C(ec2_eni_facts) before Ansible 2.9. The usage did not change.
author: "Rob White (@wimnat)"
-requirements: [ boto3 ]
options:
eni_id:
description:
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_eni.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_eni.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_eni.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_eni.py 2021-11-12 18:13:53.000000000 +0000
@@ -293,6 +293,8 @@
'''
import time
+from ipaddress import ip_address
+from ipaddress import ip_network
try:
import botocore.exceptions
@@ -302,10 +304,10 @@
from ..module_utils.core import AnsibleAWSModule
from ..module_utils.core import is_boto3_error_code
from ..module_utils.ec2 import AWSRetry
-from ..module_utils.ec2 import ansible_dict_to_boto3_tag_list
from ..module_utils.ec2 import get_ec2_security_group_ids_from_names
-from ..module_utils.ec2 import boto3_tag_list_to_ansible_dict
-from ..module_utils.ec2 import compare_aws_tags
+from ..module_utils.tagging import boto3_tag_list_to_ansible_dict
+from ..module_utils.tagging import boto3_tag_specifications
+from ..module_utils.ec2 import ensure_ec2_tags
from ..module_utils.waiters import get_waiter
@@ -336,17 +338,11 @@
}
if "TagSet" in interface:
- tags = {}
- name = None
- for tag in interface["TagSet"]:
- tags[tag["Key"]] = tag["Value"]
- if tag["Key"] == "Name":
- name = tag["Value"]
+ tags = boto3_tag_list_to_ansible_dict(interface["TagSet"])
+ if "Name" in tags:
+ interface_info["name"] = tags["Name"]
interface_info["tags"] = tags
- if name is not None:
- interface_info["name"] = name
-
if "Attachment" in interface:
interface_info['attachment'] = {
'attachment_id': interface["Attachment"].get("AttachmentId"),
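The hunk above replaces a hand-rolled loop with the collection's `boto3_tag_list_to_ansible_dict` helper. A simplified sketch of what that conversion does (the real helper also handles alternate key casings such as `TagKey`/`TagValue`):

```python
def tag_list_to_dict(tag_list):
    # AWS APIs return tags as [{'Key': ..., 'Value': ...}];
    # Ansible modules want a plain {key: value} dict.
    return {tag['Key']: tag['Value'] for tag in tag_list}
```

With the dict shape, the Name lookup in the hunk reduces to a plain `"Name" in tags` check instead of scanning the list.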
@@ -429,9 +425,12 @@
secondary_private_ip_addresses = module.params.get("secondary_private_ip_addresses")
secondary_private_ip_address_count = module.params.get("secondary_private_ip_address_count")
changed = False
- tags = module.params.get("tags")
+
+ tags = module.params.get("tags") or dict()
name = module.params.get("name")
- purge_tags = module.params.get("purge_tags")
+ # Make sure that the 'name' parameter sets the Name tag
+ if name:
+ tags['Name'] = name
try:
args = {"SubnetId": subnet_id}
@@ -441,6 +440,18 @@
args["Description"] = description
if len(security_groups) > 0:
args["Groups"] = security_groups
+ if tags:
+ args["TagSpecifications"] = boto3_tag_specifications(tags, types='network-interface')
+
+ # check if provided private_ip_address is within the subnet's address range
+ if private_ip_address:
+ cidr_block = connection.describe_subnets(SubnetIds=[str(subnet_id)])['Subnets'][0]['CidrBlock']
+ valid_private_ip = ip_address(private_ip_address) in ip_network(cidr_block)
+ if not valid_private_ip:
+ module.fail_json(changed=False, msg="Error: cannot create ENI - Address does not fall within the subnet's address range.")
+ if module.check_mode:
+ module.exit_json(changed=True, msg="Would have created ENI if not in check mode.")
+
eni_dict = connection.create_network_interface(aws_retry=True, **args)
eni = eni_dict["NetworkInterface"]
# Once we have an ID make sure we're always modifying the same object
@@ -483,8 +494,6 @@
connection.delete_network_interface(aws_retry=True, NetworkInterfaceId=eni_id)
raise
- manage_tags(eni, name, tags, purge_tags, connection)
-
# Refresh the eni data
eni = describe_eni(connection, module, eni_id)
changed = True
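The validation added in the hunk above checks the requested `private_ip_address` against the subnet's CIDR block using the stdlib `ipaddress` module. Isolated from the module plumbing, the check is just:

```python
from ipaddress import ip_address, ip_network

def ip_in_subnet(private_ip, cidr_block):
    # True when the address falls within the subnet's range,
    # e.g. "10.0.0.5" is inside "10.0.0.0/24".
    return ip_address(private_ip) in ip_network(cidr_block)
```

The module fails fast on a mismatch, which surfaces the error before `create_network_interface` is ever called.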
@@ -523,43 +532,47 @@
try:
if description is not None:
if "Description" not in eni or eni["Description"] != description:
- connection.modify_network_interface_attribute(
- aws_retry=True,
- NetworkInterfaceId=eni_id,
- Description={'Value': description}
- )
+ if not module.check_mode:
+ connection.modify_network_interface_attribute(
+ aws_retry=True,
+ NetworkInterfaceId=eni_id,
+ Description={'Value': description}
+ )
changed = True
if len(security_groups) > 0:
groups = get_ec2_security_group_ids_from_names(security_groups, connection, vpc_id=eni["VpcId"], boto3=True)
if sorted(get_sec_group_list(eni["Groups"])) != sorted(groups):
- connection.modify_network_interface_attribute(
- aws_retry=True,
- NetworkInterfaceId=eni_id,
- Groups=groups
- )
+ if not module.check_mode:
+ connection.modify_network_interface_attribute(
+ aws_retry=True,
+ NetworkInterfaceId=eni_id,
+ Groups=groups
+ )
changed = True
if source_dest_check is not None:
if "SourceDestCheck" not in eni or eni["SourceDestCheck"] != source_dest_check:
- connection.modify_network_interface_attribute(
- aws_retry=True,
- NetworkInterfaceId=eni_id,
- SourceDestCheck={'Value': source_dest_check}
- )
+ if not module.check_mode:
+ connection.modify_network_interface_attribute(
+ aws_retry=True,
+ NetworkInterfaceId=eni_id,
+ SourceDestCheck={'Value': source_dest_check}
+ )
changed = True
if delete_on_termination is not None and "Attachment" in eni:
if eni["Attachment"]["DeleteOnTermination"] is not delete_on_termination:
- connection.modify_network_interface_attribute(
- aws_retry=True,
- NetworkInterfaceId=eni_id,
- Attachment={'AttachmentId': eni["Attachment"]["AttachmentId"],
- 'DeleteOnTermination': delete_on_termination}
- )
+ if not module.check_mode:
+ connection.modify_network_interface_attribute(
+ aws_retry=True,
+ NetworkInterfaceId=eni_id,
+ Attachment={'AttachmentId': eni["Attachment"]["AttachmentId"],
+ 'DeleteOnTermination': delete_on_termination}
+ )
+ if delete_on_termination:
+ waiter = "network_interface_delete_on_terminate"
+ else:
+ waiter = "network_interface_no_delete_on_terminate"
+ get_waiter(connection, waiter).wait(NetworkInterfaceIds=[eni_id])
changed = True
- if delete_on_termination:
- waiter = "network_interface_delete_on_terminate"
- else:
- waiter = "network_interface_no_delete_on_terminate"
- get_waiter(connection, waiter).wait(NetworkInterfaceIds=[eni_id])
current_secondary_addresses = []
if "PrivateIpAddresses" in eni:
@@ -568,86 +581,107 @@
if secondary_private_ip_addresses is not None:
secondary_addresses_to_remove = list(set(current_secondary_addresses) - set(secondary_private_ip_addresses))
if secondary_addresses_to_remove and purge_secondary_private_ip_addresses:
- connection.unassign_private_ip_addresses(
- aws_retry=True,
- NetworkInterfaceId=eni_id,
- PrivateIpAddresses=list(set(current_secondary_addresses) - set(secondary_private_ip_addresses)),
- )
- wait_for(absent_ips, connection, secondary_addresses_to_remove, module, eni_id)
+ if not module.check_mode:
+ connection.unassign_private_ip_addresses(
+ aws_retry=True,
+ NetworkInterfaceId=eni_id,
+ PrivateIpAddresses=list(set(current_secondary_addresses) - set(secondary_private_ip_addresses)),
+ )
+ wait_for(absent_ips, connection, secondary_addresses_to_remove, module, eni_id)
changed = True
secondary_addresses_to_add = list(set(secondary_private_ip_addresses) - set(current_secondary_addresses))
if secondary_addresses_to_add:
- connection.assign_private_ip_addresses(
- aws_retry=True,
- NetworkInterfaceId=eni_id,
- PrivateIpAddresses=secondary_addresses_to_add,
- AllowReassignment=allow_reassignment
- )
- wait_for(correct_ips, connection, secondary_addresses_to_add, module, eni_id)
+ if not module.check_mode:
+ connection.assign_private_ip_addresses(
+ aws_retry=True,
+ NetworkInterfaceId=eni_id,
+ PrivateIpAddresses=secondary_addresses_to_add,
+ AllowReassignment=allow_reassignment
+ )
+ wait_for(correct_ips, connection, secondary_addresses_to_add, module, eni_id)
changed = True
if secondary_private_ip_address_count is not None:
current_secondary_address_count = len(current_secondary_addresses)
if secondary_private_ip_address_count > current_secondary_address_count:
- connection.assign_private_ip_addresses(
- aws_retry=True,
- NetworkInterfaceId=eni_id,
- SecondaryPrivateIpAddressCount=(secondary_private_ip_address_count - current_secondary_address_count),
- AllowReassignment=allow_reassignment
- )
- wait_for(correct_ip_count, connection, secondary_private_ip_address_count, module, eni_id)
+ if not module.check_mode:
+ connection.assign_private_ip_addresses(
+ aws_retry=True,
+ NetworkInterfaceId=eni_id,
+ SecondaryPrivateIpAddressCount=(secondary_private_ip_address_count - current_secondary_address_count),
+ AllowReassignment=allow_reassignment
+ )
+ wait_for(correct_ip_count, connection, secondary_private_ip_address_count, module, eni_id)
changed = True
elif secondary_private_ip_address_count < current_secondary_address_count:
# How many of these addresses do we want to remove
- secondary_addresses_to_remove_count = current_secondary_address_count - secondary_private_ip_address_count
- connection.unassign_private_ip_addresses(
- aws_retry=True,
- NetworkInterfaceId=eni_id,
- PrivateIpAddresses=current_secondary_addresses[:secondary_addresses_to_remove_count]
- )
- wait_for(correct_ip_count, connection, secondary_private_ip_address_count, module, eni_id)
+ if not module.check_mode:
+ secondary_addresses_to_remove_count = current_secondary_address_count - secondary_private_ip_address_count
+ connection.unassign_private_ip_addresses(
+ aws_retry=True,
+ NetworkInterfaceId=eni_id,
+ PrivateIpAddresses=current_secondary_addresses[:secondary_addresses_to_remove_count]
+ )
+ wait_for(correct_ip_count, connection, secondary_private_ip_address_count, module, eni_id)
changed = True
if attached is True:
if "Attachment" in eni and eni["Attachment"]["InstanceId"] != instance_id:
- detach_eni(connection, eni, module)
- connection.attach_network_interface(
- aws_retry=True,
- InstanceId=instance_id,
- DeviceIndex=device_index,
- NetworkInterfaceId=eni_id,
- )
- get_waiter(connection, 'network_interface_attached').wait(NetworkInterfaceIds=[eni_id])
+ if not module.check_mode:
+ detach_eni(connection, eni, module)
+ connection.attach_network_interface(
+ aws_retry=True,
+ InstanceId=instance_id,
+ DeviceIndex=device_index,
+ NetworkInterfaceId=eni_id,
+ )
+ get_waiter(connection, 'network_interface_attached').wait(NetworkInterfaceIds=[eni_id])
changed = True
if "Attachment" not in eni:
- connection.attach_network_interface(
- aws_retry=True,
- InstanceId=instance_id,
- DeviceIndex=device_index,
- NetworkInterfaceId=eni_id,
- )
- get_waiter(connection, 'network_interface_attached').wait(NetworkInterfaceIds=[eni_id])
+ if not module.check_mode:
+ connection.attach_network_interface(
+ aws_retry=True,
+ InstanceId=instance_id,
+ DeviceIndex=device_index,
+ NetworkInterfaceId=eni_id,
+ )
+ get_waiter(connection, 'network_interface_attached').wait(NetworkInterfaceIds=[eni_id])
changed = True
elif attached is False:
changed |= detach_eni(connection, eni, module)
get_waiter(connection, 'network_interface_available').wait(NetworkInterfaceIds=[eni_id])
- changed |= manage_tags(eni, name, tags, purge_tags, connection)
+ changed |= manage_tags(connection, module, eni, name, tags, purge_tags)
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json_aws(e, "Failed to modify eni {0}".format(eni_id))
eni = describe_eni(connection, module, eni_id)
+ if module.check_mode and changed:
+ module.exit_json(changed=changed, msg="Would have modified ENI: {0} if not in check mode".format(eni['NetworkInterfaceId']))
module.exit_json(changed=changed, interface=get_eni_info(eni))
+def _wait_for_detach(connection, module, eni_id):
+ try:
+ get_waiter(connection, 'network_interface_available').wait(
+ NetworkInterfaceIds=[eni_id],
+ WaiterConfig={'Delay': 5, 'MaxAttempts': 80},
+ )
+ except botocore.exceptions.WaiterError as e:
+ module.fail_json_aws(e, "Timeout waiting for ENI {0} to detach".format(eni_id))
+
+
def delete_eni(connection, module):
eni = uniquely_find_eni(connection, module)
if not eni:
module.exit_json(changed=False)
+ if module.check_mode:
+ module.exit_json(changed=True, msg="Would have deleted ENI if not in check mode.")
+
eni_id = eni["NetworkInterfaceId"]
force_detach = module.params.get("force_detach")
@@ -657,10 +691,9 @@
connection.detach_network_interface(
aws_retry=True,
AttachmentId=eni["Attachment"]["AttachmentId"],
- Force=True
+ Force=True,
)
- # Wait to allow detachment to finish
- get_waiter(connection, 'network_interface_available').wait(NetworkInterfaceIds=[eni_id])
+ _wait_for_detach(connection, module, eni_id)
connection.delete_network_interface(aws_retry=True, NetworkInterfaceId=eni_id)
changed = True
else:
@@ -676,6 +709,9 @@
def detach_eni(connection, eni, module):
+ if module.check_mode:
+ module.exit_json(changed=True, msg="Would have detached ENI if not in check mode.")
+
attached = module.params.get("attached")
eni_id = eni["NetworkInterfaceId"]
@@ -684,9 +720,9 @@
connection.detach_network_interface(
aws_retry=True,
AttachmentId=eni["Attachment"]["AttachmentId"],
- Force=force_detach
+ Force=force_detach,
)
- get_waiter(connection, 'network_interface_available').wait(NetworkInterfaceIds=[eni_id])
+ _wait_for_detach(connection, module, eni_id)
return True
return False
@@ -769,7 +805,7 @@
# Build list of remote security groups
remote_security_groups = []
for group in groups:
- remote_security_groups.append(group["GroupId"].encode())
+ remote_security_groups.append(group["GroupId"])
return remote_security_groups
@@ -783,42 +819,18 @@
module.fail_json_aws(e, "Failed to get vpc_id for {0}".format(subnet_id))
-def manage_tags(eni, name, new_tags, purge_tags, connection):
- changed = False
-
- if "TagSet" in eni:
- old_tags = boto3_tag_list_to_ansible_dict(eni['TagSet'])
- elif new_tags:
- old_tags = {}
- else:
- # No new tags and nothing in TagSet
- return False
-
+def manage_tags(connection, module, eni, name, tags, purge_tags):
# Do not purge tags unless tags is not None
- if new_tags is None:
+ if tags is None:
purge_tags = False
- new_tags = {}
+ tags = {}
if name:
- new_tags['Name'] = name
+ tags['Name'] = name
- tags_to_set, tags_to_delete = compare_aws_tags(
- old_tags, new_tags,
- purge_tags=purge_tags,
- )
- if tags_to_set:
- connection.create_tags(
- aws_retry=True,
- Resources=[eni['NetworkInterfaceId']],
- Tags=ansible_dict_to_boto3_tag_list(tags_to_set))
- changed |= True
- if tags_to_delete:
- delete_with_current_values = dict((k, old_tags.get(k)) for k in tags_to_delete)
- connection.delete_tags(
- aws_retry=True,
- Resources=[eni['NetworkInterfaceId']],
- Tags=ansible_dict_to_boto3_tag_list(delete_with_current_values))
- changed |= True
+ eni_id = eni['NetworkInterfaceId']
+
+ changed = ensure_ec2_tags(connection, module, eni_id, tags=tags, purge_tags=purge_tags)
return changed
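The rewritten `manage_tags` delegates the diffing to the shared `ensure_ec2_tags` helper instead of calling `create_tags`/`delete_tags` by hand. The diff it computes (what `compare_aws_tags` returns as a tags-to-set dict and a tags-to-unset key list) can be approximated in plain Python; `compare_tags` below is an illustrative stand-in, not the collection's actual helper:

```python
def compare_tags(current, desired, purge=True):
    """Return (to_set, to_delete): tags to (re)write and tag keys to remove.

    `current` and `desired` are plain {key: value} dicts; with purge=False,
    tags present remotely but absent from `desired` are left alone.
    """
    to_set = {k: v for k, v in desired.items() if current.get(k) != v}
    to_delete = [k for k in current if k not in desired] if purge else []
    return to_set, to_delete

old = {"Name": "eni-a", "env": "dev", "team": "infra"}
new = {"Name": "eni-a", "env": "prod"}
print(compare_tags(old, new))               # ({'env': 'prod'}, ['team'])
print(compare_tags(old, new, purge=False))  # ({'env': 'prod'}, [])
```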
@@ -853,7 +865,8 @@
required_if=([
('attached', True, ['instance_id']),
('purge_secondary_private_ip_addresses', True, ['secondary_private_ip_addresses'])
- ])
+ ]),
+ supports_check_mode=True,
)
retry_decorator = AWSRetry.jittered_backoff(
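The new `_wait_for_detach` helper bounds the wait at roughly Delay × MaxAttempts (5 s × 80, about 400 s) and converts a `WaiterError` into a clean `fail_json_aws` failure instead of hanging indefinitely. The same bounded-polling idea, sketched stand-alone without boto3 (the names here are illustrative, not part of the collection):

```python
import time


class WaitTimeout(Exception):
    """Raised when the condition never becomes true within the budget."""


def wait_until(predicate, delay=5, max_attempts=80, sleep=time.sleep):
    """Poll predicate() up to max_attempts times, pausing `delay` seconds
    between attempts; raise WaitTimeout if it never returns True."""
    for _ in range(max_attempts):
        if predicate():
            return
        sleep(delay)
    raise WaitTimeout("condition not met after %d attempts" % max_attempts)


# Example: a "resource" that reports detached on the third poll.
state = {"polls": 0}

def detached():
    state["polls"] += 1
    return state["polls"] >= 3

wait_until(detached, delay=0, max_attempts=80)  # returns after 3 polls
```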
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_group_facts.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_group_facts.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_group_facts.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_group_facts.py 2021-11-12 18:13:53.000000000 +0000
@@ -14,7 +14,6 @@
description:
- Gather information about ec2 security groups in AWS.
- This module was called C(amazon.aws.ec2_group_facts) before Ansible 2.9. The usage did not change.
-requirements: [ boto3 ]
author:
- Henrique Rodrigues (@Sodki)
options:
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_group_info.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_group_info.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_group_info.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_group_info.py 2021-11-12 18:13:53.000000000 +0000
@@ -14,7 +14,6 @@
description:
- Gather information about ec2 security groups in AWS.
- This module was called C(amazon.aws.ec2_group_facts) before Ansible 2.9. The usage did not change.
-requirements: [ boto3 ]
author:
- Henrique Rodrigues (@Sodki)
options:
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_group.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_group.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_group.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_group.py 2021-11-12 18:13:53.000000000 +0000
@@ -12,7 +12,6 @@
module: ec2_group
version_added: 1.0.0
author: "Andrew de Quincey (@adq)"
-requirements: [ boto3 ]
short_description: maintain an ec2 VPC security group.
description:
- Maintains ec2 security groups.
@@ -96,10 +95,16 @@
- The IP protocol name (C(tcp), C(udp), C(icmp), C(icmpv6)) or number (U(https://en.wikipedia.org/wiki/List_of_IP_protocol_numbers))
from_port:
type: int
- description: The start of the range of ports that traffic is coming from. A value of C(-1) indicates all ports.
+ description:
+ - The start of the range of ports that traffic is coming from.
+ - A value can be between C(0) and C(65535).
+ - A value of C(-1) indicates all ports (only supported when I(proto=icmp)).
to_port:
type: int
- description: The end of the range of ports that traffic is coming from. A value of C(-1) indicates all ports.
+ description:
+ - The end of the range of ports that traffic is coming from.
+ - A value can be between C(0) and C(65535).
+ - A value of C(-1) indicates all ports (only supported when I(proto=icmp)).
rule_desc:
type: str
description: A description for the rule.
@@ -157,10 +162,16 @@
- The IP protocol name (C(tcp), C(udp), C(icmp), C(icmpv6)) or number (U(https://en.wikipedia.org/wiki/List_of_IP_protocol_numbers))
from_port:
type: int
- description: The start of the range of ports that traffic is going to. A value of C(-1) indicates all ports.
+ description:
+ - The start of the range of ports that traffic is going to.
+ - A value can be between C(0) and C(65535).
+ - A value of C(-1) indicates all ports (only supported when I(proto=icmp)).
to_port:
type: int
- description: The end of the range of ports that traffic is going to. A value of C(-1) indicates all ports.
+ description:
+ - The end of the range of ports that traffic is going to.
+ - A value can be between C(0) and C(65535).
+ - A value of C(-1) indicates all ports (only supported when I(proto=icmp)).
rule_desc:
type: str
description: A description for the rule.
@@ -270,7 +281,7 @@
# the containing group name may be specified here
group_name: example
- proto: all
- # in the 'proto' attribute, if you specify -1, all, or a protocol number other than tcp, udp, icmp, or 58 (ICMPv6),
+ # in the 'proto' attribute, if you specify -1 (only supported when proto is icmp), all, or a protocol number other than tcp, udp, icmp, or 58 (ICMPv6),
# traffic on all ports is allowed, regardless of any ports you specify
from_port: 10050 # this value is ignored
to_port: 10050 # this value is ignored
@@ -390,12 +401,14 @@
returned: on create/update
'''
+import itertools
import json
import re
-import itertools
+from collections import namedtuple
from copy import deepcopy
+from ipaddress import IPv6Network
+from ipaddress import ip_network
from time import sleep
-from collections import namedtuple
try:
from botocore.exceptions import BotoCoreError, ClientError
@@ -408,8 +421,6 @@
from ansible.module_utils.common.network import to_subnet
from ansible.module_utils.six import string_types
-from ..module_utils.compat._ipaddress import IPv6Network
-from ..module_utils.compat._ipaddress import ip_network
from ..module_utils.core import AnsibleAWSModule
from ..module_utils.core import is_boto3_error_code
from ..module_utils.ec2 import AWSRetry
@@ -1023,13 +1034,6 @@
return None, {}
-def verify_rules_with_descriptions_permitted(client, module, rules, rules_egress):
- if not hasattr(client, "update_security_group_rule_descriptions_egress"):
- all_rules = rules if rules else [] + rules_egress if rules_egress else []
- if any('rule_desc' in rule for rule in all_rules):
- module.fail_json(msg="Using rule descriptions requires botocore version >= 1.7.2.")
-
-
def get_diff_final_resource(client, module, security_group):
def get_account_id(security_group, module):
try:
@@ -1209,7 +1213,6 @@
changed = False
client = module.client('ec2', AWSRetry.jittered_backoff())
- verify_rules_with_descriptions_permitted(client, module, rules, rules_egress)
group, groups = group_exists(client, module, vpc_id, group_id, name)
group_created_new = not bool(group)
@@ -1313,7 +1316,7 @@
if purge_rules:
revoke_ingress = []
for p in present_ingress:
- if not any([rule_cmp(p, b) for b in named_tuple_ingress_list]):
+ if not any(rule_cmp(p, b) for b in named_tuple_ingress_list):
revoke_ingress.append(to_permission(p))
else:
revoke_ingress = []
@@ -1326,7 +1329,7 @@
else:
revoke_egress = []
for p in present_egress:
- if not any([rule_cmp(p, b) for b in named_tuple_egress_list]):
+ if not any(rule_cmp(p, b) for b in named_tuple_egress_list):
revoke_egress.append(to_permission(p))
else:
revoke_egress = []
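The import hunk above drops the vendored `module_utils.compat._ipaddress` shim in favour of the standard-library `ipaddress` module, which is always available now that the collection targets Python 3. For reference, the stdlib calls the module relies on:

```python
from ipaddress import IPv6Network, ip_network

# ip_network() parses either address family; strict=False tolerates host
# bits being set, which normalizes user-supplied CIDRs like "10.0.0.5/24".
net = ip_network("10.0.0.5/24", strict=False)
print(net)                # 10.0.0.0/24
print(net.num_addresses)  # 256

v6 = ip_network("2001:db8::/32")
print(isinstance(v6, IPv6Network))  # True
```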
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_instance_facts.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_instance_facts.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_instance_facts.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_instance_facts.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,590 @@
+#!/usr/bin/python
+# Copyright: Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import absolute_import, division, print_function
+__metaclass__ = type
+
+
+DOCUMENTATION = r'''
+---
+module: ec2_instance_info
+version_added: 1.0.0
+short_description: Gather information about ec2 instances in AWS
+description:
+ - Gather information about ec2 instances in AWS
+ - This module was called C(ec2_instance_facts) before Ansible 2.9. The usage did not change.
+author:
+ - Michael Schuett (@michaeljs1990)
+ - Rob White (@wimnat)
+options:
+ instance_ids:
+ description:
+ - If you specify one or more instance IDs, only instances that have the specified IDs are returned.
+ required: false
+ type: list
+ elements: str
+ filters:
+ description:
+ - A dict of filters to apply. Each dict item consists of a filter key and a filter value. See
+ U(https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstances.html) for possible filters. Filter
+ names and values are case sensitive.
+ required: false
+ default: {}
+ type: dict
+ minimum_uptime:
+ description:
+ - Minimum running uptime in minutes of instances. For example, if I(uptime) is C(60), return all instances that have run for more than 60 minutes.
+ required: false
+ aliases: ['uptime']
+ type: int
+
+
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+
+'''
+
+EXAMPLES = r'''
+# Note: These examples do not set authentication details, see the AWS Guide for details.
+
+- name: Gather information about all instances
+ amazon.aws.ec2_instance_info:
+
+- name: Gather information about all instances in AZ ap-southeast-2a
+ amazon.aws.ec2_instance_info:
+ filters:
+ availability-zone: ap-southeast-2a
+
+- name: Gather information about a particular instance using ID
+ amazon.aws.ec2_instance_info:
+ instance_ids:
+ - i-12345678
+
+- name: Gather information about any instance with a tag key Name and value Example
+ amazon.aws.ec2_instance_info:
+ filters:
+ "tag:Name": Example
+
+- name: Gather information about any instance in states "shutting-down", "stopping", "stopped"
+ amazon.aws.ec2_instance_info:
+ filters:
+ instance-state-name: [ "shutting-down", "stopping", "stopped" ]
+
+- name: Gather information about any instance with Name beginning with RHEL and an uptime of at least 60 minutes
+ amazon.aws.ec2_instance_info:
+ region: "{{ ec2_region }}"
+ uptime: 60
+ filters:
+ "tag:Name": "RHEL-*"
+ instance-state-name: [ "running" ]
+ register: ec2_node_info
+
+'''
+
+RETURN = r'''
+instances:
+ description: A list of EC2 instances.
+ returned: always
+ type: complex
+ contains:
+ ami_launch_index:
+ description: The AMI launch index, which can be used to find this instance in the launch group.
+ returned: always
+ type: int
+ sample: 0
+ architecture:
+ description: The architecture of the image
+ returned: always
+ type: str
+ sample: x86_64
+ block_device_mappings:
+ description: Any block device mapping entries for the instance.
+ returned: always
+ type: complex
+ contains:
+ device_name:
+ description: The device name exposed to the instance (for example, /dev/sdh or xvdh).
+ returned: always
+ type: str
+ sample: /dev/sdh
+ ebs:
+ description: Parameters used to automatically set up EBS volumes when the instance is launched.
+ returned: always
+ type: complex
+ contains:
+ attach_time:
+ description: The time stamp when the attachment initiated.
+ returned: always
+ type: str
+ sample: "2017-03-23T22:51:24+00:00"
+ delete_on_termination:
+ description: Indicates whether the volume is deleted on instance termination.
+ returned: always
+ type: bool
+ sample: true
+ status:
+ description: The attachment state.
+ returned: always
+ type: str
+ sample: attached
+ volume_id:
+ description: The ID of the EBS volume
+ returned: always
+ type: str
+ sample: vol-12345678
+ cpu_options:
+ description: The CPU options set for the instance.
+ returned: always
+ type: complex
+ contains:
+ core_count:
+ description: The number of CPU cores for the instance.
+ returned: always
+ type: int
+ sample: 1
+ threads_per_core:
+ description: The number of threads per CPU core. On supported instances, a value of 1 means Intel Hyper-Threading Technology is disabled.
+ returned: always
+ type: int
+ sample: 1
+ client_token:
+ description: The idempotency token you provided when you launched the instance, if applicable.
+ returned: always
+ type: str
+ sample: mytoken
+ ebs_optimized:
+ description: Indicates whether the instance is optimized for EBS I/O.
+ returned: always
+ type: bool
+ sample: false
+ hypervisor:
+ description: The hypervisor type of the instance.
+ returned: always
+ type: str
+ sample: xen
+ iam_instance_profile:
+ description: The IAM instance profile associated with the instance, if applicable.
+ returned: always
+ type: complex
+ contains:
+ arn:
+ description: The Amazon Resource Name (ARN) of the instance profile.
+ returned: always
+ type: str
+ sample: "arn:aws:iam::000012345678:instance-profile/myprofile"
+ id:
+ description: The ID of the instance profile
+ returned: always
+ type: str
+ sample: JFJ397FDG400FG9FD1N
+ image_id:
+ description: The ID of the AMI used to launch the instance.
+ returned: always
+ type: str
+ sample: ami-0011223344
+ instance_id:
+ description: The ID of the instance.
+ returned: always
+ type: str
+ sample: i-012345678
+ instance_type:
+ description: The instance type size of the running instance.
+ returned: always
+ type: str
+ sample: t2.micro
+ key_name:
+ description: The name of the key pair, if this instance was launched with an associated key pair.
+ returned: always
+ type: str
+ sample: my-key
+ launch_time:
+ description: The time the instance was launched.
+ returned: always
+ type: str
+ sample: "2017-03-23T22:51:24+00:00"
+ monitoring:
+ description: The monitoring for the instance.
+ returned: always
+ type: complex
+ contains:
+ state:
+ description: Indicates whether detailed monitoring is enabled. Otherwise, basic monitoring is enabled.
+ returned: always
+ type: str
+ sample: disabled
+ network_interfaces:
+ description: One or more network interfaces for the instance.
+ returned: always
+ type: complex
+ contains:
+ association:
+ description: The association information for an Elastic IPv4 associated with the network interface.
+ returned: always
+ type: complex
+ contains:
+ ip_owner_id:
+ description: The ID of the owner of the Elastic IP address.
+ returned: always
+ type: str
+ sample: amazon
+ public_dns_name:
+ description: The public DNS name.
+ returned: always
+ type: str
+ sample: ""
+ public_ip:
+ description: The public IP address or Elastic IP address bound to the network interface.
+ returned: always
+ type: str
+ sample: 1.2.3.4
+ attachment:
+ description: The network interface attachment.
+ returned: always
+ type: complex
+ contains:
+ attach_time:
+ description: The time stamp when the attachment initiated.
+ returned: always
+ type: str
+ sample: "2017-03-23T22:51:24+00:00"
+ attachment_id:
+ description: The ID of the network interface attachment.
+ returned: always
+ type: str
+ sample: eni-attach-3aff3f
+ delete_on_termination:
+ description: Indicates whether the network interface is deleted when the instance is terminated.
+ returned: always
+ type: bool
+ sample: true
+ device_index:
+ description: The index of the device on the instance for the network interface attachment.
+ returned: always
+ type: int
+ sample: 0
+ status:
+ description: The attachment state.
+ returned: always
+ type: str
+ sample: attached
+ description:
+ description: The description.
+ returned: always
+ type: str
+ sample: My interface
+ groups:
+ description: One or more security groups.
+ returned: always
+ type: list
+ elements: dict
+ contains:
+ group_id:
+ description: The ID of the security group.
+ returned: always
+ type: str
+ sample: sg-abcdef12
+ group_name:
+ description: The name of the security group.
+ returned: always
+ type: str
+ sample: mygroup
+ ipv6_addresses:
+ description: One or more IPv6 addresses associated with the network interface.
+ returned: always
+ type: list
+ elements: dict
+ contains:
+ ipv6_address:
+ description: The IPv6 address.
+ returned: always
+ type: str
+ sample: "2001:0db8:85a3:0000:0000:8a2e:0370:7334"
+ mac_address:
+ description: The MAC address.
+ returned: always
+ type: str
+ sample: "00:11:22:33:44:55"
+ network_interface_id:
+ description: The ID of the network interface.
+ returned: always
+ type: str
+ sample: eni-01234567
+ owner_id:
+ description: The AWS account ID of the owner of the network interface.
+ returned: always
+ type: str
+ sample: 01234567890
+ private_ip_address:
+ description: The IPv4 address of the network interface within the subnet.
+ returned: always
+ type: str
+ sample: 10.0.0.1
+ private_ip_addresses:
+ description: The private IPv4 addresses associated with the network interface.
+ returned: always
+ type: list
+ elements: dict
+ contains:
+ association:
+ description: The association information for an Elastic IP address (IPv4) associated with the network interface.
+ returned: always
+ type: complex
+ contains:
+ ip_owner_id:
+ description: The ID of the owner of the Elastic IP address.
+ returned: always
+ type: str
+ sample: amazon
+ public_dns_name:
+ description: The public DNS name.
+ returned: always
+ type: str
+ sample: ""
+ public_ip:
+ description: The public IP address or Elastic IP address bound to the network interface.
+ returned: always
+ type: str
+ sample: 1.2.3.4
+ primary:
+ description: Indicates whether this IPv4 address is the primary private IP address of the network interface.
+ returned: always
+ type: bool
+ sample: true
+ private_ip_address:
+ description: The private IPv4 address of the network interface.
+ returned: always
+ type: str
+ sample: 10.0.0.1
+ source_dest_check:
+ description: Indicates whether source/destination checking is enabled.
+ returned: always
+ type: bool
+ sample: true
+ status:
+ description: The status of the network interface.
+ returned: always
+ type: str
+ sample: in-use
+ subnet_id:
+ description: The ID of the subnet for the network interface.
+ returned: always
+ type: str
+ sample: subnet-0123456
+ vpc_id:
+ description: The ID of the VPC for the network interface.
+ returned: always
+ type: str
+ sample: vpc-0123456
+ placement:
+ description: The location where the instance launched, if applicable.
+ returned: always
+ type: complex
+ contains:
+ availability_zone:
+ description: The Availability Zone of the instance.
+ returned: always
+ type: str
+ sample: ap-southeast-2a
+ group_name:
+ description: The name of the placement group the instance is in (for cluster compute instances).
+ returned: always
+ type: str
+ sample: ""
+ tenancy:
+ description: The tenancy of the instance (if the instance is running in a VPC).
+ returned: always
+ type: str
+ sample: default
+ private_dns_name:
+ description: The private DNS name.
+ returned: always
+ type: str
+ sample: ip-10-0-0-1.ap-southeast-2.compute.internal
+ private_ip_address:
+ description: The IPv4 address of the network interface within the subnet.
+ returned: always
+ type: str
+ sample: 10.0.0.1
+ product_codes:
+ description: One or more product codes.
+ returned: always
+ type: list
+ elements: dict
+ contains:
+ product_code_id:
+ description: The product code.
+ returned: always
+ type: str
+ sample: aw0evgkw8ef3n2498gndfgasdfsd5cce
+ product_code_type:
+ description: The type of product code.
+ returned: always
+ type: str
+ sample: marketplace
+ public_dns_name:
+ description: The public DNS name assigned to the instance.
+ returned: always
+ type: str
+ sample:
+ public_ip_address:
+ description: The public IPv4 address assigned to the instance
+ returned: always
+ type: str
+ sample: 52.0.0.1
+ root_device_name:
+ description: The device name of the root device
+ returned: always
+ type: str
+ sample: /dev/sda1
+ root_device_type:
+ description: The type of root device used by the AMI.
+ returned: always
+ type: str
+ sample: ebs
+ security_groups:
+ description: One or more security groups for the instance.
+ returned: always
+ type: list
+ elements: dict
+ contains:
+ group_id:
+ description: The ID of the security group.
+ returned: always
+ type: str
+ sample: sg-0123456
+ group_name:
+ description: The name of the security group.
+ returned: always
+ type: str
+ sample: my-security-group
+ source_dest_check:
+ description: Indicates whether source/destination checking is enabled.
+ returned: always
+ type: bool
+ sample: true
+ state:
+ description: The current state of the instance.
+ returned: always
+ type: complex
+ contains:
+ code:
+ description: The low byte represents the state.
+ returned: always
+ type: int
+ sample: 16
+ name:
+ description: The name of the state.
+ returned: always
+ type: str
+ sample: running
+ state_transition_reason:
+ description: The reason for the most recent state transition.
+ returned: always
+ type: str
+ sample:
+ subnet_id:
+ description: The ID of the subnet in which the instance is running.
+ returned: always
+ type: str
+ sample: subnet-00abcdef
+ tags:
+ description: Any tags assigned to the instance.
+ returned: always
+ type: dict
+ sample:
+ virtualization_type:
+ description: The type of virtualization of the AMI.
+ returned: always
+ type: str
+ sample: hvm
+ vpc_id:
+ description: The ID of the VPC the instance is in.
+ returned: always
+ type: str
+ sample: vpc-0011223344
+'''
+
+import datetime
+
+try:
+ import botocore
+except ImportError:
+ pass # caught by AnsibleAWSModule
+
+from ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import ansible_dict_to_boto3_filter_list
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import boto3_tag_list_to_ansible_dict
+
+
+@AWSRetry.jittered_backoff()
+def _describe_instances(connection, **params):
+ paginator = connection.get_paginator('describe_instances')
+ return paginator.paginate(**params).build_full_result()
+
+
+def list_ec2_instances(connection, module):
+
+ instance_ids = module.params.get("instance_ids")
+ uptime = module.params.get('minimum_uptime')
+ filters = ansible_dict_to_boto3_filter_list(module.params.get("filters"))
+
+ try:
+ reservations = _describe_instances(connection, InstanceIds=instance_ids, Filters=filters)
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg="Failed to list ec2 instances")
+
+ instances = []
+
+ if uptime:
+ timedelta = int(uptime) if uptime else 0
+ oldest_launch_time = datetime.datetime.utcnow() - datetime.timedelta(minutes=timedelta)
+ # Get instances from reservations
+ for reservation in reservations['Reservations']:
+ instances += [instance for instance in reservation['Instances'] if instance['LaunchTime'].replace(tzinfo=None) < oldest_launch_time]
+ else:
+ for reservation in reservations['Reservations']:
+ instances = instances + reservation['Instances']
+
+ # Turn the boto3 result in to ansible_friendly_snaked_names
+ snaked_instances = [camel_dict_to_snake_dict(instance) for instance in instances]
+
+ # Turn the boto3 result in to ansible friendly tag dictionary
+ for instance in snaked_instances:
+ instance['tags'] = boto3_tag_list_to_ansible_dict(instance.get('tags', []), 'key', 'value')
+
+ module.exit_json(instances=snaked_instances)
+
+
+def main():
+
+ argument_spec = dict(
+ minimum_uptime=dict(required=False, type='int', default=None, aliases=['uptime']),
+ instance_ids=dict(default=[], type='list', elements='str'),
+ filters=dict(default={}, type='dict')
+ )
+
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ mutually_exclusive=[
+ ['instance_ids', 'filters']
+ ],
+ supports_check_mode=True,
+ )
+ if module._name == 'ec2_instance_facts':
+ module.deprecate("The 'ec2_instance_facts' module has been renamed to 'ec2_instance_info'", date='2021-12-01', collection_name='amazon.aws')
+
+ try:
+ connection = module.client('ec2')
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg='Failed to connect to AWS')
+
+ list_ec2_instances(connection, module)
+
+
+if __name__ == '__main__':
+ main()
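The `minimum_uptime` option above filters on each instance's timezone-aware `LaunchTime`, stripping the tzinfo so it can be compared against a naive UTC cutoff. A self-contained sketch of that comparison (the instance dicts are illustrative stand-ins for the `describe_instances` reservation contents):

```python
import datetime


def filter_by_uptime(instances, minimum_uptime_minutes):
    """Keep only instances launched at least `minimum_uptime_minutes` ago.

    Each instance is a dict with a timezone-aware 'LaunchTime', mirroring
    the shape boto3's describe_instances returns.
    """
    cutoff = datetime.datetime.utcnow() - datetime.timedelta(minutes=minimum_uptime_minutes)
    # LaunchTime is tz-aware; drop tzinfo to compare against the naive cutoff.
    return [i for i in instances if i["LaunchTime"].replace(tzinfo=None) < cutoff]


now = datetime.datetime.now(datetime.timezone.utc)
fleet = [
    {"InstanceId": "i-old", "LaunchTime": now - datetime.timedelta(minutes=90)},
    {"InstanceId": "i-new", "LaunchTime": now - datetime.timedelta(minutes=10)},
]
print([i["InstanceId"] for i in filter_by_uptime(fleet, 60)])  # ['i-old']
```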
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_instance_info.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_instance_info.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_instance_info.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_instance_info.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,590 @@
+#!/usr/bin/python
+# Copyright: Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import absolute_import, division, print_function
+__metaclass__ = type
+
+
+DOCUMENTATION = r'''
+---
+module: ec2_instance_info
+version_added: 1.0.0
+short_description: Gather information about ec2 instances in AWS
+description:
+ - Gather information about ec2 instances in AWS
+ - This module was called C(ec2_instance_facts) before Ansible 2.9. The usage did not change.
+author:
+ - Michael Schuett (@michaeljs1990)
+ - Rob White (@wimnat)
+options:
+ instance_ids:
+ description:
+ - If you specify one or more instance IDs, only instances that have the specified IDs are returned.
+ required: false
+ type: list
+ elements: str
+ filters:
+ description:
+ - A dict of filters to apply. Each dict item consists of a filter key and a filter value. See
+ U(https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstances.html) for possible filters. Filter
+ names and values are case sensitive.
+ required: false
+ default: {}
+ type: dict
+ minimum_uptime:
+ description:
+ - Minimum running uptime in minutes of instances. For example, if I(uptime) is C(60), return all instances that have run for more than 60 minutes.
+ required: false
+ aliases: ['uptime']
+ type: int
+
+
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+
+'''
+
+EXAMPLES = r'''
+# Note: These examples do not set authentication details, see the AWS Guide for details.
+
+- name: Gather information about all instances
+ amazon.aws.ec2_instance_info:
+
+- name: Gather information about all instances in AZ ap-southeast-2a
+ amazon.aws.ec2_instance_info:
+ filters:
+ availability-zone: ap-southeast-2a
+
+- name: Gather information about a particular instance using ID
+ amazon.aws.ec2_instance_info:
+ instance_ids:
+ - i-12345678
+
+- name: Gather information about any instance with a tag key Name and value Example
+ amazon.aws.ec2_instance_info:
+ filters:
+ "tag:Name": Example
+
+- name: Gather information about any instance in states "shutting-down", "stopping", "stopped"
+ amazon.aws.ec2_instance_info:
+ filters:
+ instance-state-name: [ "shutting-down", "stopping", "stopped" ]
+
+- name: Gather information about any instance with Name beginning with RHEL and an uptime of at least 60 minutes
+ amazon.aws.ec2_instance_info:
+ region: "{{ ec2_region }}"
+ uptime: 60
+ filters:
+ "tag:Name": "RHEL-*"
+ instance-state-name: [ "running" ]
+ register: ec2_node_info
+
+'''
+
+RETURN = r'''
+instances:
+ description: A list of EC2 instances.
+ returned: always
+ type: complex
+ contains:
+ ami_launch_index:
+ description: The AMI launch index, which can be used to find this instance in the launch group.
+ returned: always
+ type: int
+ sample: 0
+ architecture:
+ description: The architecture of the image
+ returned: always
+ type: str
+ sample: x86_64
+ block_device_mappings:
+ description: Any block device mapping entries for the instance.
+ returned: always
+ type: complex
+ contains:
+ device_name:
+ description: The device name exposed to the instance (for example, /dev/sdh or xvdh).
+ returned: always
+ type: str
+ sample: /dev/sdh
+ ebs:
+ description: Parameters used to automatically set up EBS volumes when the instance is launched.
+ returned: always
+ type: complex
+ contains:
+ attach_time:
+ description: The time stamp when the attachment initiated.
+ returned: always
+ type: str
+ sample: "2017-03-23T22:51:24+00:00"
+ delete_on_termination:
+ description: Indicates whether the volume is deleted on instance termination.
+ returned: always
+ type: bool
+ sample: true
+ status:
+ description: The attachment state.
+ returned: always
+ type: str
+ sample: attached
+ volume_id:
+ description: The ID of the EBS volume
+ returned: always
+ type: str
+ sample: vol-12345678
+ cpu_options:
+ description: The CPU options set for the instance.
+ returned: always
+ type: complex
+ contains:
+ core_count:
+ description: The number of CPU cores for the instance.
+ returned: always
+ type: int
+ sample: 1
+ threads_per_core:
+ description: The number of threads per CPU core. On supported instances, a value of 1 means Intel Hyper-Threading Technology is disabled.
+ returned: always
+ type: int
+ sample: 1
+ client_token:
+ description: The idempotency token you provided when you launched the instance, if applicable.
+ returned: always
+ type: str
+ sample: mytoken
+ ebs_optimized:
+ description: Indicates whether the instance is optimized for EBS I/O.
+ returned: always
+ type: bool
+ sample: false
+ hypervisor:
+ description: The hypervisor type of the instance.
+ returned: always
+ type: str
+ sample: xen
+ iam_instance_profile:
+ description: The IAM instance profile associated with the instance, if applicable.
+ returned: always
+ type: complex
+ contains:
+ arn:
+ description: The Amazon Resource Name (ARN) of the instance profile.
+ returned: always
+ type: str
+ sample: "arn:aws:iam::000012345678:instance-profile/myprofile"
+ id:
+ description: The ID of the instance profile
+ returned: always
+ type: str
+ sample: JFJ397FDG400FG9FD1N
+ image_id:
+ description: The ID of the AMI used to launch the instance.
+ returned: always
+ type: str
+ sample: ami-0011223344
+ instance_id:
+ description: The ID of the instance.
+ returned: always
+ type: str
+ sample: i-012345678
+ instance_type:
+ description: The instance type size of the running instance.
+ returned: always
+ type: str
+ sample: t2.micro
+ key_name:
+ description: The name of the key pair, if this instance was launched with an associated key pair.
+ returned: always
+ type: str
+ sample: my-key
+ launch_time:
+ description: The time the instance was launched.
+ returned: always
+ type: str
+ sample: "2017-03-23T22:51:24+00:00"
+ monitoring:
+ description: The monitoring for the instance.
+ returned: always
+ type: complex
+ contains:
+ state:
+ description: Indicates whether detailed monitoring is enabled. Otherwise, basic monitoring is enabled.
+ returned: always
+ type: str
+ sample: disabled
+ network_interfaces:
+ description: One or more network interfaces for the instance.
+ returned: always
+ type: complex
+ contains:
+ association:
+ description: The association information for an Elastic IPv4 associated with the network interface.
+ returned: always
+ type: complex
+ contains:
+ ip_owner_id:
+ description: The ID of the owner of the Elastic IP address.
+ returned: always
+ type: str
+ sample: amazon
+ public_dns_name:
+ description: The public DNS name.
+ returned: always
+ type: str
+ sample: ""
+ public_ip:
+ description: The public IP address or Elastic IP address bound to the network interface.
+ returned: always
+ type: str
+ sample: 1.2.3.4
+ attachment:
+ description: The network interface attachment.
+ returned: always
+ type: complex
+ contains:
+ attach_time:
+ description: The time stamp when the attachment initiated.
+ returned: always
+ type: str
+ sample: "2017-03-23T22:51:24+00:00"
+ attachment_id:
+ description: The ID of the network interface attachment.
+ returned: always
+ type: str
+ sample: eni-attach-3aff3f
+ delete_on_termination:
+ description: Indicates whether the network interface is deleted when the instance is terminated.
+ returned: always
+ type: bool
+ sample: true
+ device_index:
+ description: The index of the device on the instance for the network interface attachment.
+ returned: always
+ type: int
+ sample: 0
+ status:
+ description: The attachment state.
+ returned: always
+ type: str
+ sample: attached
+ description:
+ description: The description.
+ returned: always
+ type: str
+ sample: My interface
+ groups:
+ description: One or more security groups.
+ returned: always
+ type: list
+ elements: dict
+ contains:
+ group_id:
+ description: The ID of the security group.
+ returned: always
+ type: str
+ sample: sg-abcdef12
+ group_name:
+ description: The name of the security group.
+ returned: always
+ type: str
+ sample: mygroup
+ ipv6_addresses:
+ description: One or more IPv6 addresses associated with the network interface.
+ returned: always
+ type: list
+ elements: dict
+ contains:
+ ipv6_address:
+ description: The IPv6 address.
+ returned: always
+ type: str
+ sample: "2001:0db8:85a3:0000:0000:8a2e:0370:7334"
+ mac_address:
+ description: The MAC address.
+ returned: always
+ type: str
+ sample: "00:11:22:33:44:55"
+ network_interface_id:
+ description: The ID of the network interface.
+ returned: always
+ type: str
+ sample: eni-01234567
+ owner_id:
+ description: The AWS account ID of the owner of the network interface.
+ returned: always
+ type: str
+ sample: 01234567890
+ private_ip_address:
+ description: The IPv4 address of the network interface within the subnet.
+ returned: always
+ type: str
+ sample: 10.0.0.1
+ private_ip_addresses:
+ description: The private IPv4 addresses associated with the network interface.
+ returned: always
+ type: list
+ elements: dict
+ contains:
+ association:
+ description: The association information for an Elastic IP address (IPv4) associated with the network interface.
+ returned: always
+ type: complex
+ contains:
+ ip_owner_id:
+ description: The ID of the owner of the Elastic IP address.
+ returned: always
+ type: str
+ sample: amazon
+ public_dns_name:
+ description: The public DNS name.
+ returned: always
+ type: str
+ sample: ""
+ public_ip:
+ description: The public IP address or Elastic IP address bound to the network interface.
+ returned: always
+ type: str
+ sample: 1.2.3.4
+ primary:
+ description: Indicates whether this IPv4 address is the primary private IP address of the network interface.
+ returned: always
+ type: bool
+ sample: true
+ private_ip_address:
+ description: The private IPv4 address of the network interface.
+ returned: always
+ type: str
+ sample: 10.0.0.1
+ source_dest_check:
+ description: Indicates whether source/destination checking is enabled.
+ returned: always
+ type: bool
+ sample: true
+ status:
+ description: The status of the network interface.
+ returned: always
+ type: str
+ sample: in-use
+ subnet_id:
+ description: The ID of the subnet for the network interface.
+ returned: always
+ type: str
+ sample: subnet-0123456
+ vpc_id:
+ description: The ID of the VPC for the network interface.
+ returned: always
+ type: str
+ sample: vpc-0123456
+ placement:
+ description: The location where the instance launched, if applicable.
+ returned: always
+ type: complex
+ contains:
+ availability_zone:
+ description: The Availability Zone of the instance.
+ returned: always
+ type: str
+ sample: ap-southeast-2a
+ group_name:
+ description: The name of the placement group the instance is in (for cluster compute instances).
+ returned: always
+ type: str
+ sample: ""
+ tenancy:
+ description: The tenancy of the instance (if the instance is running in a VPC).
+ returned: always
+ type: str
+ sample: default
+ private_dns_name:
+ description: The private DNS name.
+ returned: always
+ type: str
+ sample: ip-10-0-0-1.ap-southeast-2.compute.internal
+ private_ip_address:
+ description: The IPv4 address of the network interface within the subnet.
+ returned: always
+ type: str
+ sample: 10.0.0.1
+ product_codes:
+ description: One or more product codes.
+ returned: always
+ type: list
+ elements: dict
+ contains:
+ product_code_id:
+ description: The product code.
+ returned: always
+ type: str
+ sample: aw0evgkw8ef3n2498gndfgasdfsd5cce
+ product_code_type:
+ description: The type of product code.
+ returned: always
+ type: str
+ sample: marketplace
+ public_dns_name:
+ description: The public DNS name assigned to the instance.
+ returned: always
+ type: str
+ sample: ""
+ public_ip_address:
+ description: The public IPv4 address assigned to the instance.
+ returned: always
+ type: str
+ sample: 52.0.0.1
+ root_device_name:
+ description: The device name of the root device.
+ returned: always
+ type: str
+ sample: /dev/sda1
+ root_device_type:
+ description: The type of root device used by the AMI.
+ returned: always
+ type: str
+ sample: ebs
+ security_groups:
+ description: One or more security groups for the instance.
+ returned: always
+ type: list
+ elements: dict
+ contains:
+ group_id:
+ description: The ID of the security group.
+ returned: always
+ type: str
+ sample: sg-0123456
+ group_name:
+ description: The name of the security group.
+ returned: always
+ type: str
+ sample: my-security-group
+ source_dest_check:
+ description: Indicates whether source/destination checking is enabled.
+ returned: always
+ type: bool
+ sample: true
+ state:
+ description: The current state of the instance.
+ returned: always
+ type: complex
+ contains:
+ code:
+ description: The low byte represents the state.
+ returned: always
+ type: int
+ sample: 16
+ name:
+ description: The name of the state.
+ returned: always
+ type: str
+ sample: running
+ state_transition_reason:
+ description: The reason for the most recent state transition.
+ returned: always
+ type: str
+ sample: ""
+ subnet_id:
+ description: The ID of the subnet in which the instance is running.
+ returned: always
+ type: str
+ sample: subnet-00abcdef
+ tags:
+ description: Any tags assigned to the instance.
+ returned: always
+ type: dict
+ sample:
+ virtualization_type:
+ description: The type of virtualization of the AMI.
+ returned: always
+ type: str
+ sample: hvm
+ vpc_id:
+ description: The ID of the VPC the instance is in.
+ returned: always
+ type: str
+ sample: vpc-0011223344
+'''
+
+import datetime
+
+try:
+ import botocore
+except ImportError:
+ pass # caught by AnsibleAWSModule
+
+from ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import ansible_dict_to_boto3_filter_list
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import boto3_tag_list_to_ansible_dict
+
+
+@AWSRetry.jittered_backoff()
+def _describe_instances(connection, **params):
+ paginator = connection.get_paginator('describe_instances')
+ return paginator.paginate(**params).build_full_result()
+
+
+def list_ec2_instances(connection, module):
+
+ instance_ids = module.params.get("instance_ids")
+ uptime = module.params.get('minimum_uptime')
+ filters = ansible_dict_to_boto3_filter_list(module.params.get("filters"))
+
+ try:
+ reservations = _describe_instances(connection, InstanceIds=instance_ids, Filters=filters)
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg="Failed to list ec2 instances")
+
+ instances = []
+
+ if uptime:
+ oldest_launch_time = datetime.datetime.utcnow() - datetime.timedelta(minutes=int(uptime))
+ # Get instances from reservations
+ for reservation in reservations['Reservations']:
+ instances += [instance for instance in reservation['Instances'] if instance['LaunchTime'].replace(tzinfo=None) < oldest_launch_time]
+ else:
+ for reservation in reservations['Reservations']:
+ instances += reservation['Instances']
+
+ # Turn the boto3 result into ansible_friendly_snaked_names
+ snaked_instances = [camel_dict_to_snake_dict(instance) for instance in instances]
+
+ # Turn the boto3 result into an ansible-friendly tag dictionary
+ for instance in snaked_instances:
+ instance['tags'] = boto3_tag_list_to_ansible_dict(instance.get('tags', []), 'key', 'value')
+
+ module.exit_json(instances=snaked_instances)
+
+
+def main():
+
+ argument_spec = dict(
+ minimum_uptime=dict(required=False, type='int', default=None, aliases=['uptime']),
+ instance_ids=dict(default=[], type='list', elements='str'),
+ filters=dict(default={}, type='dict')
+ )
+
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ mutually_exclusive=[
+ ['instance_ids', 'filters']
+ ],
+ supports_check_mode=True,
+ )
+ if module._name == 'ec2_instance_facts':
+ module.deprecate("The 'ec2_instance_facts' module has been renamed to 'ec2_instance_info'", date='2021-12-01', collection_name='amazon.aws')
+
+ try:
+ connection = module.client('ec2')
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg='Failed to connect to AWS')
+
+ list_ec2_instances(connection, module)
+
+
+if __name__ == '__main__':
+ main()
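The uptime filtering in `list_ec2_instances` above keeps an instance only when its `LaunchTime` is older than a cutoff computed from `minimum_uptime`. A minimal standalone sketch of that comparison, with simplified instance dicts standing in for the boto3 response (`filter_by_minimum_uptime` is a hypothetical helper for illustration, not part of the module):

```python
from datetime import datetime, timedelta


def filter_by_minimum_uptime(instances, minimum_uptime_minutes, now=None):
    """Keep only instances launched at least minimum_uptime_minutes ago.

    Mirrors the LaunchTime comparison in list_ec2_instances: the cutoff
    is "now" minus the requested uptime, and anything launched before
    that cutoff qualifies.
    """
    now = now or datetime.utcnow()
    oldest_launch_time = now - timedelta(minutes=int(minimum_uptime_minutes))
    return [i for i in instances if i['LaunchTime'] < oldest_launch_time]


instances = [
    {'InstanceId': 'i-old', 'LaunchTime': datetime(2021, 11, 12, 10, 0)},
    {'InstanceId': 'i-new', 'LaunchTime': datetime(2021, 11, 12, 11, 55)},
]
survivors = filter_by_minimum_uptime(
    instances, 60, now=datetime(2021, 11, 12, 12, 0))
# With a 60-minute minimum and "now" at 12:00, only 'i-old'
# (launched two hours earlier) survives the filter.
```

In the real module the `LaunchTime` values are timezone-aware, which is why the code above strips `tzinfo` before comparing against the naive `utcnow()` result.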
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_instance.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_instance.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_instance.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_instance.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,1910 @@
+#!/usr/bin/python
+# Copyright: Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import absolute_import, division, print_function
+__metaclass__ = type
+
+
+DOCUMENTATION = r'''
+---
+module: ec2_instance
+version_added: 1.0.0
+short_description: Create & manage EC2 instances
+description:
+ - Create and manage AWS EC2 instances.
+ - >
+ Note: This module does not support creating
+ L(EC2 Spot instances,https://aws.amazon.com/ec2/spot/). The M(amazon.aws.ec2) module
+ can create and manage spot instances.
+author:
+ - Ryan Scott Brown (@ryansb)
+options:
+ instance_ids:
+ description:
+ - If you specify one or more instance IDs, only instances that have the specified IDs are returned.
+ type: list
+ elements: str
+ state:
+ description:
+ - Goal state for the instances.
+ - "I(state=present): ensures instances exist, but does not guarantee any state (e.g. running). Newly-launched instances will be run by EC2."
+ - "I(state=running): I(state=present) + ensures the instances are running"
+ - "I(state=started): I(state=running) + waits for EC2 status checks to report OK if I(wait=true)"
+ - "I(state=stopped): ensures an existing instance is stopped."
+ - "I(state=rebooted): convenience alias for I(state=stopped) immediately followed by I(state=running)"
+ - "I(state=restarted): convenience alias for I(state=stopped) immediately followed by I(state=started)"
+ - "I(state=terminated): ensures an existing instance is terminated."
+ - "I(state=absent): alias for I(state=terminated)"
+ choices: [present, terminated, running, started, stopped, restarted, rebooted, absent]
+ default: present
+ type: str
+ wait:
+ description:
+ - Whether or not to wait for the desired state (use wait_timeout to customize this).
+ default: true
+ type: bool
+ wait_timeout:
+ description:
+ - How long to wait (in seconds) for the instance to finish booting/terminating.
+ default: 600
+ type: int
+ instance_type:
+ description:
+ - Instance type to use for the instance, see U(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html).
+ Only required when the instance is not already present.
+ default: t2.micro
+ type: str
+ user_data:
+ description:
+ - Opaque blob of data which is made available to the EC2 instance.
+ type: str
+ tower_callback:
+ description:
+ - Preconfigured user-data to enable an instance to perform a Tower callback (Linux only).
+ - Mutually exclusive with I(user_data).
+ - For Windows instances, to enable remote access via Ansible set I(tower_callback.windows) to true, and optionally set an admin password.
+ - If using 'windows' and 'set_password', callback to Tower will not be performed but the instance will be ready to receive winrm connections from Ansible.
+ type: dict
+ suboptions:
+ tower_address:
+ description:
+ - IP address or DNS name of Tower server. Must be accessible via this address from the VPC that this instance will be launched in.
+ type: str
+ job_template_id:
+ description:
+ - Either the integer ID of the Tower Job Template, or the name (name supported only for Tower 3.2+).
+ type: str
+ host_config_key:
+ description:
+ - Host configuration secret key generated by the Tower job template.
+ type: str
+ tags:
+ description:
+ - A hash/dictionary of tags to add to the new instance or to add/remove from an existing one.
+ type: dict
+ purge_tags:
+ description:
+ - Delete any tags not specified in the task that are on the instance.
+ This means you have to specify all the desired tags on each task affecting an instance.
+ default: false
+ type: bool
+ image:
+ description:
+ - An image to use for the instance. The M(amazon.aws.ec2_ami_info) module may be used to retrieve images.
+ One of I(image) or I(image_id) are required when instance is not already present.
+ type: dict
+ suboptions:
+ id:
+ description:
+ - The AMI ID.
+ type: str
+ ramdisk:
+ description:
+ - Overrides the AMI's default ramdisk ID.
+ type: str
+ kernel:
+ description:
+ - A string AKI to override the AMI kernel.
+ type: str
+ image_id:
+ description:
+ - I(ami) ID to use for the instance. One of I(image) or I(image_id) are required when instance is not already present.
+ - This is an alias for I(image.id).
+ type: str
+ security_groups:
+ description:
+ - A list of security group IDs or names (strings). Mutually exclusive with I(security_group).
+ type: list
+ elements: str
+ security_group:
+ description:
+ - A security group ID or name. Mutually exclusive with I(security_groups).
+ type: str
+ name:
+ description:
+ - The Name tag for the instance.
+ type: str
+ vpc_subnet_id:
+ description:
+ - The subnet ID in which to launch the instance (VPC).
+ If none is provided, M(amazon.aws.ec2_instance) will choose the default zone of the default VPC.
+ aliases: ['subnet_id']
+ type: str
+ network:
+ description:
+ - Either a dictionary containing the key 'interfaces' corresponding to a list of network interface IDs or
+ containing specifications for a single network interface.
+ - Use the M(amazon.aws.ec2_eni) module to create ENIs with special settings.
+ type: dict
+ suboptions:
+ interfaces:
+ description:
+ - a list of ENI IDs (strings) or a list of objects containing the key I(id).
+ type: list
+ assign_public_ip:
+ description:
+ - when true assigns a public IP address to the interface
+ type: bool
+ private_ip_address:
+ description:
+ - an IPv4 address to assign to the interface
+ type: str
+ ipv6_addresses:
+ description:
+ - a list of IPv6 addresses to assign to the network interface
+ type: list
+ source_dest_check:
+ description:
+ - controls whether source/destination checking is enabled on the interface
+ type: bool
+ description:
+ description:
+ - a description for the network interface
+ type: str
+ private_ip_addresses:
+ description:
+ - a list of IPv4 addresses to assign to the network interface
+ type: list
+ subnet_id:
+ description:
+ - the subnet to connect the network interface to
+ type: str
+ delete_on_termination:
+ description:
+ - Delete the interface when the instance it is attached to is
+ terminated.
+ type: bool
+ device_index:
+ description:
+ - The index of the interface to modify
+ type: int
+ groups:
+ description:
+ - a list of security group IDs to attach to the interface
+ type: list
+ volumes:
+ description:
+ - A list of block device mappings. By default this will always use the AMI root device, so the volumes option is primarily for adding more storage.
+ - A mapping contains the (optional) keys device_name, virtual_name, ebs.volume_type, ebs.volume_size, ebs.kms_key_id,
+ ebs.iops, and ebs.delete_on_termination.
+ - Setting the ebs.throughput value requires botocore>=1.19.27.
+ - For more information about each parameter, see U(https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_BlockDeviceMapping.html).
+ type: list
+ elements: dict
+ launch_template:
+ description:
+ - The EC2 launch template to base instance configuration on.
+ type: dict
+ suboptions:
+ id:
+ description:
+ - the ID of the launch template (optional if name is specified).
+ type: str
+ name:
+ description:
+ - the pretty name of the launch template (optional if id is specified).
+ type: str
+ version:
+ description:
+ - the specific version of the launch template to use. If unspecified, the template default is chosen.
+ key_name:
+ description:
+ - Name of the SSH access key to assign to the instance - must exist in the region the instance is created.
+ type: str
+ availability_zone:
+ description:
+ - Specify an availability zone to use the default subnet in it. Useful if not specifying the I(vpc_subnet_id) parameter.
+ - If no subnet, ENI, or availability zone is provided, the default subnet in the default VPC will be used in the first AZ (alphabetically sorted).
+ type: str
+ instance_initiated_shutdown_behavior:
+ description:
+ - Whether to stop or terminate an instance upon shutdown.
+ choices: ['stop', 'terminate']
+ type: str
+ tenancy:
+ description:
+ - What type of tenancy to allow an instance to use. Default is shared tenancy. Dedicated tenancy will incur additional charges.
+ choices: ['dedicated', 'default']
+ type: str
+ termination_protection:
+ description:
+ - Whether to enable termination protection.
+ This module will not terminate an instance with termination protection active, it must be turned off first.
+ type: bool
+ cpu_credit_specification:
+ description:
+ - For T series instances, choose whether to allow increased charges to buy CPU credits if the default pool is depleted.
+ - Choose I(unlimited) to enable buying additional CPU credits.
+ choices: ['unlimited', 'standard']
+ type: str
+ cpu_options:
+ description:
+ - Reduce the number of vCPU exposed to the instance.
+ - These parameters can only be set at instance launch. The two suboptions I(threads_per_core) and I(core_count) are mandatory.
+ - See U(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-optimize-cpu.html) for combinations available.
+ type: dict
+ suboptions:
+ threads_per_core:
+ description:
+ - Select the number of threads per core to enable. A value of 1 disables Intel Hyper-Threading; 2 enables it.
+ choices: [1, 2]
+ required: true
+ type: int
+ core_count:
+ description:
+ - Set the number of cores to enable.
+ required: true
+ type: int
+ detailed_monitoring:
+ description:
+ - Whether to allow detailed cloudwatch metrics to be collected, enabling more detailed alerting.
+ type: bool
+ ebs_optimized:
+ description:
+ - Whether the instance should use optimized EBS volumes, see U(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html).
+ type: bool
+ filters:
+ description:
+ - A dict of filters to apply when deciding whether existing instances match and should be altered. Each dict item
+ consists of a filter key and a filter value. See
+ U(https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstances.html)
+ for possible filters. Filter names and values are case sensitive.
+ - By default, instances are filtered for counting by their "Name" tag, base AMI, state (running, by default), and
+ subnet ID. Any queryable filter can be used. Good candidates are specific tags, SSH keys, or security groups.
+ type: dict
+ instance_role:
+ description:
+ - The ARN or name of an EC2-enabled instance role to be used.
+ - If a name is provided instead of a full ARN, the role with a matching name will be used from the active
+ AWS account; this requires the ListInstanceProfiles permission
+ (U(https://docs.aws.amazon.com/IAM/latest/APIReference/API_ListInstanceProfiles.html)).
+ type: str
+ placement_group:
+ description:
+ - The placement group that needs to be assigned to the instance.
+ type: str
+ metadata_options:
+ description:
+ - Modify the metadata options for the instance.
+ - See U(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html) for more information.
+ - The two suboptions I(http_endpoint) and I(http_tokens) are supported.
+ type: dict
+ version_added: 2.0.0
+ suboptions:
+ http_endpoint:
+ description:
+ - Enables or disables the HTTP metadata endpoint on instances.
+ - If set to disabled, the instance metadata will not be accessible.
+ choices: [enabled, disabled]
+ default: enabled
+ type: str
+ http_tokens:
+ description:
+ - Set the state of token usage for instance metadata requests.
+ - If the state is optional (v1 and v2), instance metadata can be retrieved with or without a signed token header on request.
+ - If the state is required (v2), a signed token header must be sent with any instance metadata retrieval requests.
+ choices: [optional, required]
+ default: optional
+ type: str
+
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+
+'''
+
+EXAMPLES = '''
+# Note: These examples do not set authentication details, see the AWS Guide for details.
+
+- name: Terminate every running instance in a region. Use with EXTREME caution.
+ amazon.aws.ec2_instance:
+ state: absent
+ filters:
+ instance-state-name: running
+
+- name: restart a particular instance by its ID
+ amazon.aws.ec2_instance:
+ state: restarted
+ instance_ids:
+ - i-12345678
+
+- name: start an instance with a public IP address
+ amazon.aws.ec2_instance:
+ name: "public-compute-instance"
+ key_name: "prod-ssh-key"
+ vpc_subnet_id: subnet-5ca1ab1e
+ instance_type: c5.large
+ security_group: default
+ network:
+ assign_public_ip: true
+ image_id: ami-123456
+ tags:
+ Environment: Testing
+
+- name: start an instance and Add EBS
+ amazon.aws.ec2_instance:
+ name: "public-withebs-instance"
+ vpc_subnet_id: subnet-5ca1ab1e
+ instance_type: t2.micro
+ key_name: "prod-ssh-key"
+ security_group: default
+ volumes:
+ - device_name: /dev/sda1
+ ebs:
+ volume_size: 16
+ delete_on_termination: true
+
+- name: start an instance with a cpu_options
+ amazon.aws.ec2_instance:
+ name: "public-cpuoption-instance"
+ vpc_subnet_id: subnet-5ca1ab1e
+ tags:
+ Environment: Testing
+ instance_type: c4.large
+ volumes:
+ - device_name: /dev/sda1
+ ebs:
+ delete_on_termination: true
+ cpu_options:
+ core_count: 1
+ threads_per_core: 1
+
+- name: start an instance and have it begin a Tower callback on boot
+ amazon.aws.ec2_instance:
+ name: "tower-callback-test"
+ key_name: "prod-ssh-key"
+ vpc_subnet_id: subnet-5ca1ab1e
+ security_group: default
+ tower_callback:
+ # IP or hostname of tower server
+ tower_address: 1.2.3.4
+ job_template_id: 876
+ host_config_key: '[secret config key goes here]'
+ network:
+ assign_public_ip: true
+ image_id: ami-123456
+ cpu_credit_specification: unlimited
+ tags:
+ SomeThing: "A value"
+
+- name: start an instance with ENI (An existing ENI ID is required)
+ amazon.aws.ec2_instance:
+ name: "public-eni-instance"
+ key_name: "prod-ssh-key"
+ vpc_subnet_id: subnet-5ca1ab1e
+ network:
+ interfaces:
+ - id: "eni-12345"
+ tags:
+ Env: "eni_on"
+ volumes:
+ - device_name: /dev/sda1
+ ebs:
+ delete_on_termination: true
+ instance_type: t2.micro
+ image_id: ami-123456
+
+- name: add second ENI interface
+ amazon.aws.ec2_instance:
+ name: "public-eni-instance"
+ network:
+ interfaces:
+ - id: "eni-12345"
+ - id: "eni-67890"
+ image_id: ami-123456
+ tags:
+ Env: "eni_on"
+ instance_type: t2.micro
+
+- name: start an instance with metadata options
+ amazon.aws.ec2_instance:
+ name: "public-metadataoptions-instance"
+ vpc_subnet_id: subnet-5ca1ab1e
+ instance_type: t3.small
+ image_id: ami-123456
+ tags:
+ Environment: Testing
+ metadata_options:
+ http_endpoint: enabled
+ http_tokens: optional
+'''
+
+RETURN = '''
+instances:
+ description: A list of EC2 instances.
+ returned: when wait == true
+ type: complex
+ contains:
+ ami_launch_index:
+ description: The AMI launch index, which can be used to find this instance in the launch group.
+ returned: always
+ type: int
+ sample: 0
+ architecture:
+ description: The architecture of the image.
+ returned: always
+ type: str
+ sample: x86_64
+ block_device_mappings:
+ description: Any block device mapping entries for the instance.
+ returned: always
+ type: complex
+ contains:
+ device_name:
+ description: The device name exposed to the instance (for example, /dev/sdh or xvdh).
+ returned: always
+ type: str
+ sample: /dev/sdh
+ ebs:
+ description: Parameters used to automatically set up EBS volumes when the instance is launched.
+ returned: always
+ type: complex
+ contains:
+ attach_time:
+ description: The time stamp when the attachment initiated.
+ returned: always
+ type: str
+ sample: "2017-03-23T22:51:24+00:00"
+ delete_on_termination:
+ description: Indicates whether the volume is deleted on instance termination.
+ returned: always
+ type: bool
+ sample: true
+ status:
+ description: The attachment state.
+ returned: always
+ type: str
+ sample: attached
+ volume_id:
+ description: The ID of the EBS volume.
+ returned: always
+ type: str
+ sample: vol-12345678
+ client_token:
+ description: The idempotency token you provided when you launched the instance, if applicable.
+ returned: always
+ type: str
+ sample: mytoken
+ ebs_optimized:
+ description: Indicates whether the instance is optimized for EBS I/O.
+ returned: always
+ type: bool
+ sample: false
+ hypervisor:
+ description: The hypervisor type of the instance.
+ returned: always
+ type: str
+ sample: xen
+ iam_instance_profile:
+ description: The IAM instance profile associated with the instance, if applicable.
+ returned: always
+ type: complex
+ contains:
+ arn:
+ description: The Amazon Resource Name (ARN) of the instance profile.
+ returned: always
+ type: str
+ sample: "arn:aws:iam::000012345678:instance-profile/myprofile"
+ id:
+ description: The ID of the instance profile.
+ returned: always
+ type: str
+ sample: JFJ397FDG400FG9FD1N
+ image_id:
+ description: The ID of the AMI used to launch the instance.
+ returned: always
+ type: str
+ sample: ami-0011223344
+ instance_id:
+ description: The ID of the instance.
+ returned: always
+ type: str
+ sample: i-012345678
+ instance_type:
+ description: The instance type size of the running instance.
+ returned: always
+ type: str
+ sample: t2.micro
+ key_name:
+ description: The name of the key pair, if this instance was launched with an associated key pair.
+ returned: always
+ type: str
+ sample: my-key
+ launch_time:
+ description: The time the instance was launched.
+ returned: always
+ type: str
+ sample: "2017-03-23T22:51:24+00:00"
+ monitoring:
+ description: The monitoring for the instance.
+ returned: always
+ type: complex
+ contains:
+ state:
+ description: Indicates whether detailed monitoring is enabled. Otherwise, basic monitoring is enabled.
+ returned: always
+ type: str
+ sample: disabled
+ network_interfaces:
+ description: One or more network interfaces for the instance.
+ returned: always
+ type: complex
+ contains:
+ association:
+ description: The association information for an Elastic IPv4 associated with the network interface.
+ returned: always
+ type: complex
+ contains:
+ ip_owner_id:
+ description: The ID of the owner of the Elastic IP address.
+ returned: always
+ type: str
+ sample: amazon
+ public_dns_name:
+ description: The public DNS name.
+ returned: always
+ type: str
+ sample: ""
+ public_ip:
+ description: The public IP address or Elastic IP address bound to the network interface.
+ returned: always
+ type: str
+ sample: 1.2.3.4
+ attachment:
+ description: The network interface attachment.
+ returned: always
+ type: complex
+ contains:
+ attach_time:
+ description: The time stamp when the attachment initiated.
+ returned: always
+ type: str
+ sample: "2017-03-23T22:51:24+00:00"
+ attachment_id:
+ description: The ID of the network interface attachment.
+ returned: always
+ type: str
+ sample: eni-attach-3aff3f
+ delete_on_termination:
+ description: Indicates whether the network interface is deleted when the instance is terminated.
+ returned: always
+ type: bool
+ sample: true
+ device_index:
+ description: The index of the device on the instance for the network interface attachment.
+ returned: always
+ type: int
+ sample: 0
+ status:
+ description: The attachment state.
+ returned: always
+ type: str
+ sample: attached
+ description:
+ description: The description.
+ returned: always
+ type: str
+ sample: My interface
+ groups:
+ description: One or more security groups.
+ returned: always
+ type: list
+ elements: dict
+ contains:
+ group_id:
+ description: The ID of the security group.
+ returned: always
+ type: str
+ sample: sg-abcdef12
+ group_name:
+ description: The name of the security group.
+ returned: always
+ type: str
+ sample: mygroup
+ ipv6_addresses:
+ description: One or more IPv6 addresses associated with the network interface.
+ returned: always
+ type: list
+ elements: dict
+ contains:
+ ipv6_address:
+ description: The IPv6 address.
+ returned: always
+ type: str
+ sample: "2001:0db8:85a3:0000:0000:8a2e:0370:7334"
+ mac_address:
+ description: The MAC address.
+ returned: always
+ type: str
+ sample: "00:11:22:33:44:55"
+ network_interface_id:
+ description: The ID of the network interface.
+ returned: always
+ type: str
+ sample: eni-01234567
+ owner_id:
+ description: The AWS account ID of the owner of the network interface.
+ returned: always
+ type: str
+ sample: 01234567890
+ private_ip_address:
+ description: The IPv4 address of the network interface within the subnet.
+ returned: always
+ type: str
+ sample: 10.0.0.1
+ private_ip_addresses:
+ description: The private IPv4 addresses associated with the network interface.
+ returned: always
+ type: list
+ elements: dict
+ contains:
+ association:
+ description: The association information for an Elastic IP address (IPv4) associated with the network interface.
+ returned: always
+ type: complex
+ contains:
+ ip_owner_id:
+ description: The ID of the owner of the Elastic IP address.
+ returned: always
+ type: str
+ sample: amazon
+ public_dns_name:
+ description: The public DNS name.
+ returned: always
+ type: str
+ sample: ""
+ public_ip:
+ description: The public IP address or Elastic IP address bound to the network interface.
+ returned: always
+ type: str
+ sample: 1.2.3.4
+ primary:
+ description: Indicates whether this IPv4 address is the primary private IP address of the network interface.
+ returned: always
+ type: bool
+ sample: true
+ private_ip_address:
+ description: The private IPv4 address of the network interface.
+ returned: always
+ type: str
+ sample: 10.0.0.1
+ source_dest_check:
+ description: Indicates whether source/destination checking is enabled.
+ returned: always
+ type: bool
+ sample: true
+ status:
+ description: The status of the network interface.
+ returned: always
+ type: str
+ sample: in-use
+ subnet_id:
+ description: The ID of the subnet for the network interface.
+ returned: always
+ type: str
+ sample: subnet-0123456
+ vpc_id:
+ description: The ID of the VPC for the network interface.
+ returned: always
+ type: str
+ sample: vpc-0123456
+ placement:
+ description: The location where the instance launched, if applicable.
+ returned: always
+ type: complex
+ contains:
+ availability_zone:
+ description: The Availability Zone of the instance.
+ returned: always
+ type: str
+ sample: ap-southeast-2a
+ group_name:
+ description: The name of the placement group the instance is in (for cluster compute instances).
+ returned: always
+ type: str
+ sample: ""
+ tenancy:
+ description: The tenancy of the instance (if the instance is running in a VPC).
+ returned: always
+ type: str
+ sample: default
+ private_dns_name:
+ description: The private DNS name.
+ returned: always
+ type: str
+ sample: ip-10-0-0-1.ap-southeast-2.compute.internal
+ private_ip_address:
+ description: The IPv4 address of the network interface within the subnet.
+ returned: always
+ type: str
+ sample: 10.0.0.1
+ product_codes:
+ description: One or more product codes.
+ returned: always
+ type: list
+ elements: dict
+ contains:
+ product_code_id:
+ description: The product code.
+ returned: always
+ type: str
+ sample: aw0evgkw8ef3n2498gndfgasdfsd5cce
+ product_code_type:
+ description: The type of product code.
+ returned: always
+ type: str
+ sample: marketplace
+ public_dns_name:
+ description: The public DNS name assigned to the instance.
+ returned: always
+ type: str
+ sample:
+ public_ip_address:
+ description: The public IPv4 address assigned to the instance.
+ returned: always
+ type: str
+ sample: 52.0.0.1
+ root_device_name:
+ description: The device name of the root device.
+ returned: always
+ type: str
+ sample: /dev/sda1
+ root_device_type:
+ description: The type of root device used by the AMI.
+ returned: always
+ type: str
+ sample: ebs
+ security_groups:
+ description: One or more security groups for the instance.
+ returned: always
+ type: list
+ elements: dict
+ contains:
+ group_id:
+ description: The ID of the security group.
+ returned: always
+ type: str
+ sample: sg-0123456
+ group_name:
+ description: The name of the security group.
+ returned: always
+ type: str
+ sample: my-security-group
+ network.source_dest_check:
+ description: Indicates whether source/destination checking is enabled.
+ returned: always
+ type: bool
+ sample: true
+ state:
+ description: The current state of the instance.
+ returned: always
+ type: complex
+ contains:
+ code:
+ description: The low byte represents the state.
+ returned: always
+ type: int
+ sample: 16
+ name:
+ description: The name of the state.
+ returned: always
+ type: str
+ sample: running
+ state_transition_reason:
+ description: The reason for the most recent state transition.
+ returned: always
+ type: str
+ sample:
+ subnet_id:
+ description: The ID of the subnet in which the instance is running.
+ returned: always
+ type: str
+ sample: subnet-00abcdef
+ tags:
+ description: Any tags assigned to the instance.
+ returned: always
+ type: dict
+ sample:
+ virtualization_type:
+ description: The type of virtualization of the AMI.
+ returned: always
+ type: str
+ sample: hvm
+ vpc_id:
+ description: The ID of the VPC the instance is in.
+ returned: always
+ type: str
+ sample: vpc-0011223344
+'''
+
+from collections import namedtuple
+import re
+import string
+import textwrap
+import time
+import uuid
+
+try:
+ import botocore
+except ImportError:
+ pass # caught by AnsibleAWSModule
+
+from ansible.module_utils._text import to_native
+from ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict
+from ansible.module_utils.common.dict_transformations import snake_dict_to_camel_dict
+from ansible.module_utils.six import string_types
+from ansible.module_utils.six.moves.urllib import parse as urlparse
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_code
+from ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_message
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import ansible_dict_to_boto3_filter_list
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import ensure_ec2_tags
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import get_ec2_security_group_ids_from_names
+from ansible_collections.amazon.aws.plugins.module_utils.tagging import boto3_tag_list_to_ansible_dict
+from ansible_collections.amazon.aws.plugins.module_utils.tagging import boto3_tag_specifications
+
+module = None
+
+
+def tower_callback_script(tower_conf, windows=False, passwd=None):
+ script_url = 'https://raw.githubusercontent.com/ansible/ansible/devel/examples/scripts/ConfigureRemotingForAnsible.ps1'
+ if windows and passwd is not None:
+ script_tpl = """
+ $admin = [adsi]("WinNT://./administrator, user")
+ $admin.PSBase.Invoke("SetPassword", "{PASS}")
+ Invoke-Expression ((New-Object System.Net.Webclient).DownloadString('{SCRIPT}'))
+
+ """
+ return to_native(textwrap.dedent(script_tpl).format(PASS=passwd, SCRIPT=script_url))
+ elif windows and passwd is None:
+ script_tpl = """
+ $admin = [adsi]("WinNT://./administrator, user")
+ Invoke-Expression ((New-Object System.Net.Webclient).DownloadString('{SCRIPT}'))
+
+ """
+ return to_native(textwrap.dedent(script_tpl).format(PASS=passwd, SCRIPT=script_url))
+ elif not windows:
+ for p in ['tower_address', 'job_template_id', 'host_config_key']:
+ if p not in tower_conf:
+ module.fail_json(msg="Incomplete tower_callback configuration. tower_callback.{0} not set.".format(p))
+
+ if isinstance(tower_conf['job_template_id'], string_types):
+ tower_conf['job_template_id'] = urlparse.quote(tower_conf['job_template_id'])
+ tpl = string.Template(textwrap.dedent("""#!/bin/bash
+ set -x
+
+ retry_attempts=10
+ attempt=0
+ while [[ $attempt -lt $retry_attempts ]]
+ do
+ status_code=`curl --max-time 10 -v -k -s -i \
+ --data "host_config_key=${host_config_key}" \
+ 'https://${tower_address}/api/v2/job_templates/${template_id}/callback/' \
+ | head -n 1 \
+ | awk '{print $2}'`
+ if [[ $status_code == 404 ]]
+ then
+ status_code=`curl --max-time 10 -v -k -s -i \
+ --data "host_config_key=${host_config_key}" \
+ 'https://${tower_address}/api/v1/job_templates/${template_id}/callback/' \
+ | head -n 1 \
+ | awk '{print $2}'`
+ # fall back to using V1 API for Tower 3.1 and below, since v2 API will always 404
+ fi
+ if [[ $status_code == 201 ]]
+ then
+ exit 0
+ fi
+ attempt=$(( attempt + 1 ))
+ echo "$${status_code} received... retrying in 1 minute. (Attempt $${attempt})"
+ sleep 60
+ done
+ exit 1
+ """))
+ return tpl.safe_substitute(tower_address=tower_conf['tower_address'],
+ template_id=tower_conf['job_template_id'],
+ host_config_key=tower_conf['host_config_key'])
+ raise NotImplementedError("Only windows with remote-prep or non-windows with tower job callback supported so far.")
+
+
+def build_volume_spec(params):
+ volumes = params.get('volumes') or []
+ for volume in volumes:
+ if 'ebs' in volume:
+ for int_value in ['volume_size', 'iops']:
+ if int_value in volume['ebs']:
+ volume['ebs'][int_value] = int(volume['ebs'][int_value])
+ if 'volume_type' in volume['ebs'] and volume['ebs']['volume_type'] == 'gp3':
+ if not volume['ebs'].get('iops'):
+ volume['ebs']['iops'] = 3000
+ if 'throughput' in volume['ebs']:
+ module.require_botocore_at_least('1.19.27', reason='to set throughput value')
+ volume['ebs']['throughput'] = int(volume['ebs']['throughput'])
+ else:
+ volume['ebs']['throughput'] = 125
+
+ return [snake_dict_to_camel_dict(v, capitalize_first=True) for v in volumes]
+
+
+def add_or_update_instance_profile(instance, desired_profile_name):
+ instance_profile_setting = instance.get('IamInstanceProfile')
+ if instance_profile_setting and desired_profile_name:
+ if desired_profile_name in (instance_profile_setting.get('Name'), instance_profile_setting.get('Arn')):
+ # great, the profile we asked for is what's there
+ return False
+ else:
+ desired_arn = determine_iam_role(desired_profile_name)
+ if instance_profile_setting.get('Arn') == desired_arn:
+ return False
+
+ # update association
+ try:
+ association = client.describe_iam_instance_profile_associations(
+ aws_retry=True,
+ Filters=[{'Name': 'instance-id', 'Values': [instance['InstanceId']]}])
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ # check for InvalidAssociationID.NotFound
+ module.fail_json_aws(e, "Could not find instance profile association")
+ try:
+ resp = client.replace_iam_instance_profile_association(
+ aws_retry=True,
+ AssociationId=association['IamInstanceProfileAssociations'][0]['AssociationId'],
+ IamInstanceProfile={'Arn': determine_iam_role(desired_profile_name)}
+ )
+ return True
+ except botocore.exceptions.ClientError as e:
+ module.fail_json_aws(e, "Could not associate instance profile")
+
+ if not instance_profile_setting and desired_profile_name:
+ # create association
+ try:
+ resp = client.associate_iam_instance_profile(
+ aws_retry=True,
+ IamInstanceProfile={'Arn': determine_iam_role(desired_profile_name)},
+ InstanceId=instance['InstanceId']
+ )
+ return True
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, "Could not associate new instance profile")
+
+ return False
+
+
+def build_network_spec(params):
+ """
+ Returns list of interfaces [complex]
+ Interface type: {
+ 'AssociatePublicIpAddress': True|False,
+ 'DeleteOnTermination': True|False,
+ 'Description': 'string',
+ 'DeviceIndex': 123,
+ 'Groups': [
+ 'string',
+ ],
+ 'Ipv6AddressCount': 123,
+ 'Ipv6Addresses': [
+ {
+ 'Ipv6Address': 'string'
+ },
+ ],
+ 'NetworkInterfaceId': 'string',
+ 'PrivateIpAddress': 'string',
+ 'PrivateIpAddresses': [
+ {
+ 'Primary': True|False,
+ 'PrivateIpAddress': 'string'
+ },
+ ],
+ 'SecondaryPrivateIpAddressCount': 123,
+ 'SubnetId': 'string'
+ },
+ """
+
+ interfaces = []
+ network = params.get('network') or {}
+ if not network.get('interfaces'):
+ # they only specified one interface
+ spec = {
+ 'DeviceIndex': 0,
+ }
+ if network.get('assign_public_ip') is not None:
+ spec['AssociatePublicIpAddress'] = network['assign_public_ip']
+
+ if params.get('vpc_subnet_id'):
+ spec['SubnetId'] = params['vpc_subnet_id']
+ else:
+ default_vpc = get_default_vpc()
+ if default_vpc is None:
+ module.fail_json(
+ msg="No default subnet could be found - you must include a VPC subnet ID (vpc_subnet_id parameter) to create an instance")
+ else:
+ sub = get_default_subnet(default_vpc)
+ spec['SubnetId'] = sub['SubnetId']
+
+ if network.get('private_ip_address'):
+ spec['PrivateIpAddress'] = network['private_ip_address']
+
+ if params.get('security_group') or params.get('security_groups'):
+ groups = discover_security_groups(
+ group=params.get('security_group'),
+ groups=params.get('security_groups'),
+ subnet_id=spec['SubnetId'],
+ )
+ spec['Groups'] = groups
+ if network.get('description') is not None:
+ spec['Description'] = network['description']
+ # TODO more special snowflake network things
+
+ return [spec]
+
+ # handle list of `network.interfaces` options
+ for idx, interface_params in enumerate(network.get('interfaces', [])):
+ spec = {
+ 'DeviceIndex': idx,
+ }
+
+ if isinstance(interface_params, string_types):
+ # naive case where user gave
+ # network_interfaces: [eni-1234, eni-4567, ....]
+ # put into normal data structure so we don't dupe code
+ interface_params = {'id': interface_params}
+
+ if interface_params.get('id') is not None:
+ # if an ID is provided, we don't want to set any other parameters.
+ spec['NetworkInterfaceId'] = interface_params['id']
+ interfaces.append(spec)
+ continue
+
+ spec['DeleteOnTermination'] = interface_params.get('delete_on_termination', True)
+
+ if interface_params.get('ipv6_addresses'):
+ spec['Ipv6Addresses'] = [{'Ipv6Address': a} for a in interface_params.get('ipv6_addresses', [])]
+
+ if interface_params.get('private_ip_address'):
+ spec['PrivateIpAddress'] = interface_params.get('private_ip_address')
+
+ if interface_params.get('description'):
+ spec['Description'] = interface_params.get('description')
+
+ if interface_params.get('subnet_id', params.get('vpc_subnet_id')):
+ spec['SubnetId'] = interface_params.get('subnet_id', params.get('vpc_subnet_id'))
+ elif not spec.get('SubnetId') and not interface_params['id']:
+ # TODO grab a subnet from default VPC
+ raise ValueError('Failed to assign subnet to interface {0}'.format(interface_params))
+
+ interfaces.append(spec)
+ return interfaces
+
+
+def warn_if_public_ip_assignment_changed(instance):
+ # This is a non-modifiable attribute.
+ assign_public_ip = (module.params.get('network') or {}).get('assign_public_ip')
+ if assign_public_ip is None:
+ return
+
+ # Check that public ip assignment is the same and warn if not
+ public_dns_name = instance.get('PublicDnsName')
+ if (public_dns_name and not assign_public_ip) or (assign_public_ip and not public_dns_name):
+ module.warn(
+ "Unable to modify public ip assignment to {0} for instance {1}. "
+ "Whether or not to assign a public IP is determined during instance creation.".format(
+ assign_public_ip, instance['InstanceId']))
+
+
+def warn_if_cpu_options_changed(instance):
+ # This is a non-modifiable attribute.
+ cpu_options = module.params.get('cpu_options')
+ if cpu_options is None:
+ return
+
+ # Check that the CpuOptions set are the same and warn if not
+ core_count_curr = instance['CpuOptions'].get('CoreCount')
+ core_count = cpu_options.get('core_count')
+ threads_per_core_curr = instance['CpuOptions'].get('ThreadsPerCore')
+ threads_per_core = cpu_options.get('threads_per_core')
+ if core_count_curr != core_count:
+ module.warn(
+ "Unable to modify core_count from {0} to {1}. "
+ "Assigning a number of cores is determined during instance creation.".format(
+ core_count_curr, core_count))
+
+ if threads_per_core_curr != threads_per_core:
+ module.warn(
+ "Unable to modify threads_per_core from {0} to {1}. "
+ "Assigning a number of threads per core is determined during instance creation.".format(
+ threads_per_core_curr, threads_per_core))
+
+
+def discover_security_groups(group, groups, parent_vpc_id=None, subnet_id=None):
+
+ if subnet_id is not None:
+ try:
+ sub = client.describe_subnets(aws_retry=True, SubnetIds=[subnet_id])
+ except is_boto3_error_code('InvalidGroup.NotFound'):
+ module.fail_json(
+ "Could not find subnet {0} to associate security groups. Please check the vpc_subnet_id and security_groups parameters.".format(
+ subnet_id
+ )
+ )
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e: # pylint: disable=duplicate-except
+ module.fail_json_aws(e, msg="Error while searching for subnet {0} parent VPC.".format(subnet_id))
+ parent_vpc_id = sub['Subnets'][0]['VpcId']
+
+ if group:
+ return get_ec2_security_group_ids_from_names(group, client, vpc_id=parent_vpc_id)
+ if groups:
+ return get_ec2_security_group_ids_from_names(groups, client, vpc_id=parent_vpc_id)
+ return []
+
+
+def build_top_level_options(params):
+ spec = {}
+ if params.get('image_id'):
+ spec['ImageId'] = params['image_id']
+ elif isinstance(params.get('image'), dict):
+ image = params.get('image', {})
+ spec['ImageId'] = image.get('id')
+ if 'ramdisk' in image:
+ spec['RamdiskId'] = image['ramdisk']
+ if 'kernel' in image:
+ spec['KernelId'] = image['kernel']
+ if not spec.get('ImageId') and not params.get('launch_template'):
+ module.fail_json(msg="You must include an image_id or image.id parameter to create an instance, or use a launch_template.")
+
+ if params.get('key_name') is not None:
+ spec['KeyName'] = params.get('key_name')
+ if params.get('user_data') is not None:
+ spec['UserData'] = to_native(params.get('user_data'))
+ elif params.get('tower_callback') is not None:
+ spec['UserData'] = tower_callback_script(
+ tower_conf=params.get('tower_callback'),
+ windows=params.get('tower_callback').get('windows', False),
+ passwd=params.get('tower_callback').get('set_password'),
+ )
+
+ if params.get('launch_template') is not None:
+ spec['LaunchTemplate'] = {}
+ if not (params.get('launch_template').get('id') or params.get('launch_template').get('name')):
+ module.fail_json(msg="Could not create instance with launch template. Either launch_template.name or launch_template.id parameters are required")
+
+ if params.get('launch_template').get('id') is not None:
+ spec['LaunchTemplate']['LaunchTemplateId'] = params.get('launch_template').get('id')
+ if params.get('launch_template').get('name') is not None:
+ spec['LaunchTemplate']['LaunchTemplateName'] = params.get('launch_template').get('name')
+ if params.get('launch_template').get('version') is not None:
+ spec['LaunchTemplate']['Version'] = to_native(params.get('launch_template').get('version'))
+
+ if params.get('detailed_monitoring', False):
+ spec['Monitoring'] = {'Enabled': True}
+ if params.get('cpu_credit_specification') is not None:
+ spec['CreditSpecification'] = {'CpuCredits': params.get('cpu_credit_specification')}
+ if params.get('tenancy') is not None:
+ spec['Placement'] = {'Tenancy': params.get('tenancy')}
+ if params.get('placement_group'):
+ if 'Placement' in spec:
+ spec['Placement']['GroupName'] = str(params.get('placement_group'))
+ else:
+ spec.setdefault('Placement', {'GroupName': str(params.get('placement_group'))})
+ if params.get('ebs_optimized') is not None:
+ spec['EbsOptimized'] = params.get('ebs_optimized')
+ if params.get('instance_initiated_shutdown_behavior'):
+ spec['InstanceInitiatedShutdownBehavior'] = params.get('instance_initiated_shutdown_behavior')
+ if params.get('termination_protection') is not None:
+ spec['DisableApiTermination'] = params.get('termination_protection')
+ if params.get('cpu_options') is not None:
+ spec['CpuOptions'] = {}
+ spec['CpuOptions']['ThreadsPerCore'] = params.get('cpu_options').get('threads_per_core')
+ spec['CpuOptions']['CoreCount'] = params.get('cpu_options').get('core_count')
+ if params.get('metadata_options'):
+ spec['MetadataOptions'] = {}
+ spec['MetadataOptions']['HttpEndpoint'] = params.get(
+ 'metadata_options').get('http_endpoint')
+ spec['MetadataOptions']['HttpTokens'] = params.get(
+ 'metadata_options').get('http_tokens')
+ return spec
+
+
+def build_instance_tags(params, propagate_tags_to_volumes=True):
+ tags = params.get('tags') or {}
+ if params.get('name') is not None:
+ tags['Name'] = params.get('name')
+ specs = boto3_tag_specifications(tags, ['volume', 'instance'])
+ return specs
+
+
+def build_run_instance_spec(params):
+
+ spec = dict(
+ ClientToken=uuid.uuid4().hex,
+ MaxCount=1,
+ MinCount=1,
+ )
+ # network parameters
+ spec['NetworkInterfaces'] = build_network_spec(params)
+ spec['BlockDeviceMappings'] = build_volume_spec(params)
+ spec.update(**build_top_level_options(params))
+ spec['TagSpecifications'] = build_instance_tags(params)
+
+ # IAM profile
+ if params.get('instance_role'):
+ spec['IamInstanceProfile'] = dict(Arn=determine_iam_role(params.get('instance_role')))
+
+ spec['InstanceType'] = params['instance_type']
+ return spec
+
+
+def await_instances(ids, desired_module_state='present', force_wait=False):
+ if not module.params.get('wait', True) and not force_wait:
+ # the user asked not to wait for anything
+ return
+
+ if module.check_mode:
+ # In check mode, there is no change even if you wait.
+ return
+
+ # Map ansible state to boto3 waiter type
+ state_to_boto3_waiter = {
+ 'present': 'instance_exists',
+ 'started': 'instance_status_ok',
+ 'running': 'instance_running',
+ 'stopped': 'instance_stopped',
+ 'restarted': 'instance_status_ok',
+ 'rebooted': 'instance_running',
+ 'terminated': 'instance_terminated',
+ 'absent': 'instance_terminated',
+ }
+ if desired_module_state not in state_to_boto3_waiter:
+ module.fail_json(msg="Cannot wait for state {0}, invalid state".format(desired_module_state))
+ boto3_waiter_type = state_to_boto3_waiter[desired_module_state]
+ waiter = client.get_waiter(boto3_waiter_type)
+ try:
+ waiter.wait(
+ InstanceIds=ids,
+ WaiterConfig={
+ 'Delay': 15,
+ 'MaxAttempts': module.params.get('wait_timeout', 600) // 15,
+ }
+ )
+ except botocore.exceptions.WaiterConfigError as e:
+ module.fail_json(msg="{0}. Error waiting for instances {1} to reach state {2}".format(
+ to_native(e), ', '.join(ids), boto3_waiter_type))
+ except botocore.exceptions.WaiterError as e:
+ module.warn("Instances {0} took too long to reach state {1}. {2}".format(
+ ', '.join(ids), boto3_waiter_type, to_native(e)))
+
+
+def diff_instance_and_params(instance, params, skip=None):
+ """boto3 instance obj, module params"""
+
+ if skip is None:
+ skip = []
+
+ changes_to_apply = []
+ id_ = instance['InstanceId']
+
+ ParamMapper = namedtuple('ParamMapper', ['param_key', 'instance_key', 'attribute_name', 'add_value'])
+
+ def value_wrapper(v):
+ return {'Value': v}
+
+ param_mappings = [
+ ParamMapper('ebs_optimized', 'EbsOptimized', 'ebsOptimized', value_wrapper),
+ ParamMapper('termination_protection', 'DisableApiTermination', 'disableApiTermination', value_wrapper),
+ # user data is an immutable property
+ # ParamMapper('user_data', 'UserData', 'userData', value_wrapper),
+ ]
+
+ for mapping in param_mappings:
+ if params.get(mapping.param_key) is None:
+ continue
+ if mapping.instance_key in skip:
+ continue
+
+ try:
+ value = client.describe_instance_attribute(aws_retry=True, Attribute=mapping.attribute_name, InstanceId=id_)
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Could not describe attribute {0} for instance {1}".format(mapping.attribute_name, id_))
+ if value[mapping.instance_key]['Value'] != params.get(mapping.param_key):
+ arguments = dict(
+ InstanceId=instance['InstanceId'],
+ # Attribute=mapping.attribute_name,
+ )
+ arguments[mapping.instance_key] = mapping.add_value(params.get(mapping.param_key))
+ changes_to_apply.append(arguments)
+
+ if params.get('security_group') or params.get('security_groups'):
+ try:
+ value = client.describe_instance_attribute(aws_retry=True, Attribute="groupSet", InstanceId=id_)
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Could not describe attribute groupSet for instance {0}".format(id_))
+ # managing security groups
+ if params.get('vpc_subnet_id'):
+ subnet_id = params.get('vpc_subnet_id')
+ else:
+ default_vpc = get_default_vpc()
+ if default_vpc is None:
+ module.fail_json(
+ msg="No default subnet could be found - you must include a VPC subnet ID (vpc_subnet_id parameter) to modify security groups.")
+ else:
+ sub = get_default_subnet(default_vpc)
+ subnet_id = sub['SubnetId']
+
+ groups = discover_security_groups(
+ group=params.get('security_group'),
+ groups=params.get('security_groups'),
+ subnet_id=subnet_id,
+ )
+ expected_groups = groups
+ instance_groups = [g['GroupId'] for g in value['Groups']]
+ if set(instance_groups) != set(expected_groups):
+ changes_to_apply.append(dict(
+ Groups=expected_groups,
+ InstanceId=instance['InstanceId']
+ ))
+
+ if (params.get('network') or {}).get('source_dest_check') is not None:
+ # network.source_dest_check is nested, so needs to be treated separately
+ check = bool(params.get('network').get('source_dest_check'))
+ if instance['SourceDestCheck'] != check:
+ changes_to_apply.append(dict(
+ InstanceId=instance['InstanceId'],
+ SourceDestCheck={'Value': check},
+ ))
+
+ return changes_to_apply
+
+
+def change_network_attachments(instance, params):
+ if (params.get('network') or {}).get('interfaces') is not None:
+ new_ids = []
+ for inty in params.get('network').get('interfaces'):
+ if isinstance(inty, dict) and 'id' in inty:
+ new_ids.append(inty['id'])
+ elif isinstance(inty, string_types):
+ new_ids.append(inty)
+ # network.interfaces can create the need to attach new interfaces
+ old_ids = [inty['NetworkInterfaceId'] for inty in instance['NetworkInterfaces']]
+ to_attach = set(new_ids) - set(old_ids)
+ for eni_id in to_attach:
+ try:
+ client.attach_network_interface(
+ aws_retry=True,
+ DeviceIndex=new_ids.index(eni_id),
+ InstanceId=instance['InstanceId'],
+ NetworkInterfaceId=eni_id,
+ )
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Could not attach interface {0} to instance {1}".format(eni_id, instance['InstanceId']))
+ return bool(len(to_attach))
+ return False
+
+
+def find_instances(ids=None, filters=None):
+ paginator = client.get_paginator('describe_instances')
+ if ids:
+ params = dict(InstanceIds=ids)
+ elif filters is None:
+ module.fail_json(msg="No filters provided when they were required")
+ else:
+ for key in list(filters.keys()):
+ if not key.startswith("tag:"):
+ filters[key.replace("_", "-")] = filters.pop(key)
+ params = dict(Filters=ansible_dict_to_boto3_filter_list(filters))
+
+ try:
+ results = _describe_instances(**params)
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Could not describe instances")
+ retval = list(results)
+ return retval
+
+
+@AWSRetry.jittered_backoff()
+def _describe_instances(**params):
+ paginator = client.get_paginator('describe_instances')
+ return paginator.paginate(**params).search('Reservations[].Instances[]')
+
+
+def get_default_vpc():
+ try:
+ vpcs = client.describe_vpcs(
+ aws_retry=True,
+ Filters=ansible_dict_to_boto3_filter_list({'isDefault': 'true'}))
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Could not describe default VPC")
+ if len(vpcs.get('Vpcs', [])):
+ return vpcs.get('Vpcs')[0]
+ return None
+
+
+def get_default_subnet(vpc, availability_zone=None):
+ try:
+ subnets = client.describe_subnets(
+ aws_retry=True,
+ Filters=ansible_dict_to_boto3_filter_list({
+ 'vpc-id': vpc['VpcId'],
+ 'state': 'available',
+ 'default-for-az': 'true',
+ })
+ )
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Could not describe default subnets for VPC {0}".format(vpc['VpcId']))
+ if len(subnets.get('Subnets', [])):
+ if availability_zone is not None:
+ subs_by_az = dict((subnet['AvailabilityZone'], subnet) for subnet in subnets.get('Subnets'))
+ if availability_zone in subs_by_az:
+ return subs_by_az[availability_zone]
+
+ # to have a deterministic sorting order, we sort by AZ so we'll always pick the `a` subnet first
+ # there can only be one default-for-az subnet per AZ, so the AZ key is always unique in this list
+ by_az = sorted(subnets.get('Subnets'), key=lambda s: s['AvailabilityZone'])
+ return by_az[0]
+ return None
+
+
+def ensure_instance_state(desired_module_state):
+ """
+ Sets return keys depending on the desired instance state
+ """
+ results = dict()
+ changed = False
+ if desired_module_state in ('running', 'started'):
+ _changed, failed, instances, failure_reason = change_instance_state(
+ filters=module.params.get('filters'), desired_module_state=desired_module_state)
+ changed |= bool(len(_changed))
+
+ if failed:
+ module.fail_json(
+ msg="Unable to start instances: {0}".format(failure_reason),
+ reboot_success=list(_changed),
+ reboot_failed=failed)
+
+ results = dict(
+ msg='Instances started',
+ start_success=list(_changed),
+ start_failed=[],
+ # Avoid breaking things: 'reboot' is wrong here, but these keys used to be returned
+ reboot_success=list(_changed),
+ reboot_failed=[],
+ changed=changed,
+ instances=[pretty_instance(i) for i in instances],
+ )
+ elif desired_module_state in ('restarted', 'rebooted'):
+ # https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-reboot.html
+ # The Ansible behaviour of issuing a stop/start has a minor impact on user billing
+ # This will need to be changelogged if we ever change to client.reboot_instance
+ _changed, failed, instances, failure_reason = change_instance_state(
+ filters=module.params.get('filters'),
+ desired_module_state='stopped',
+ )
+ changed |= bool(len(_changed))
+ _changed, failed, instances, failure_reason = change_instance_state(
+ filters=module.params.get('filters'),
+ desired_module_state=desired_module_state,
+ )
+ changed |= bool(len(_changed))
+
+ if failed:
+ module.fail_json(
+ msg="Unable to restart instances: {0}".format(failure_reason),
+ reboot_success=list(_changed),
+ reboot_failed=failed)
+
+ results = dict(
+ msg='Instances restarted',
+ reboot_success=list(_changed),
+ changed=changed,
+ reboot_failed=[],
+ instances=[pretty_instance(i) for i in instances],
+ )
+ elif desired_module_state in ('stopped',):
+ _changed, failed, instances, failure_reason = change_instance_state(
+ filters=module.params.get('filters'),
+ desired_module_state=desired_module_state,
+ )
+ changed |= bool(len(_changed))
+
+ if failed:
+ module.fail_json(
+ msg="Unable to stop instances: {0}".format(failure_reason),
+ stop_success=list(_changed),
+ stop_failed=failed)
+
+ results = dict(
+ msg='Instances stopped',
+ stop_success=list(_changed),
+ changed=changed,
+ stop_failed=[],
+ instances=[pretty_instance(i) for i in instances],
+ )
+ elif desired_module_state in ('absent', 'terminated'):
+ terminated, terminate_failed, instances, failure_reason = change_instance_state(
+ filters=module.params.get('filters'),
+ desired_module_state=desired_module_state,
+ )
+
+ if terminate_failed:
+ module.fail_json(
+ msg="Unable to terminate instances: {0}".format(failure_reason),
+ terminate_success=list(terminated),
+ terminate_failed=terminate_failed)
+ results = dict(
+ msg='Instances terminated',
+ terminate_success=list(terminated),
+ changed=bool(len(terminated)),
+ terminate_failed=[],
+ instances=[pretty_instance(i) for i in instances],
+ )
+ return results
+
+
+def change_instance_state(filters, desired_module_state):
+
+ # Map ansible state to ec2 state
+ ec2_instance_states = {
+ 'present': 'running',
+ 'started': 'running',
+ 'running': 'running',
+ 'stopped': 'stopped',
+ 'restarted': 'running',
+ 'rebooted': 'running',
+ 'terminated': 'terminated',
+ 'absent': 'terminated',
+ }
+ desired_ec2_state = ec2_instance_states[desired_module_state]
+ changed = set()
+ instances = find_instances(filters=filters)
+ to_change = set(i['InstanceId'] for i in instances if i['State']['Name'] != desired_ec2_state)
+ unchanged = set()
+ failure_reason = ""
+
+ for inst in instances:
+ try:
+ if desired_ec2_state == 'terminated':
+ # Before terminating an instance, we need to wait for it to leave
+ # 'pending' or 'stopping' (if it is in one of those states)
+ if inst['State']['Name'] == 'stopping':
+ await_instances([inst['InstanceId']], desired_module_state='stopped', force_wait=True)
+ elif inst['State']['Name'] == 'pending':
+ await_instances([inst['InstanceId']], desired_module_state='running', force_wait=True)
+
+ if module.check_mode:
+ changed.add(inst['InstanceId'])
+ continue
+
+ # TODO use a client-token to prevent double-sends of these start/stop/terminate commands
+ # https://docs.aws.amazon.com/AWSEC2/latest/APIReference/Run_Instance_Idempotency.html
+ resp = client.terminate_instances(aws_retry=True, InstanceIds=[inst['InstanceId']])
+ changed.update(i['InstanceId'] for i in resp['TerminatingInstances'])
+ if desired_ec2_state == 'stopped':
+ # Before stopping an instance, wait for it to leave the
+ # 'pending' state
+ if inst['State']['Name'] == 'pending':
+ await_instances([inst['InstanceId']], desired_module_state='running', force_wait=True)
+ # Already moving to the relevant state
+ elif inst['State']['Name'] in ('stopping', 'stopped'):
+ unchanged.add(inst['InstanceId'])
+ continue
+
+ if module.check_mode:
+ changed.add(inst['InstanceId'])
+ continue
+ resp = client.stop_instances(aws_retry=True, InstanceIds=[inst['InstanceId']])
+ changed.update(i['InstanceId'] for i in resp['StoppingInstances'])
+ if desired_ec2_state == 'running':
+ if inst['State']['Name'] in ('pending', 'running'):
+ unchanged.add(inst['InstanceId'])
+ continue
+ elif inst['State']['Name'] == 'stopping':
+ await_instances([inst['InstanceId']], desired_module_state='stopped', force_wait=True)
+
+ if module.check_mode:
+ changed.add(inst['InstanceId'])
+ continue
+
+ resp = client.start_instances(aws_retry=True, InstanceIds=[inst['InstanceId']])
+ changed.update(i['InstanceId'] for i in resp['StartingInstances'])
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ try:
+ failure_reason = to_native(e.message)
+ except AttributeError:
+ failure_reason = to_native(e)
+
+ if changed:
+ await_instances(ids=list(changed) + list(unchanged), desired_module_state=desired_module_state)
+
+ change_failed = list(to_change - changed)
+
+ if instances:
+ instances = find_instances(ids=list(i['InstanceId'] for i in instances))
+ return changed, change_failed, instances, failure_reason
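The function above first maps the Ansible module state onto a target EC2 state, then skips instances that are already in (or already moving toward) that state. That decision logic can be sketched on its own; the helper name and structure here are illustrative, not the module's API:

```python
# Sketch of the ansible-state -> EC2-state mapping and the decision of
# whether an instance needs a transition. Illustrative only.
EC2_INSTANCE_STATES = {
    'present': 'running',
    'started': 'running',
    'running': 'running',
    'stopped': 'stopped',
    'restarted': 'running',
    'rebooted': 'running',
    'terminated': 'terminated',
    'absent': 'terminated',
}


def needs_change(current_ec2_state, desired_module_state):
    """Return True when an instance must be transitioned to satisfy
    desired_module_state."""
    desired = EC2_INSTANCE_STATES[desired_module_state]
    # Instances already moving toward the desired state count as unchanged,
    # mirroring the 'pending'/'stopping' short-circuits in the module.
    in_flight = {'running': ('pending',), 'stopped': ('stopping',)}
    if current_ec2_state == desired:
        return False
    if current_ec2_state in in_flight.get(desired, ()):
        return False
    return True
```

Note that `terminated` has no in-flight shortcut: the module instead waits for `pending`/`stopping` instances to settle before calling `terminate_instances`.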
+
+
+def pretty_instance(i):
+ instance = camel_dict_to_snake_dict(i, ignore_list=['Tags'])
+ instance['tags'] = boto3_tag_list_to_ansible_dict(i.get('Tags', {}))
+ return instance
+
+
+def determine_iam_role(name_or_arn):
+ if re.match(r'^arn:aws:iam::\d+:instance-profile/[\w+=/,.@-]+$', name_or_arn):
+ return name_or_arn
+ iam = module.client('iam', retry_decorator=AWSRetry.jittered_backoff())
+ try:
+ role = iam.get_instance_profile(InstanceProfileName=name_or_arn, aws_retry=True)
+ return role['InstanceProfile']['Arn']
+ except is_boto3_error_code('NoSuchEntity') as e:
+ module.fail_json_aws(e, msg="Could not find instance_role {0}".format(name_or_arn))
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e: # pylint: disable=duplicate-except
+ module.fail_json_aws(e, msg="An error occurred while searching for instance_role {0}. Please try supplying the full ARN.".format(name_or_arn))
+
+
+def handle_existing(existing_matches, state):
+ tags = dict(module.params.get('tags') or {})
+ name = module.params.get('name')
+ purge_tags = module.params.get('purge_tags', False)
+ if name:
+ tags['Name'] = name
+
+ changed = False
+ all_changes = list()
+
+ for instance in existing_matches:
+ changed |= ensure_ec2_tags(client, module, instance['InstanceId'], tags=tags, purge_tags=purge_tags)
+ changes = diff_instance_and_params(instance, module.params)
+ for c in changes:
+ if not module.check_mode:
+ try:
+ client.modify_instance_attribute(aws_retry=True, **c)
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Could not apply change {0} to existing instance.".format(str(c)))
+ all_changes.extend(changes)
+ changed |= bool(changes)
+ changed |= add_or_update_instance_profile(existing_matches[0], module.params.get('instance_role'))
+ changed |= change_network_attachments(existing_matches[0], module.params)
+
+ altered = find_instances(ids=[i['InstanceId'] for i in existing_matches])
+ alter_config_result = dict(
+ changed=changed,
+ instances=[pretty_instance(i) for i in altered],
+ instance_ids=[i['InstanceId'] for i in altered],
+ changes=all_changes,
+ )
+
+ state_results = ensure_instance_state(state)
+
+ result = {**state_results, **alter_config_result}
+
+ return result
+
+
+def ensure_present(existing_matches, desired_module_state):
+ tags = dict(module.params.get('tags') or {})
+ name = module.params.get('name')
+ if name:
+ tags['Name'] = name
+
+ try:
+ instance_spec = build_run_instance_spec(module.params)
+ # If check mode is enabled, skip the rest of the ensure logic.
+ if module.check_mode:
+ module.exit_json(
+ changed=True,
+ spec=instance_spec,
+ )
+ instance_response = run_instances(**instance_spec)
+ instances = instance_response['Instances']
+ instance_ids = [i['InstanceId'] for i in instances]
+
+ # Wait for instances to exist in the EC2 API before
+ # attempting to modify them
+ await_instances(instance_ids, desired_module_state='present', force_wait=True)
+
+ for ins in instances:
+ # Wait for instances to exist (don't check state)
+ try:
+ AWSRetry.jittered_backoff(
+ catch_extra_error_codes=['InvalidInstanceID.NotFound'],
+ )(
+ client.describe_instance_status
+ )(
+ InstanceIds=[ins['InstanceId']],
+ IncludeAllInstances=True,
+ )
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Failed to fetch status of new EC2 instance")
+ changes = diff_instance_and_params(ins, module.params, skip=['UserData', 'EbsOptimized'])
+ for c in changes:
+ try:
+ client.modify_instance_attribute(aws_retry=True, **c)
+ except botocore.exceptions.ClientError as e:
+ module.fail_json_aws(e, msg="Could not apply change {0} to new instance.".format(str(c)))
+
+ if not module.params.get('wait'):
+ module.exit_json(
+ changed=True,
+ instance_ids=instance_ids,
+ spec=instance_spec,
+ )
+ await_instances(instance_ids, desired_module_state=desired_module_state)
+ instances = find_instances(ids=instance_ids)
+
+ module.exit_json(
+ changed=True,
+ instances=[pretty_instance(i) for i in instances],
+ instance_ids=instance_ids,
+ spec=instance_spec,
+ )
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Failed to create new EC2 instance")
+
+
+def run_instances(**instance_spec):
+ try:
+ return client.run_instances(**instance_spec)
+ except is_boto3_error_message('Invalid IAM Instance Profile ARN'):
+ # If the instance profile has just been created, it takes some time to become visible to EC2,
+ # so wait 10 seconds and retry run_instances once
+ time.sleep(10)
+ return client.run_instances(**instance_spec)
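`run_instances` above works around IAM eventual consistency by retrying exactly once after a delay when a specific error message appears. The same pattern, generalized into a standalone sketch (the helper and its parameters are illustrative, not part of the collection):

```python
import time


def call_with_one_retry(fn, is_transient, delay=10, sleep=time.sleep):
    """Call fn(); if is_transient(exc) classifies the failure as
    eventual-consistency noise, wait and retry exactly once.
    Illustrative helper, not the module's API."""
    try:
        return fn()
    except Exception as exc:
        if not is_transient(exc):
            raise
        sleep(delay)
        # Second failure propagates to the caller unchanged.
        return fn()
```

The `sleep` parameter exists only so the delay can be stubbed out in tests; the module itself calls `time.sleep(10)` directly.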
+
+
+def build_filters():
+ filters = {
+ # all states except shutting-down and terminated
+ 'instance-state-name': ['pending', 'running', 'stopping', 'stopped'],
+ }
+ if isinstance(module.params.get('instance_ids'), string_types):
+ filters['instance-id'] = [module.params.get('instance_ids')]
+ elif isinstance(module.params.get('instance_ids'), list) and len(module.params.get('instance_ids')):
+ filters['instance-id'] = module.params.get('instance_ids')
+ else:
+ if not module.params.get('vpc_subnet_id'):
+ if module.params.get('network'):
+ # grab AZ from one of the ENIs
+ ints = module.params.get('network').get('interfaces')
+ if ints:
+ filters['network-interface.network-interface-id'] = []
+ for i in ints:
+ if isinstance(i, dict):
+ i = i['id']
+ filters['network-interface.network-interface-id'].append(i)
+ else:
+ sub = get_default_subnet(get_default_vpc(), availability_zone=module.params.get('availability_zone'))
+ filters['subnet-id'] = sub['SubnetId']
+ else:
+ filters['subnet-id'] = [module.params.get('vpc_subnet_id')]
+
+ if module.params.get('name'):
+ filters['tag:Name'] = [module.params.get('name')]
+ elif module.params.get('tags'):
+ name_tag = module.params.get('tags').get('Name', None)
+ if name_tag:
+ filters['tag:Name'] = [name_tag]
+
+ if module.params.get('image_id'):
+ filters['image-id'] = [module.params.get('image_id')]
+ elif (module.params.get('image') or {}).get('id'):
+ filters['image-id'] = [module.params.get('image', {}).get('id')]
+ return filters
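`build_filters` returns a plain dict keyed by EC2 filter name, but `describe_instances` expects a list of `{'Name': ..., 'Values': [...]}` entries. A minimal sketch of that conversion (mirroring what helpers like `ansible_dict_to_boto3_filter_list` in the collection's module_utils do; the exact helper behavior is assumed, not verified):

```python
def to_boto3_filter_list(filters_dict):
    """Convert {'instance-state-name': ['running'], 'tag:Name': 'web'}
    into the boto3 Filters list format. Scalar values are wrapped
    into single-element lists."""
    result = []
    for name, values in filters_dict.items():
        if not isinstance(values, list):
            values = [values]
        result.append({'Name': name, 'Values': values})
    return result
```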
+
+
+def main():
+ global module
+ global client
+ argument_spec = dict(
+ state=dict(default='present', choices=['present', 'started', 'running', 'stopped', 'restarted', 'rebooted', 'terminated', 'absent']),
+ wait=dict(default=True, type='bool'),
+ wait_timeout=dict(default=600, type='int'),
+ # count=dict(default=1, type='int'),
+ image=dict(type='dict'),
+ image_id=dict(type='str'),
+ instance_type=dict(default='t2.micro', type='str'),
+ user_data=dict(type='str'),
+ tower_callback=dict(type='dict'),
+ ebs_optimized=dict(type='bool'),
+ vpc_subnet_id=dict(type='str', aliases=['subnet_id']),
+ availability_zone=dict(type='str'),
+ security_groups=dict(default=[], type='list', elements='str'),
+ security_group=dict(type='str'),
+ instance_role=dict(type='str'),
+ name=dict(type='str'),
+ tags=dict(type='dict'),
+ purge_tags=dict(type='bool', default=False),
+ filters=dict(type='dict', default=None),
+ launch_template=dict(type='dict'),
+ key_name=dict(type='str'),
+ cpu_credit_specification=dict(type='str', choices=['standard', 'unlimited']),
+ cpu_options=dict(type='dict', options=dict(
+ core_count=dict(type='int', required=True),
+ threads_per_core=dict(type='int', choices=[1, 2], required=True)
+ )),
+ tenancy=dict(type='str', choices=['dedicated', 'default']),
+ placement_group=dict(type='str'),
+ instance_initiated_shutdown_behavior=dict(type='str', choices=['stop', 'terminate']),
+ termination_protection=dict(type='bool'),
+ detailed_monitoring=dict(type='bool'),
+ instance_ids=dict(default=[], type='list', elements='str'),
+ network=dict(default=None, type='dict'),
+ volumes=dict(default=None, type='list', elements='dict'),
+ metadata_options=dict(type='dict', options=dict(
+ http_endpoint=dict(type='str', choices=['enabled', 'disabled'], default='enabled'),
+ http_tokens=dict(type='str', choices=['optional', 'required'], default='optional'))),
+ )
+ # running/present are synonyms
+ # as are terminated/absent
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ mutually_exclusive=[
+ ['security_groups', 'security_group'],
+ ['availability_zone', 'vpc_subnet_id'],
+ ['tower_callback', 'user_data'],
+ ['image_id', 'image'],
+ ],
+ supports_check_mode=True
+ )
+ result = dict()
+
+ if module.params.get('network'):
+ if module.params.get('network').get('interfaces'):
+ if module.params.get('security_group'):
+ module.fail_json(msg="Parameter network.interfaces can't be used with security_group")
+ if module.params.get('security_groups'):
+ module.fail_json(msg="Parameter network.interfaces can't be used with security_groups")
+
+ state = module.params.get('state')
+
+ retry_decorator = AWSRetry.jittered_backoff(
+ catch_extra_error_codes=[
+ 'IncorrectState',
+ ]
+ )
+ client = module.client('ec2', retry_decorator=retry_decorator)
+
+ if module.params.get('filters') is None:
+ module.params['filters'] = build_filters()
+
+ existing_matches = find_instances(filters=module.params.get('filters'))
+
+ if state in ('terminated', 'absent'):
+ if existing_matches:
+ result = ensure_instance_state(state)
+ else:
+ result = dict(
+ msg='No matching instances found',
+ changed=False,
+ )
+ elif existing_matches:
+ for match in existing_matches:
+ warn_if_public_ip_assignment_changed(match)
+ warn_if_cpu_options_changed(match)
+ result = handle_existing(existing_matches, state)
+ else:
+ result = ensure_present(existing_matches=existing_matches, desired_module_state=state)
+
+ module.exit_json(**result)
+
+
+if __name__ == '__main__':
+ main()
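`main()` above dispatches on two things: the requested state and whether any instances already match the filters. That branching can be summarized in a small sketch (labels are illustrative; they name the module's handler functions but are not its API):

```python
def dispatch(state, has_matches):
    """Mirror main()'s branching: which handler services a request.
    Returns a descriptive label. Illustrative only."""
    if state in ('terminated', 'absent'):
        # Nothing to terminate is a no-op, reported as changed=False.
        return 'ensure_instance_state' if has_matches else 'no_op'
    # Existing instances get reconciled in place; otherwise create new ones.
    return 'handle_existing' if has_matches else 'ensure_present'
```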
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_key.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_key.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_key.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_key.py 2021-11-12 18:13:53.000000000 +0000
@@ -47,12 +47,22 @@
- This option has no effect since version 2.5 and will be removed after 2022-06-01.
type: int
required: false
+ tags:
+ description:
+ - A dictionary of tags to set on the key pair.
+ type: dict
+ version_added: 2.1.0
+ purge_tags:
+ description:
+ - Delete any tags not specified in I(tags).
+ default: false
+ type: bool
+ version_added: 2.1.0
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
-requirements: [ boto3 ]
author:
- "Vincent Viallet (@zbal)"
- "Prasad Katti (@prasadkatti)"
@@ -115,6 +125,16 @@
returned: when state is present
type: str
sample: my_keypair
+ id:
+ description: id of the key pair
+ returned: when state is present
+ type: str
+ sample: key-123456789abc
+ tags:
+ description: a dictionary representing the tags attached to the key pair
+ returned: when state is present
+ type: dict
+ sample: '{"my_key": "my value"}'
private_key:
description: private key of a newly created keypair
returned: when a new keypair is created by AWS (key_material is not provided)
@@ -136,14 +156,21 @@
from ..module_utils.core import AnsibleAWSModule
from ..module_utils.core import is_boto3_error_code
from ..module_utils.ec2 import AWSRetry
+from ..module_utils.ec2 import ensure_ec2_tags
+from ..module_utils.tagging import boto3_tag_specifications
+from ..module_utils.tagging import boto3_tag_list_to_ansible_dict
def extract_key_data(key):
data = {
'name': key['KeyName'],
- 'fingerprint': key['KeyFingerprint']
+ 'fingerprint': key['KeyFingerprint'],
+ 'id': key['KeyPairId'],
+ 'tags': {},
}
+ if 'Tags' in key:
+ data['tags'] = boto3_tag_list_to_ansible_dict(key['Tags'])
if 'KeyMaterial' in key:
data['private_key'] = key['KeyMaterial']
return data
@@ -163,7 +190,7 @@
random_name = "ansible-" + str(uuid.uuid4())
name_in_use = find_key_pair(module, ec2_client, random_name)
- temp_key = import_key_pair(module, ec2_client, random_name, key_material)
+ temp_key = _import_key_pair(module, ec2_client, random_name, key_material)
delete_key_pair(module, ec2_client, random_name, finish_task=False)
return temp_key['KeyFingerprint']
@@ -183,40 +210,54 @@
def create_key_pair(module, ec2_client, name, key_material, force):
+ tags = module.params.get('tags')
+ purge_tags = module.params.get('purge_tags')
key = find_key_pair(module, ec2_client, name)
+ tag_spec = boto3_tag_specifications(tags, ['key-pair'])
+ changed = False
if key:
if key_material and force:
- if not module.check_mode:
- new_fingerprint = get_key_fingerprint(module, ec2_client, key_material)
- if key['KeyFingerprint'] != new_fingerprint:
+ new_fingerprint = get_key_fingerprint(module, ec2_client, key_material)
+ if key['KeyFingerprint'] != new_fingerprint:
+ changed = True
+ if not module.check_mode:
delete_key_pair(module, ec2_client, name, finish_task=False)
- key = import_key_pair(module, ec2_client, name, key_material)
- key_data = extract_key_data(key)
- module.exit_json(changed=True, key=key_data, msg="key pair updated")
- else:
- # Assume a change will be made in check mode since a comparison can't be done
- module.exit_json(changed=True, key=extract_key_data(key), msg="key pair updated")
+ key = _import_key_pair(module, ec2_client, name, key_material, tag_spec)
+ key_data = extract_key_data(key)
+ module.exit_json(changed=True, key=key_data, msg="key pair updated")
+ changed |= ensure_ec2_tags(ec2_client, module, key['KeyPairId'], tags=tags, purge_tags=purge_tags)
+ key = find_key_pair(module, ec2_client, name)
key_data = extract_key_data(key)
- module.exit_json(changed=False, key=key_data, msg="key pair already exists")
+ module.exit_json(changed=changed, key=key_data, msg="key pair already exists")
else:
# key doesn't exist, create it now
key_data = None
if not module.check_mode:
if key_material:
- key = import_key_pair(module, ec2_client, name, key_material)
+ key = _import_key_pair(module, ec2_client, name, key_material, tag_spec)
else:
- try:
- key = ec2_client.create_key_pair(aws_retry=True, KeyName=name)
- except botocore.exceptions.ClientError as err:
- module.fail_json_aws(err, msg="error creating key")
+ key = _create_key_pair(module, ec2_client, name, tag_spec)
key_data = extract_key_data(key)
module.exit_json(changed=True, key=key_data, msg="key pair created")
-def import_key_pair(module, ec2_client, name, key_material):
+def _create_key_pair(module, ec2_client, name, tag_spec):
+ params = dict(KeyName=name)
+ if tag_spec:
+ params['TagSpecifications'] = tag_spec
+ try:
+ key = ec2_client.create_key_pair(aws_retry=True, **params)
+ except botocore.exceptions.ClientError as err:
+ module.fail_json_aws(err, msg="error creating key")
+ return key
+
+def _import_key_pair(module, ec2_client, name, key_material, tag_spec=None):
+ params = dict(KeyName=name, PublicKeyMaterial=to_bytes(key_material))
+ if tag_spec:
+ params['TagSpecifications'] = tag_spec
try:
- key = ec2_client.import_key_pair(aws_retry=True, KeyName=name, PublicKeyMaterial=to_bytes(key_material))
+ key = ec2_client.import_key_pair(aws_retry=True, **params)
except botocore.exceptions.ClientError as err:
module.fail_json_aws(err, msg="error importing key")
return key
@@ -244,6 +285,8 @@
key_material=dict(no_log=False),
force=dict(type='bool', default=True),
state=dict(default='present', choices=['present', 'absent']),
+ tags=dict(type='dict'),
+ purge_tags=dict(type='bool', default=False),
wait=dict(type='bool', removed_at_date='2022-06-01', removed_from_collection='amazon.aws'),
wait_timeout=dict(type='int', removed_at_date='2022-06-01', removed_from_collection='amazon.aws')
)
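The ec2_key changes above tag key pairs at creation time by passing a `TagSpecifications` parameter built from the module's `tags` dict. A sketch of the payload shape (this follows the boto3 `TagSpecifications` format; the exact output of the collection's `boto3_tag_specifications` helper is assumed, not verified):

```python
def build_tag_specs(tags, resource_type):
    """Build the TagSpecifications parameter accepted by
    create_key_pair / import_key_pair. Returns None when there is
    nothing to tag, so callers can omit the parameter entirely."""
    if not tags:
        return None
    tag_list = [{'Key': k, 'Value': v} for k, v in sorted(tags.items())]
    return [{'ResourceType': resource_type, 'Tags': tag_list}]
```

This matches how `_create_key_pair` and `_import_key_pair` only add `TagSpecifications` to `params` when `tag_spec` is truthy.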
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_metadata_facts.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_metadata_facts.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_metadata_facts.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_metadata_facts.py 2021-11-12 18:13:53.000000000 +0000
@@ -119,7 +119,7 @@
ansible_ec2_iam_info_instanceprofilearn:
description: The IAM instance profile ARN.
type: str
- sample: "arn:aws:iam:::instance-profile/"
+ sample: "arn:aws:iam:::instance-profile/role_name"
ansible_ec2_iam_info_instanceprofileid:
description: IAM instance profile ID.
type: str
@@ -132,37 +132,37 @@
description: IAM instance role.
type: str
sample: "role_name"
- ansible_ec2_iam_security_credentials_:
+ ansible_ec2_iam_security_credentials_role_name:
description:
- If there is an IAM role associated with the instance, role-name is the name of the role,
and role-name contains the temporary security credentials associated with the role. Otherwise, not present.
type: str
sample: ""
- ansible_ec2_iam_security_credentials__accesskeyid:
+ ansible_ec2_iam_security_credentials_role_name_accesskeyid:
description: IAM role access key ID.
type: str
sample: ""
- ansible_ec2_iam_security_credentials__code:
+ ansible_ec2_iam_security_credentials_role_name_code:
description: IAM code.
type: str
sample: "Success"
- ansible_ec2_iam_security_credentials__expiration:
+ ansible_ec2_iam_security_credentials_role_name_expiration:
description: IAM role credentials expiration time.
type: str
sample: "2017-05-12T09:11:41Z"
- ansible_ec2_iam_security_credentials__lastupdated:
+ ansible_ec2_iam_security_credentials_role_name_lastupdated:
description: IAM role last updated time.
type: str
sample: "2017-05-12T02:40:44Z"
- ansible_ec2_iam_security_credentials__secretaccesskey:
+ ansible_ec2_iam_security_credentials_role_name_secretaccesskey:
description: IAM role secret access key.
type: str
sample: ""
- ansible_ec2_iam_security_credentials__token:
+ ansible_ec2_iam_security_credentials_role_name_token:
description: IAM role token.
type: str
sample: ""
- ansible_ec2_iam_security_credentials__type:
+ ansible_ec2_iam_security_credentials_role_name_type:
description: IAM role type.
type: str
sample: "AWS-HMAC"
@@ -278,87 +278,87 @@
description: Metrics; no longer available.
type: str
sample: ""
- ansible_ec2_network_interfaces_macs__device_number:
+ ansible_ec2_network_interfaces_macs_mac_address_device_number:
description:
- The unique device number associated with that interface. The device number corresponds to the device name;
for example, a device-number of 2 is for the eth2 device.
- This category corresponds to the DeviceIndex and device-index fields that are used by the Amazon EC2 API and the EC2 commands for the AWS CLI.
type: str
sample: "0"
- ansible_ec2_network_interfaces_macs__interface_id:
+ ansible_ec2_network_interfaces_macs_mac_address_interface_id:
description: The elastic network interface ID.
type: str
sample: "eni-12345678"
- ansible_ec2_network_interfaces_macs__ipv4_associations_:
+ ansible_ec2_network_interfaces_macs_mac_address_ipv4_associations_ip_address:
description: The private IPv4 addresses that are associated with each public-ip address and assigned to that interface.
type: str
sample: ""
- ansible_ec2_network_interfaces_macs__ipv6s:
+ ansible_ec2_network_interfaces_macs_mac_address_ipv6s:
description: The IPv6 addresses associated with the interface. Returned only for instances launched into a VPC.
type: str
sample: ""
- ansible_ec2_network_interfaces_macs__local_hostname:
+ ansible_ec2_network_interfaces_macs_mac_address_local_hostname:
description: The interface's local hostname.
type: str
sample: ""
- ansible_ec2_network_interfaces_macs__local_ipv4s:
+ ansible_ec2_network_interfaces_macs_mac_address_local_ipv4s:
description: The private IPv4 addresses associated with the interface.
type: str
sample: ""
- ansible_ec2_network_interfaces_macs__mac:
+ ansible_ec2_network_interfaces_macs_mac_address_mac:
description: The instance's MAC address.
type: str
sample: "00:11:22:33:44:55"
- ansible_ec2_network_interfaces_macs__owner_id:
+ ansible_ec2_network_interfaces_macs_mac_address_owner_id:
description:
- The ID of the owner of the network interface.
- In multiple-interface environments, an interface can be attached by a third party, such as Elastic Load Balancing.
- Traffic on an interface is always billed to the interface owner.
type: str
sample: "01234567890"
- ansible_ec2_network_interfaces_macs__public_hostname:
+ ansible_ec2_network_interfaces_macs_mac_address_public_hostname:
description:
- The interface's public DNS (IPv4). If the instance is in a VPC,
this category is only returned if the enableDnsHostnames attribute is set to true.
type: str
sample: "ec2-1-2-3-4.compute-1.amazonaws.com"
- ansible_ec2_network_interfaces_macs__public_ipv4s:
+ ansible_ec2_network_interfaces_macs_mac_address_public_ipv4s:
description: The Elastic IP addresses associated with the interface. There may be multiple IPv4 addresses on an instance.
type: str
sample: "1.2.3.4"
- ansible_ec2_network_interfaces_macs__security_group_ids:
+ ansible_ec2_network_interfaces_macs_mac_address_security_group_ids:
description: The IDs of the security groups to which the network interface belongs. Returned only for instances launched into a VPC.
type: str
sample: "sg-01234567,sg-01234568"
- ansible_ec2_network_interfaces_macs__security_groups:
+ ansible_ec2_network_interfaces_macs_mac_address_security_groups:
description: Security groups to which the network interface belongs. Returned only for instances launched into a VPC.
type: str
sample: "secgroup1,secgroup2"
- ansible_ec2_network_interfaces_macs__subnet_id:
+ ansible_ec2_network_interfaces_macs_mac_address_subnet_id:
description: The ID of the subnet in which the interface resides. Returned only for instances launched into a VPC.
type: str
sample: "subnet-01234567"
- ansible_ec2_network_interfaces_macs__subnet_ipv4_cidr_block:
+ ansible_ec2_network_interfaces_macs_mac_address_subnet_ipv4_cidr_block:
description: The IPv4 CIDR block of the subnet in which the interface resides. Returned only for instances launched into a VPC.
type: str
sample: "10.0.1.0/24"
- ansible_ec2_network_interfaces_macs__subnet_ipv6_cidr_blocks:
+ ansible_ec2_network_interfaces_macs_mac_address_subnet_ipv6_cidr_blocks:
description: The IPv6 CIDR block of the subnet in which the interface resides. Returned only for instances launched into a VPC.
type: str
sample: ""
- ansible_ec2_network_interfaces_macs__vpc_id:
+ ansible_ec2_network_interfaces_macs_mac_address_vpc_id:
description: The ID of the VPC in which the interface resides. Returned only for instances launched into a VPC.
type: str
sample: "vpc-0123456"
- ansible_ec2_network_interfaces_macs__vpc_ipv4_cidr_block:
+ ansible_ec2_network_interfaces_macs_mac_address_vpc_ipv4_cidr_block:
description: The IPv4 CIDR block of the VPC in which the interface resides. Returned only for instances launched into a VPC.
type: str
sample: "10.0.0.0/16"
- ansible_ec2_network_interfaces_macs__vpc_ipv4_cidr_blocks:
+ ansible_ec2_network_interfaces_macs_mac_address_vpc_ipv4_cidr_blocks:
description: The IPv4 CIDR block of the VPC in which the interface resides. Returned only for instances launched into a VPC.
type: str
sample: "10.0.0.0/16"
- ansible_ec2_network_interfaces_macs__vpc_ipv6_cidr_blocks:
+ ansible_ec2_network_interfaces_macs_mac_address_vpc_ipv6_cidr_blocks:
description: The IPv6 CIDR block of the VPC in which the interface resides. Returned only for instances launched into a VPC.
type: str
sample: ""
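The documentation changes above replace the empty placeholders in fact names with `role_name` and `mac_address`, because the real fact keys embed the role name or MAC address from the metadata path. A sketch of how an IMDS path could become such a fact name (the sanitization rule, non-alphanumerics to underscores, is an assumption that mirrors the documented examples, not the module's exact code):

```python
import re


def metadata_path_to_fact(path):
    """Turn an IMDS path like
    'network/interfaces/macs/00:11:22:33:44:55/mac' into an
    ansible_ec2_* fact name like those documented above."""
    key = path.strip('/').replace('/', '_').replace('-', '_')
    # Colons in MAC addresses and any other punctuation become underscores.
    key = re.sub(r'[^a-zA-Z0-9_]', '_', key).lower()
    return 'ansible_ec2_' + key
```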
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2.py 2021-11-12 18:13:53.000000000 +0000
@@ -11,6 +11,10 @@
module: ec2
version_added: 1.0.0
short_description: create, terminate, start or stop an instance in ec2
+deprecated:
+ removed_in: 4.0.0
+ why: The ec2 module is based upon a deprecated version of the AWS SDK.
+ alternative: Use M(amazon.aws.ec2_instance).
description:
- Creates or terminates ec2 instances.
- >
@@ -170,7 +174,7 @@
type: str
state:
description:
- - Create, terminate, start, stop or restart instances. The state 'restarted' was added in Ansible 2.2.
+ - Create, terminate, start, stop or restart instances.
- When I(state=absent), I(instance_ids) is required.
- When I(state=running), I(state=stopped) or I(state=restarted) then either I(instance_ids) or I(instance_tags) is required.
default: 'present'
@@ -257,6 +261,9 @@
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
+requirements:
+- python >= 2.6
+- boto
'''
@@ -577,6 +584,297 @@
'''
+RETURN = r'''
+changed:
+ description: If the EC2 instance has changed.
+ type: bool
+ returned: always
+ sample: true
+instances:
+ description: The instances.
+ type: list
+ returned: always
+ contains:
+ ami_launch_index:
+ description: The AMI launch index, which can be used to find this instance in the launch group.
+ type: int
+ returned: always
+ sample: 0
+ architecture:
+ description: The architecture of the image.
+ type: str
+ returned: always
+ sample: "x86_64"
+ block_device_mapping:
+ description: Any block device mapping entries for the instance.
+ type: dict
+ returned: always
+ sample: {
+ "/dev/xvda": {
+ "delete_on_termination": true,
+ "status": "attached",
+ "volume_id": "vol-06d364586f5550b62"
+ }
+ }
+ dns_name:
+ description: The public DNS name assigned to the instance.
+ type: str
+ returned: always
+ sample: "ec2-203-0-113-1.z-2.compute-1.amazonaws.com"
+ ebs_optimized:
+ description: Indicates whether the instance is optimized for Amazon EBS I/O.
+ type: bool
+ returned: always
+ sample: false
+ groups:
+ description: One or more security groups.
+ type: dict
+ returned: always
+ sample: {
+ "sg-0c6562ab3d435619f": "ansible-test--88312190_setup"
+ }
+ hypervisor:
+ description: The hypervisor type of the instance.
+ type: str
+ returned: always
+ sample: "xen"
+ image_id:
+ description: The ID of the AMI used to launch the instance.
+ type: str
+ returned: always
+ sample: "ami-0d5eff06f840b45e9"
+ instance_id:
+ description: The ID of the instance.
+ type: str
+ returned: always
+ sample: "i-0250719204c428be1"
+ instance_type:
+ description: The instance type.
+ type: str
+ returned: always
+ sample: "t2.micro"
+ kernel:
+ description: The kernel associated with this instance, if applicable.
+ type: str
+ returned: always
+ sample: ""
+ key_name:
+ description: The name of the key pair, if this instance was launched with an associated key pair.
+ type: str
+ returned: always
+ sample: "ansible-test-88312190_setup"
+ launch_time:
+ description: The time the instance was launched.
+ type: str
+ returned: always
+ sample: "2021-05-09T19:30:26.000Z"
+ placement:
+ description: The location where the instance launched, if applicable.
+ type: dict
+ returned: always
+ sample: {
+ "availability_zone": "us-east-1a",
+ "group_name": "",
+ "tenancy": "default"
+ }
+ private_dns_name:
+ description: The private DNS hostname assigned to the instance.
+ type: str
+ returned: always
+ sample: "ip-10-176-1-249.ec2.internal"
+ private_ip:
+ description: The private IPv4 address assigned to the instance.
+ type: str
+ returned: always
+ sample: "10.176.1.249"
+ public_dns_name:
+ description: The public DNS name assigned to the instance.
+ type: str
+ returned: always
+ sample: "ec2-203-0-113-1.z-2.compute-1.amazonaws.com"
+ public_ip:
+ description: The public IPv4 address, or the Carrier IP address assigned to the instance, if applicable.
+ type: str
+ returned: always
+ sample: "203.0.113.1"
+ ramdisk:
+ description: The RAM disk associated with this instance, if applicable.
+ type: str
+ returned: always
+ sample: ""
+ root_device_name:
+ description: The device name of the root device volume.
+ type: str
+ returned: always
+ sample: "/dev/xvda"
+ root_device_type:
+ description: The root device type used by the AMI.
+ type: str
+ returned: always
+ sample: "ebs"
+ state:
+ description: The current state of the instance.
+ type: dict
+ returned: always
+ sample: {
+ "code": 80,
+ "name": "stopped"
+ }
+ tags:
+ description: Any tags assigned to the instance.
+ type: dict
+ returned: always
+ sample: {
+ "ResourcePrefix": "ansible-test-88312190-integration_tests"
+ }
+ tenancy:
+ description: The tenancy of the instance (if the instance is running in a VPC).
+ type: str
+ returned: always
+ sample: "default"
+ virtualization_type:
+ description: The virtualization type of the instance.
+ type: str
+ returned: always
+ sample: "hvm"
+ monitoring:
+ description: The monitoring for the instance.
+ type: dict
+ returned: always
+ sample: {
+ "state": "disabled"
+ }
+ capacity_reservation_specification:
+ description: Information about the Capacity Reservation targeting option.
+ type: dict
+ returned: always
+ sample: {
+ "capacity_reservation_preference": "open"
+ }
+ client_token:
+ description: The idempotency token you provided when you launched the instance, if applicable.
+ type: str
+ returned: always
+ sample: ""
+ cpu_options:
+ description: The CPU options for the instance.
+ type: dict
+ returned: always
+ sample: {
+ "core_count": 1,
+ "threads_per_core": 1
+ }
+ ena_support:
+ description: Specifies whether enhanced networking with ENA is enabled.
+ type: bool
+ returned: always
+ sample: true
+ enclave_options:
+ description: Indicates whether the instance is enabled for AWS Nitro Enclaves.
+ type: dict
+ returned: always
+ sample: {
+ "enabled": false
+ }
+ hibernation_options:
+ description: Indicates whether the instance is enabled for hibernation.
+ type: dict
+ returned: always
+ sample: {
+ "configured": false
+ }
+ network_interfaces:
+ description: The network interfaces for the instance.
+ type: list
+ returned: always
+ sample: [
+ {
+ "attachment": {
+ "attach_time": "2021-05-09T19:30:57+00:00",
+ "attachment_id": "eni-attach-07341f2560be6c8fc",
+ "delete_on_termination": true,
+ "device_index": 0,
+ "network_card_index": 0,
+ "status": "attached"
+ },
+ "description": "",
+ "groups": [
+ {
+ "group_id": "sg-0c6562ab3d435619f",
+ "group_name": "ansible-test-88312190_setup"
+ }
+ ],
+ "interface_type": "interface",
+ "ipv6_addresses": [],
+ "mac_address": "0e:0e:36:60:67:cf",
+ "network_interface_id": "eni-061dee20eba3b445a",
+ "owner_id": "721066863947",
+ "private_dns_name": "ip-10-176-1-178.ec2.internal",
+ "private_ip_address": "10.176.1.178",
+ "private_ip_addresses": [
+ {
+ "primary": true,
+ "private_dns_name": "ip-10-176-1-178.ec2.internal",
+ "private_ip_address": "10.176.1.178"
+ }
+ ],
+ "source_dest_check": true,
+ "status": "in-use",
+ "subnet_id": "subnet-069d3e2eab081955d",
+ "vpc_id": "vpc-0b6879b6ca2e9be2b"
+ }
+ ]
+ vpc_id:
+ description: The ID of the VPC in which the instance is running.
+ type: str
+ returned: always
+ sample: "vpc-0b6879b6ca2e9be2b"
+ subnet_id:
+ description: The ID of the subnet in which the instance is running.
+ type: str
+ returned: always
+ sample: "subnet-069d3e2eab081955d"
+ state_transition_reason:
+ description: The reason for the most recent state transition. This might be an empty string.
+ type: str
+ returned: always
+ sample: "User initiated (2021-05-09 19:31:28 GMT)"
+ state_reason:
+ description: The reason for the most recent state transition.
+ type: dict
+ returned: always
+ sample: {
+ "code": "Client.UserInitiatedShutdown",
+ "message": "Client.UserInitiatedShutdown: User initiated shutdown"
+ }
+ security_groups:
+ description: The security groups for the instance.
+ type: list
+ returned: always
+ sample: [
+ {
+ "group_id": "sg-0c6562ab3d435619f",
+ "group_name": "ansible-test-alinas-mbp-88312190_setup"
+ }
+ ]
+ source_dest_check:
+ description: Indicates whether source/destination checking is enabled.
+ type: bool
+ returned: always
+ sample: true
+ metadata:
+ description: The metadata options for the instance.
+ type: dict
+ returned: always
+ sample: {
+ "http_endpoint": "enabled",
+ "http_put_response_hop_limit": 1,
+ "http_tokens": "optional",
+ "state": "applied"
+ }
+'''
+
+
import time
import datetime
from ast import literal_eval
@@ -1665,6 +1963,9 @@
],
)
+ module.deprecate("The 'ec2' module has been deprecated and replaced by the 'ec2_instance' module",
+ version='4.0.0', collection_name='amazon.aws')
+
if module.params.get('group') and module.params.get('group_id'):
module.deprecate(
msg='Support for passing both group and group_id has been deprecated. '
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_snapshot_facts.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_snapshot_facts.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_snapshot_facts.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_snapshot_facts.py 2021-11-12 18:13:53.000000000 +0000
@@ -14,7 +14,6 @@
description:
- Gather information about ec2 volume snapshots in AWS.
- This module was called C(ec2_snapshot_facts) before Ansible 2.9. The usage did not change.
-requirements: [ boto3 ]
author:
- "Rob White (@wimnat)"
- Aubin Bikouo (@abikouo)
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_snapshot_info.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_snapshot_info.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_snapshot_info.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_snapshot_info.py 2021-11-12 18:13:53.000000000 +0000
@@ -14,7 +14,6 @@
description:
- Gather information about ec2 volume snapshots in AWS.
- This module was called C(ec2_snapshot_facts) before Ansible 2.9. The usage did not change.
-requirements: [ boto3 ]
author:
- "Rob White (@wimnat)"
- Aubin Bikouo (@abikouo)
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_snapshot.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_snapshot.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_snapshot.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_snapshot.py 2021-11-12 18:13:53.000000000 +0000
@@ -37,6 +37,8 @@
snapshot_tags:
description:
- A dictionary of tags to add to the snapshot.
+ - If the volume has a C(Name) tag this will be automatically added to the
+ snapshot.
type: dict
required: false
wait:
@@ -48,9 +50,8 @@
wait_timeout:
description:
- How long before wait gives up, in seconds.
- - Specify 0 to wait forever.
required: false
- default: 0
+ default: 600
type: int
state:
description:
@@ -70,12 +71,10 @@
required: false
default: 0
type: int
-
author: "Will Thames (@willthames)"
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
-
'''
EXAMPLES = '''
@@ -132,22 +131,22 @@
sample: 8
'''
-import time
import datetime
try:
- import boto.exception
+ import botocore
except ImportError:
- pass # Taken care of by ec2.HAS_BOTO
-
-from ..module_utils.core import AnsibleAWSModule
-from ..module_utils.ec2 import HAS_BOTO
-from ..module_utils.ec2 import ec2_connect
+ pass # Taken care of by AnsibleAWSModule
+from ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict
-# Find the most recent snapshot
-def _get_snapshot_starttime(snap):
- return datetime.datetime.strptime(snap.start_time, '%Y-%m-%dT%H:%M:%S.%fZ')
+from ..module_utils.core import AnsibleAWSModule
+from ..module_utils.core import is_boto3_error_code
+from ..module_utils.ec2 import AWSRetry
+from ..module_utils.ec2 import ansible_dict_to_boto3_filter_list
+from ..module_utils.ec2 import ansible_dict_to_boto3_tag_list
+from ..module_utils.ec2 import boto3_tag_list_to_ansible_dict
+from ..module_utils.waiters import get_waiter
def _get_most_recent_snapshot(snapshots, max_snapshot_age_secs=None, now=None):
@@ -163,12 +162,10 @@
return None
if not now:
- now = datetime.datetime.utcnow()
-
- youngest_snapshot = max(snapshots, key=_get_snapshot_starttime)
+ now = datetime.datetime.now(datetime.timezone.utc)
- # See if the snapshot is younger that the given max age
- snapshot_start = datetime.datetime.strptime(youngest_snapshot.start_time, '%Y-%m-%dT%H:%M:%S.%fZ')
+ youngest_snapshot = max(snapshots, key=lambda s: s['StartTime'])
+ snapshot_start = youngest_snapshot['StartTime']
snapshot_age = now - snapshot_start
if max_snapshot_age_secs is not None:
@@ -178,92 +175,168 @@
return youngest_snapshot
-def _create_with_wait(snapshot, wait_timeout_secs, sleep_func=time.sleep):
- """
- Wait for the snapshot to be created
- :param snapshot:
- :param wait_timeout_secs: fail this step after this many seconds
- :param sleep_func:
- :return:
- """
- time_waited = 0
- snapshot.update()
- while snapshot.status != 'completed':
- sleep_func(3)
- snapshot.update()
- time_waited += 3
- if wait_timeout_secs and time_waited > wait_timeout_secs:
- return False
- return True
+def get_volume_by_instance(module, ec2, device_name, instance_id):
+ try:
+ _filter = {
+ 'attachment.instance-id': instance_id,
+ 'attachment.device': device_name
+ }
+ volumes = ec2.describe_volumes(
+ aws_retry=True,
+ Filters=ansible_dict_to_boto3_filter_list(_filter)
+ )['Volumes']
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Failed to describe Volume")
+
+ if not volumes:
+ module.fail_json(
+ msg="Could not find volume with name {0} attached to instance {1}".format(
+ device_name, instance_id
+ )
+ )
+
+ volume = volumes[0]
+ return volume
+
+
+def get_volume_by_id(module, ec2, volume):
+ try:
+ volumes = ec2.describe_volumes(
+ aws_retry=True,
+ VolumeIds=[volume],
+ )['Volumes']
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Failed to describe Volume")
+
+ if not volumes:
+ module.fail_json(
+ msg="Could not find volume with id {0}".format(volume)
+ )
+
+ volume = volumes[0]
+ return volume
+
+
+@AWSRetry.jittered_backoff()
+def _describe_snapshots(ec2, **params):
+ paginator = ec2.get_paginator('describe_snapshots')
+ return paginator.paginate(**params).build_full_result()
+
+
+# Handle SnapshotCreationPerVolumeRateExceeded separately because we need a much
+# longer delay than normal
+@AWSRetry.jittered_backoff(catch_extra_error_codes=['SnapshotCreationPerVolumeRateExceeded'], delay=15)
+def _create_snapshot(ec2, **params):
+ # Fast retry on common failures ('global' rate limits)
+ return ec2.create_snapshot(aws_retry=True, **params)
-def create_snapshot(module, ec2, state=None, description=None, wait=None,
+def get_snapshots_by_volume(module, ec2, volume_id):
+ _filter = {'volume-id': volume_id}
+ try:
+ results = _describe_snapshots(
+ ec2,
+ Filters=ansible_dict_to_boto3_filter_list(_filter)
+ )
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Failed to describe snapshots from volume")
+
+ return results['Snapshots']
+
+
+def create_snapshot(module, ec2, description=None, wait=None,
wait_timeout=None, volume_id=None, instance_id=None,
snapshot_id=None, device_name=None, snapshot_tags=None,
last_snapshot_min_age=None):
snapshot = None
changed = False
- required = [volume_id, snapshot_id, instance_id]
- if required.count(None) != len(required) - 1: # only 1 must be set
- module.fail_json(msg='One and only one of volume_id or instance_id or snapshot_id must be specified')
- if instance_id and not device_name or device_name and not instance_id:
- module.fail_json(msg='Instance ID and device name must both be specified')
-
if instance_id:
- try:
- volumes = ec2.get_all_volumes(filters={'attachment.instance-id': instance_id, 'attachment.device': device_name})
- except boto.exception.BotoServerError as e:
- module.fail_json_aws(e)
-
- if not volumes:
- module.fail_json(msg="Could not find volume with name %s attached to instance %s" % (device_name, instance_id))
-
- volume_id = volumes[0].id
+ volume = get_volume_by_instance(
+ module, ec2, device_name, instance_id
+ )
+ volume_id = volume['VolumeId']
+ else:
+ volume = get_volume_by_id(module, ec2, volume_id)
+ if 'Tags' not in volume:
+ volume['Tags'] = {}
+ if last_snapshot_min_age > 0:
+ current_snapshots = get_snapshots_by_volume(module, ec2, volume_id)
+ last_snapshot_min_age = last_snapshot_min_age * 60 # Convert to seconds
+ snapshot = _get_most_recent_snapshot(
+ current_snapshots,
+ max_snapshot_age_secs=last_snapshot_min_age
+ )
+ # Create a new snapshot if we didn't find an existing one to use
+ if snapshot is None:
+ volume_tags = boto3_tag_list_to_ansible_dict(volume['Tags'])
+ volume_name = volume_tags.get('Name')
+ _tags = dict()
+ if volume_name:
+ _tags['Name'] = volume_name
+ if snapshot_tags:
+ _tags.update(snapshot_tags)
- if state == 'absent':
- if not snapshot_id:
- module.fail_json(msg='snapshot_id must be set when state is absent')
+ params = {'VolumeId': volume_id}
+ if description:
+ params['Description'] = description
+ if _tags:
+ params['TagSpecifications'] = [{
+ 'ResourceType': 'snapshot',
+ 'Tags': ansible_dict_to_boto3_tag_list(_tags),
+ }]
+ try:
+ if module.check_mode:
+ module.exit_json(changed=True, msg='Would have created a snapshot if not in check mode',
+ volume_id=volume['VolumeId'], volume_size=volume['Size'])
+ snapshot = _create_snapshot(ec2, **params)
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Failed to create snapshot")
+ changed = True
+ if wait:
+ waiter = get_waiter(ec2, 'snapshot_completed')
try:
- ec2.delete_snapshot(snapshot_id)
- except boto.exception.BotoServerError as e:
- # exception is raised if snapshot does not exist
- if e.error_code == 'InvalidSnapshot.NotFound':
- module.exit_json(changed=False)
- else:
- module.fail_json_aws(e)
+ waiter.wait(
+ SnapshotIds=[snapshot['SnapshotId']],
+ WaiterConfig=dict(Delay=3, MaxAttempts=wait_timeout // 3)
+ )
+ except botocore.exceptions.WaiterError as e:
+ module.fail_json_aws(e, msg='Timed out while creating snapshot')
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(
+ e, msg='Error while waiting for snapshot creation'
+ )
+
+ _tags = boto3_tag_list_to_ansible_dict(snapshot['Tags'])
+ _snapshot = camel_dict_to_snake_dict(snapshot)
+ _snapshot['tags'] = _tags
+ results = {
+ 'snapshot_id': snapshot['SnapshotId'],
+ 'volume_id': snapshot['VolumeId'],
+ 'volume_size': snapshot['VolumeSize'],
+ 'tags': _tags,
+ 'snapshots': [_snapshot],
+ }
- # successful delete
- module.exit_json(changed=True)
+ module.exit_json(changed=changed, **results)
- if last_snapshot_min_age > 0:
- try:
- current_snapshots = ec2.get_all_snapshots(filters={'volume_id': volume_id})
- except boto.exception.BotoServerError as e:
- module.fail_json_aws(e)
- last_snapshot_min_age = last_snapshot_min_age * 60 # Convert to seconds
- snapshot = _get_most_recent_snapshot(current_snapshots,
- max_snapshot_age_secs=last_snapshot_min_age)
+def delete_snapshot(module, ec2, snapshot_id):
+ if module.check_mode:
+ try:
+ _describe_snapshots(ec2, SnapshotIds=[snapshot_id])
+ module.exit_json(changed=True, msg='Would have deleted snapshot if not in check mode')
+ except is_boto3_error_code('InvalidSnapshot.NotFound'):
+ module.exit_json(changed=False, msg='Invalid snapshot ID - snapshot not found')
try:
- # Create a new snapshot if we didn't find an existing one to use
- if snapshot is None:
- snapshot = ec2.create_snapshot(volume_id, description=description)
- changed = True
- if wait:
- if not _create_with_wait(snapshot, wait_timeout):
- module.fail_json(msg='Timed out while creating snapshot.')
- if snapshot_tags:
- for k, v in snapshot_tags.items():
- snapshot.add_tag(k, v)
- except boto.exception.BotoServerError as e:
- module.fail_json_aws(e)
-
- module.exit_json(changed=changed,
- snapshot_id=snapshot.id,
- volume_id=snapshot.volume_id,
- volume_size=snapshot.volume_size,
- tags=snapshot.tags.copy())
+ ec2.delete_snapshot(aws_retry=True, SnapshotId=snapshot_id)
+ except is_boto3_error_code('InvalidSnapshot.NotFound'):
+ module.exit_json(changed=False)
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e: # pylint: disable=duplicate-except
+ module.fail_json_aws(e, msg="Failed to delete snapshot")
+
+ # successful delete
+ module.exit_json(changed=True)
def create_snapshot_ansible_module():
@@ -274,21 +347,39 @@
snapshot_id=dict(),
device_name=dict(),
wait=dict(type='bool', default=True),
- wait_timeout=dict(type='int', default=0),
+ wait_timeout=dict(type='int', default=600),
last_snapshot_min_age=dict(type='int', default=0),
snapshot_tags=dict(type='dict', default=dict()),
state=dict(choices=['absent', 'present'], default='present'),
)
- module = AnsibleAWSModule(argument_spec=argument_spec, check_boto3=False)
+ mutually_exclusive = [
+ ('instance_id', 'snapshot_id', 'volume_id'),
+ ]
+ required_if = [
+ ('state', 'absent', ('snapshot_id',)),
+ ]
+ required_one_of = [
+ ('instance_id', 'snapshot_id', 'volume_id'),
+ ]
+ required_together = [
+ ('instance_id', 'device_name'),
+ ]
+
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ mutually_exclusive=mutually_exclusive,
+ required_if=required_if,
+ required_one_of=required_one_of,
+ required_together=required_together,
+ supports_check_mode=True,
+ )
+
return module
def main():
module = create_snapshot_ansible_module()
- if not HAS_BOTO:
- module.fail_json(msg='boto required for this module')
-
volume_id = module.params.get('volume_id')
snapshot_id = module.params.get('snapshot_id')
description = module.params.get('description')
@@ -300,22 +391,28 @@
snapshot_tags = module.params.get('snapshot_tags')
state = module.params.get('state')
- ec2 = ec2_connect(module)
+ ec2 = module.client('ec2', retry_decorator=AWSRetry.jittered_backoff(retries=10))
- create_snapshot(
- module=module,
- state=state,
- description=description,
- wait=wait,
- wait_timeout=wait_timeout,
- ec2=ec2,
- volume_id=volume_id,
- instance_id=instance_id,
- snapshot_id=snapshot_id,
- device_name=device_name,
- snapshot_tags=snapshot_tags,
- last_snapshot_min_age=last_snapshot_min_age
- )
+ if state == 'absent':
+ delete_snapshot(
+ module=module,
+ ec2=ec2,
+ snapshot_id=snapshot_id,
+ )
+ else:
+ create_snapshot(
+ module=module,
+ description=description,
+ wait=wait,
+ wait_timeout=wait_timeout,
+ ec2=ec2,
+ volume_id=volume_id,
+ instance_id=instance_id,
+ snapshot_id=snapshot_id,
+ device_name=device_name,
+ snapshot_tags=snapshot_tags,
+ last_snapshot_min_age=last_snapshot_min_age,
+ )
if __name__ == '__main__':
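The rewritten ``ec2_snapshot`` code above repeatedly converts plain filter dicts such as ``{'volume-id': volume_id}`` into boto3's ``Filters`` shape via ``ansible_dict_to_boto3_filter_list``. As a minimal sketch of what that conversion does (a hypothetical stand-in, not the collection's actual helper):

```python
def dict_to_boto3_filter_list(filters):
    """Convert {'volume-id': 'vol-1'} into boto3's
    [{'Name': 'volume-id', 'Values': ['vol-1']}] shape.
    Scalar values are wrapped in a single-element list;
    lists and tuples are passed through as the Values list."""
    boto3_filters = []
    for name, value in filters.items():
        if not isinstance(value, (list, tuple)):
            value = [value]
        boto3_filters.append({'Name': name, 'Values': list(value)})
    return boto3_filters


print(dict_to_boto3_filter_list({
    'attachment.instance-id': 'i-0123456789abcdef0',
    'attachment.device': '/dev/sdb',
}))
```

The resulting list is what ``describe_volumes``/``describe_snapshots`` accept as their ``Filters`` argument in the hunks above.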
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_spot_instance_info.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_spot_instance_info.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_spot_instance_info.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_spot_instance_info.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,167 @@
+#!/usr/bin/python
+# This file is part of Ansible
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import absolute_import, division, print_function
+__metaclass__ = type
+
+
+DOCUMENTATION = '''
+---
+module: ec2_spot_instance_info
+version_added: 2.0.0
+short_description: Gather information about ec2 spot instance requests
+description:
+ - Describes the specified Spot Instance requests.
+author:
+ - Mandar Vijay Kulkarni (@mandar242)
+options:
+ filters:
+ description:
+ - A dict of filters to apply. Each dict item consists of a filter key and a filter value.
+ - Filter names and values are case sensitive.
+ - See U(https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeSpotInstanceRequests.html) for possible filters.
+ required: false
+ default: {}
+ type: dict
+ spot_instance_request_ids:
+ description:
+ - One or more Spot Instance request IDs.
+ required: false
+ type: list
+ elements: str
+
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+'''
+
+EXAMPLES = '''
+# Note: These examples do not set authentication details, see the AWS Guide for details.
+
+- name: describe the Spot Instance requests based on request IDs
+ amazon.aws.ec2_spot_instance_info:
+ spot_instance_request_ids:
+ - sir-12345678
+
+- name: describe the Spot Instance requests and filter results based on instance type
+ amazon.aws.ec2_spot_instance_info:
+ spot_instance_request_ids:
+ - sir-12345678
+ - sir-13579246
+ - sir-87654321
+ filters:
+ launch.instance-type: t3.medium
+
+- name: describe the Spot requests filtered using multiple filters
+ amazon.aws.ec2_spot_instance_info:
+ filters:
+ state: active
+ launch.block-device-mapping.device-name: /dev/sdb
+
+'''
+
+RETURN = '''
+spot_request:
+ description: The gathered information about specified spot instance requests.
+ returned: when success
+ type: dict
+ sample: {
+ "create_time": "2021-09-01T21:05:57+00:00",
+ "instance_id": "i-08877936b801ac475",
+ "instance_interruption_behavior": "terminate",
+ "launch_specification": {
+ "ebs_optimized": false,
+ "image_id": "ami-0443305dabd4be2bc",
+ "instance_type": "t2.medium",
+ "key_name": "zuul",
+ "monitoring": {
+ "enabled": false
+ },
+ "placement": {
+ "availability_zone": "us-east-2b"
+ },
+ "security_groups": [
+ {
+ "group_id": "sg-01f9833207d53b937",
+ "group_name": "default"
+ }
+ ],
+ "subnet_id": "subnet-07d906b8358869bda"
+ },
+ "launched_availability_zone": "us-east-2b",
+ "product_description": "Linux/UNIX",
+ "spot_instance_request_id": "sir-c3cp9jsk",
+ "spot_price": "0.046400",
+ "state": "active",
+ "status": {
+ "code": "fulfilled",
+ "message": "Your spot request is fulfilled.",
+ "update_time": "2021-09-01T21:05:59+00:00"
+ },
+ "tags": {},
+ "type": "one-time",
+ "valid_until": "2021-09-08T21:05:57+00:00"
+ }
+'''
+
+
+try:
+ import botocore
+except ImportError:
+ pass # Handled by AnsibleAWSModule
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry
+from ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import ansible_dict_to_boto3_filter_list
+
+
+def _describe_spot_instance_requests(connection, **params):
+ paginator = connection.get_paginator('describe_spot_instance_requests')
+ return paginator.paginate(**params).build_full_result()
+
+
+def describe_spot_instance_requests(connection, module):
+
+ params = {}
+
+ if module.params.get('filters'):
+ params['Filters'] = ansible_dict_to_boto3_filter_list(module.params.get('filters'))
+ if module.params.get('spot_instance_request_ids'):
+ params['SpotInstanceRequestIds'] = module.params.get('spot_instance_request_ids')
+
+ try:
+ describe_spot_instance_requests_response = _describe_spot_instance_requests(connection, **params)['SpotInstanceRequests']
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg='Failed to describe spot instance requests')
+
+ spot_request = []
+ for response_list_item in describe_spot_instance_requests_response:
+ spot_request.append(camel_dict_to_snake_dict(response_list_item))
+
+ if len(spot_request) == 0:
+ module.exit_json(msg='No spot requests found for specified options')
+
+ module.exit_json(spot_request=spot_request)
+
+
+def main():
+
+ argument_spec = dict(
+ filters=dict(default={}, type='dict'),
+ spot_instance_request_ids=dict(default=[], type='list', elements='str'),
+ )
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ supports_check_mode=True
+ )
+ try:
+ connection = module.client('ec2', retry_decorator=AWSRetry.jittered_backoff())
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg='Failed to connect to AWS')
+
+ describe_spot_instance_requests(connection, module)
+
+
+if __name__ == '__main__':
+ main()
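``ec2_spot_instance_info`` returns boto3's CamelCase response keys converted to snake_case via ``camel_dict_to_snake_dict``, which is how the ``RETURN`` sample above ends up with keys like ``spot_instance_request_id``. A rough sketch of that conversion (a hypothetical reimplementation of the standard regex recipe, assuming simple CamelCase keys):

```python
import re


def camel_to_snake_dict(data):
    """Recursively convert CamelCase dict keys (boto3 style)
    to snake_case (the shape shown in the RETURN samples)."""
    def convert_key(key):
        # Insert underscores before capitalized word boundaries, then lowercase.
        s1 = re.sub(r'(.)([A-Z][a-z]+)', r'\1_\2', key)
        return re.sub(r'([a-z0-9])([A-Z])', r'\1_\2', s1).lower()

    def convert(value):
        if isinstance(value, dict):
            return {convert_key(k): convert(v) for k, v in value.items()}
        if isinstance(value, list):
            return [convert(v) for v in value]
        return value

    return convert(data)


print(camel_to_snake_dict({'SpotInstanceRequestId': 'sir-c3cp9jsk'}))
```

The module applies the real helper to each item of ``SpotInstanceRequests`` before building the ``spot_request`` result list.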
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_spot_instance.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_spot_instance.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_spot_instance.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_spot_instance.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,625 @@
+#!/usr/bin/python
+# This file is part of Ansible
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import absolute_import, division, print_function
+__metaclass__ = type
+
+
+DOCUMENTATION = '''
+---
+module: ec2_spot_instance
+version_added: 2.0.0
+short_description: Request, stop, reboot or cancel a spot instance
+description:
+ - Creates or cancels spot instance requests.
+author:
+ - Sri Rachana Achyuthuni (@srirachanaachyuthuni)
+options:
+ zone_group:
+ description:
+ - Name for logical grouping of spot requests.
+ - All spot instances in the request are launched in the same availability zone.
+ type: str
+ client_token:
+ description: The idempotency token you provided when you launched the instance, if applicable.
+ type: str
+ count:
+ description:
+ - Number of instances to launch.
+ default: 1
+ type: int
+ interruption:
+ description:
+ - The behavior when a Spot Instance is interrupted.
+ choices: [ "hibernate", "stop", "terminate" ]
+ type: str
+ default: terminate
+ launch_group:
+ description:
+ - Launch group for spot requests, see U(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/how-spot-instances-work.html#spot-launch-group).
+ type: str
+ launch_specification:
+ description:
+ - The launch specification.
+ type: dict
+ suboptions:
+ security_group_ids:
+ description:
+ - Security group id (or list of ids) to use with the instance.
+ type: list
+ elements: str
+ security_groups:
+ description:
+ - Security group name (or list of group names) to use with the instance.
+ - Only supported with EC2 Classic. To launch in a VPC, use C(group_id).
+ type: list
+ elements: str
+ key_name:
+ description:
+ - Key to use on the instance.
+ - The SSH key must already exist in AWS in order to use this argument.
+ - Keys can be created / deleted using the M(amazon.aws.ec2_key) module.
+ type: str
+ subnet_id:
+ description:
+ - The ID of the subnet in which to launch the instance.
+ type: str
+ user_data:
+ description:
+ - The base64-encoded user data for the instance. User data is limited to 16 KB.
+ type: str
+ block_device_mappings:
+ description:
+ - A list of hash/dictionaries of volumes to add to the new instance.
+ type: list
+ elements: dict
+ suboptions:
+ device_name:
+ description:
+ - The device name (for example, /dev/sdh or xvdh).
+ type: str
+ virtual_name:
+ description:
+ - The virtual device name.
+ type: str
+ ebs:
+ description:
+ - Parameters used to automatically set up EBS volumes when the instance is launched,
+ see U(https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2.html#EC2.Client.request_spot_instances)
+ type: dict
+ no_device:
+ description:
+ - To omit the device from the block device mapping, specify an empty string.
+ type: str
+ ebs_optimized:
+ description:
+ - Whether instance is using optimized EBS volumes, see U(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html).
+ default: false
+ type: bool
+ iam_instance_profile:
+ description:
+ - The IAM instance profile.
+ type: dict
+ suboptions:
+ arn:
+ description:
+ - The Amazon Resource Name (ARN) of the instance profile.
+ - Only one of I(arn) or I(name) may be specified.
+ type: str
+ name:
+ description:
+ - The name of the instance profile.
+ - Only one of I(arn) or I(name) may be specified.
+ type: str
+ image_id:
+ description:
+ - The ID of the AMI.
+ type: str
+ instance_type:
+ description:
+ - Instance type to use for the instance, see U(https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html).
+ - Required when creating a new instance.
+ type: str
+ kernel_id:
+ description:
+ - The ID of the kernel.
+ type: str
+ network_interfaces:
+ description:
+ - One or more network interfaces. If you specify a network interface, you must specify subnet IDs and security group IDs using the network interface.
+ type: list
+ elements: dict
+ suboptions:
+ associate_public_ip_address:
+ description:
+ - Indicates whether to assign a public IPv4 address to an instance you launch in a VPC.
+ type: bool
+ delete_on_termination:
+ description:
+ - If set to true, the interface is deleted when the instance is terminated.
+ You can specify true only if creating a new network interface when launching an instance.
+ type: bool
+ description:
+ description:
+ - The description of the network interface. Applies only if creating a network interface when launching an instance.
+ type: str
+ device_index:
+ description:
+ - The position of the network interface in the attachment order. A primary network interface has a device index of 0.
+ - If you specify a network interface when launching an instance, you must specify the device index.
+ type: int
+ groups:
+ description:
+ - The IDs of the security groups for the network interface. Applies only if creating a network interface when launching an instance.
+ type: list
+ elements: str
+ ipv6_address_count:
+ description:
+ - The number of IPv6 addresses to assign to the network interface.
+ type: int
+ ipv6_addresses:
+ description:
+ - One or more IPv6 addresses to assign to the network interface.
+ type: list
+ elements: dict
+ suboptions:
+ ipv6address:
+ description: The IPv6 address.
+ type: str
+ network_interface_id:
+ description:
+ - The ID of the network interface.
+ type: str
+ private_ip_address:
+ description:
+ - The private IPv4 address of the network interface.
+ type: str
+ private_ip_addresses:
+ description:
+ - One or more private IPv4 addresses to assign to the network interface.
+ type: list
+ elements: dict
+ secondary_private_ip_address_count:
+ description:
+ - The number of secondary private IPv4 addresses.
+ type: int
+ subnet_id:
+ description:
+ - The ID of the subnet associated with the network interface.
+ type: str
+ associate_carrier_ip_address:
+ description:
+ - Indicates whether to assign a carrier IP address to the network interface.
+ type: bool
+ interface_type:
+ description:
+ - The type of network interface.
+ type: str
+ choices: ['interface', 'efa']
+ network_card_index:
+ description:
+ - The index of the network card.
+ type: int
+ ipv4_prefixes:
+ description:
+ - One or more IPv4 delegated prefixes to be assigned to the network interface.
+ type: list
+ elements: dict
+ ipv4_prefix_count:
+ description:
+ - The number of IPv4 delegated prefixes to be automatically assigned to the network interface.
+ type: int
+ ipv6_prefixes:
+ description:
+ - One or more IPv6 delegated prefixes to be assigned to the network interface.
+ type: list
+ elements: dict
+ ipv6_prefix_count:
+ description:
+ - The number of IPv6 delegated prefixes to be automatically assigned to the network interface.
+ type: int
+ placement:
+ description:
+ - The placement information for the instance.
+ type: dict
+ suboptions:
+ availability_zone:
+ description:
+ - The Availability Zone.
+ type: str
+ group_name:
+ description:
+ - The name of the placement group.
+ type: str
+ tenancy:
+ description:
+ - The tenancy of the host.
+ type: str
+ choices: ['default', 'dedicated', 'host']
+ default: default
+ ramdisk_id:
+ description:
+ - The ID of the RAM disk.
+ type: str
+ monitoring:
+ description:
+ - Indicates whether basic or detailed monitoring is enabled for the instance.
+ type: dict
+ suboptions:
+ enabled:
+ description:
+ - Indicates whether detailed monitoring is enabled. Otherwise, basic monitoring is enabled.
+ type: bool
+ default: false
+ state:
+ description:
+ - Whether the spot request should be created or removed.
+ - When I(state=present), I(launch_specification) is required.
+ - When I(state=absent), I(spot_instance_request_ids) is required.
+ default: 'present'
+ choices: [ 'absent', 'present' ]
+ type: str
+ spot_price:
+ description:
+ - Maximum spot price to bid. If not set, a regular on-demand instance is requested.
+ - A spot request is made with this maximum bid. When it is filled, the instance is started.
+ type: str
+ spot_type:
+ description:
+ - The type of spot request.
+ - After being interrupted a C(persistent) spot instance will be started once there is capacity to fill the request again.
+ default: 'one-time'
+ choices: [ "one-time", "persistent" ]
+ type: str
+ tags:
+ description:
+ - A dictionary of key-value pairs for tagging the Spot Instance request on creation.
+ type: dict
+ spot_instance_request_ids:
+ description:
+ - List of strings with IDs of spot requests to be cancelled.
+ default: []
+ type: list
+ elements: str
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+'''
+
+EXAMPLES = '''
+# Note: These examples do not set authentication details, see the AWS Guide for details.
+
+- name: Simple Spot Request Creation
+ amazon.aws.ec2_spot_instance:
+ launch_specification:
+ image_id: ami-123456789
+ key_name: my-keypair
+ instance_type: t2.medium
+
+- name: Spot Request Creation with more options
+ amazon.aws.ec2_spot_instance:
+ launch_specification:
+ image_id: ami-123456789
+ key_name: my-keypair
+ instance_type: t2.medium
+ subnet_id: subnet-12345678
+ block_device_mappings:
+ - device_name: /dev/sdb
+ ebs:
+ delete_on_termination: True
+ volume_type: gp3
+ volume_size: 5
+ - device_name: /dev/sdc
+ ebs:
+ delete_on_termination: True
+ volume_type: io2
+ volume_size: 30
+ network_interfaces:
+ - associate_public_ip_address: False
+ delete_on_termination: True
+ device_index: 0
+ placement:
+ availability_zone: us-west-2a
+ monitoring:
+ enabled: False
+ spot_price: 0.002
+ tags:
+ Environment: Testing
+
+- name: Spot Request Termination
+ amazon.aws.ec2_spot_instance:
+ spot_instance_request_ids: ['sir-12345678', 'sir-abcdefgh']
+ state: absent
+'''
+
+RETURN = '''
+spot_request:
+ description: The spot instance request details after creation.
+ returned: when success
+ type: dict
+ sample: {
+ "create_time": "2021-08-23T22:59:12+00:00",
+ "instance_interruption_behavior": "terminate",
+ "launch_specification": {
+ "block_device_mappings": [
+ {
+ "device_name": "/dev/sdb",
+ "ebs": {
+ "delete_on_termination": true,
+ "volume_size": 5,
+ "volume_type": "gp3"
+ }
+ }
+ ],
+ "ebs_optimized": false,
+ "iam_instance_profile": {
+ "arn": "arn:aws:iam::EXAMPLE:instance-profile/myinstanceprofile"
+ },
+ "image_id": "ami-083ac7c7ecf9bb9b0",
+ "instance_type": "t2.small",
+ "key_name": "mykey",
+ "monitoring": {
+ "enabled": false
+ },
+ "network_interfaces": [
+ {
+ "associate_public_ip_address": false,
+ "delete_on_termination": true,
+ "device_index": 0
+ }
+ ],
+ "placement": {
+ "availability_zone": "us-west-2a",
+ "tenancy": "default"
+ },
+ "security_groups": [
+ {
+ "group_name": "default"
+ }
+ ]
+ },
+ "product_description": "Linux/UNIX",
+ "spot_instance_request_id": "sir-1234abcd",
+ "spot_price": "0.00600",
+ "state": "open",
+ "status": {
+ "code": "pending-evaluation",
+ "message": "Your Spot request has been submitted for review, and is pending evaluation.",
+ "update_time": "2021-08-23T22:59:12+00:00"
+ },
+ "type": "one-time"
+
+ }
+
+cancelled_spot_request:
+ description: A message listing the spot instance requests that have been cancelled.
+ returned: always
+ type: str
+ sample: 'Spot requests with IDs: sir-1234abcd have been cancelled'
+'''
+# TODO: add support for datetime-based parameters
+# import datetime
+# import time
+
+try:
+ import botocore
+except ImportError:
+ pass # Handled by AnsibleAWSModule
+from ..module_utils.core import AnsibleAWSModule
+from ..module_utils.ec2 import AWSRetry
+from ansible.module_utils.common.dict_transformations import snake_dict_to_camel_dict
+from ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict
+from ..module_utils.ec2 import ansible_dict_to_boto3_tag_list
+from ..module_utils.ec2 import boto3_tag_list_to_ansible_dict
+from ..module_utils.core import is_boto3_error_code
+
+
+def build_launch_specification(launch_spec):
+ """
+ Remove keys that have a value of None from Launch Specification
+ Descend into these subkeys:
+ network_interfaces
+ block_device_mappings
+ monitoring
+ placement
+ iam_instance_profile
+ """
+ assigned_keys = dict((k, v) for k, v in launch_spec.items() if v is not None)
+
+ sub_key_to_build = ['placement', 'iam_instance_profile', 'monitoring']
+ for subkey in sub_key_to_build:
+ if launch_spec[subkey] is not None:
+ assigned_keys[subkey] = dict((k, v) for k, v in launch_spec[subkey].items() if v is not None)
+
+ if launch_spec['network_interfaces'] is not None:
+ interfaces = []
+ for iface in launch_spec['network_interfaces']:
+ interfaces.append(dict((k, v) for k, v in iface.items() if v is not None))
+ assigned_keys['network_interfaces'] = interfaces
+
+ if launch_spec['block_device_mappings'] is not None:
+ block_devs = []
+ for dev in launch_spec['block_device_mappings']:
+ block_devs.append(
+ dict((k, v) for k, v in dev.items() if v is not None))
+ assigned_keys['block_device_mappings'] = block_devs
+
+ return snake_dict_to_camel_dict(assigned_keys, capitalize_first=True)
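The None-stripping plus snake_case-to-CamelCase step performed by ``build_launch_specification`` can be sketched standalone; ``snake_to_camel`` below is a simplified stand-in for ``snake_dict_to_camel_dict`` (the real helper lives in ``ansible.module_utils.common.dict_transformations``), so treat this as an illustration rather than the collection's implementation:

```python
# Simplified sketch: drop keys whose value is None, then convert snake_case
# keys to the CapitalizedCamelCase form that request_spot_instances() expects.
# snake_to_camel stands in for the collection's snake_dict_to_camel_dict.

def strip_none(d):
    return {k: v for k, v in d.items() if v is not None}

def snake_to_camel(obj):
    if isinstance(obj, dict):
        return {''.join(p.capitalize() for p in k.split('_')): snake_to_camel(v)
                for k, v in obj.items()}
    if isinstance(obj, list):
        return [snake_to_camel(v) for v in obj]
    return obj

launch_spec = {'image_id': 'ami-123456789', 'key_name': None,
               'monitoring': {'enabled': False, 'extra': None}}
cleaned = strip_none(launch_spec)
cleaned['monitoring'] = strip_none(launch_spec['monitoring'])
print(snake_to_camel(cleaned))
# {'ImageId': 'ami-123456789', 'Monitoring': {'Enabled': False}}
```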
+
+
+def request_spot_instances(module, connection):
+
+ # connection.request_spot_instances() always creates a new spot request
+ changed = True
+
+ if module.check_mode:
+ module.exit_json(changed=changed)
+
+ params = {}
+
+ if module.params.get('launch_specification'):
+ params['LaunchSpecification'] = build_launch_specification(module.params.get('launch_specification'))
+
+ if module.params.get('zone_group'):
+ params['AvailabilityZoneGroup'] = module.params.get('zone_group')
+
+ if module.params.get('count'):
+ params['InstanceCount'] = module.params.get('count')
+
+ if module.params.get('launch_group'):
+ params['LaunchGroup'] = module.params.get('launch_group')
+
+ if module.params.get('spot_price'):
+ params['SpotPrice'] = module.params.get('spot_price')
+
+ if module.params.get('spot_type'):
+ params['Type'] = module.params.get('spot_type')
+
+ if module.params.get('client_token'):
+ params['ClientToken'] = module.params.get('client_token')
+
+ if module.params.get('interruption'):
+ params['InstanceInterruptionBehavior'] = module.params.get('interruption')
+
+ if module.params.get('tags'):
+ params['TagSpecifications'] = [{
+ 'ResourceType': 'spot-instances-request',
+ 'Tags': ansible_dict_to_boto3_tag_list(module.params.get('tags')),
+ }]
+
+ # TODO: add support for datetime-based parameters
+ # params['ValidFrom'] = module.params.get('valid_from')
+ # params['ValidUntil'] = module.params.get('valid_until')
+
+ try:
+ request_spot_instance_response = (connection.request_spot_instances(aws_retry=True, **params))['SpotInstanceRequests'][0]
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg='Error while creating the spot instance request')
+
+ request_spot_instance_response['Tags'] = boto3_tag_list_to_ansible_dict(request_spot_instance_response.get('Tags', []))
+ spot_request = camel_dict_to_snake_dict(request_spot_instance_response, ignore_list=['Tags'])
+ module.exit_json(spot_request=spot_request, changed=changed)
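The tag handling above can be illustrated in isolation. ``to_boto3_tag_list`` and ``to_ansible_dict`` below mirror the behaviour of the collection's ``ansible_dict_to_boto3_tag_list`` and ``boto3_tag_list_to_ansible_dict`` helpers; they are sketches written for this example, not the helpers themselves:

```python
# Sketch of how the module's tags dict becomes the TagSpecifications payload
# passed to request_spot_instances, and how the response's tag list is turned
# back into an Ansible-style dict.

def to_boto3_tag_list(tags):
    return [{'Key': k, 'Value': v} for k, v in tags.items()]

def to_ansible_dict(tag_list):
    # inverse conversion, applied to the API response
    return {t['Key']: t['Value'] for t in tag_list}

tags = {'Environment': 'Testing'}
tag_specifications = [{
    'ResourceType': 'spot-instances-request',
    'Tags': to_boto3_tag_list(tags),
}]
print(tag_specifications[0]['Tags'])   # [{'Key': 'Environment', 'Value': 'Testing'}]
print(to_ansible_dict(tag_specifications[0]['Tags']))  # {'Environment': 'Testing'}
```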
+
+
+def cancel_spot_instance_requests(module, connection):
+
+ changed = False
+ spot_instance_request_ids = module.params.get('spot_instance_request_ids')
+ requests_exist = dict()
+ try:
+ paginator = connection.get_paginator('describe_spot_instance_requests').paginate(SpotInstanceRequestIds=spot_instance_request_ids,
+ Filters=[{'Name': 'state', 'Values': ['open', 'active']}])
+ jittered_retry = AWSRetry.jittered_backoff()
+ requests_exist = jittered_retry(paginator.build_full_result)()
+ except is_boto3_error_code('InvalidSpotInstanceRequestID.NotFound'):
+ requests_exist['SpotInstanceRequests'] = []
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e: # pylint: disable=duplicate-except
+ module.fail_json_aws(e, msg="Failure when describing spot requests")
+
+ try:
+ if len(requests_exist['SpotInstanceRequests']) > 0:
+ changed = True
+ if module.check_mode:
+ module.exit_json(changed=changed,
+ msg='Would have cancelled Spot request {0}'.format(spot_instance_request_ids))
+
+ connection.cancel_spot_instance_requests(aws_retry=True, SpotInstanceRequestIds=module.params.get('spot_instance_request_ids'))
+ module.exit_json(changed=changed, msg='Cancelled Spot request {0}'.format(module.params.get('spot_instance_request_ids')))
+ else:
+ module.exit_json(changed=changed, msg='Spot request not found or already cancelled')
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg='Error while cancelling the spot instance request')
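The pagination pattern used in ``cancel_spot_instance_requests`` (``get_paginator(...).paginate(...)`` followed by ``build_full_result``) merges every page of ``describe_spot_instance_requests`` into one response. ``FakePaginator`` below is an offline stand-in for the botocore paginator, so the flow can be followed without an AWS connection:

```python
# Offline sketch of the paginator flow: build_full_result concatenates the
# SpotInstanceRequests list across all pages. FakePaginator stands in for the
# real botocore paginator object.

class FakePaginator:
    def __init__(self, pages):
        self.pages = pages

    def build_full_result(self):
        merged = {'SpotInstanceRequests': []}
        for page in self.pages:
            merged['SpotInstanceRequests'].extend(page['SpotInstanceRequests'])
        return merged

pages = [{'SpotInstanceRequests': [{'SpotInstanceRequestId': 'sir-1234abcd'}]},
         {'SpotInstanceRequests': [{'SpotInstanceRequestId': 'sir-5678efgh'}]}]
result = FakePaginator(pages).build_full_result()
print(len(result['SpotInstanceRequests']))  # 2
```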
+
+
+def main():
+ network_interface_options = dict(
+ associate_public_ip_address=dict(type='bool'),
+ delete_on_termination=dict(type='bool'),
+ description=dict(type='str'),
+ device_index=dict(type='int'),
+ groups=dict(type='list', elements='str'),
+ ipv6_address_count=dict(type='int'),
+ ipv6_addresses=dict(type='list', elements='dict', options=dict(ipv6address=dict(type='str'))),
+ network_interface_id=dict(type='str'),
+ private_ip_address=dict(type='str'),
+ private_ip_addresses=dict(type='list', elements='dict'),
+ secondary_private_ip_address_count=dict(type='int'),
+ subnet_id=dict(type='str'),
+ associate_carrier_ip_address=dict(type='bool'),
+ interface_type=dict(type='str', choices=['interface', 'efa']),
+ network_card_index=dict(type='int'),
+ ipv4_prefixes=dict(type='list', elements='dict'),
+ ipv4_prefix_count=dict(type='int'),
+ ipv6_prefixes=dict(type='list', elements='dict'),
+ ipv6_prefix_count=dict(type='int')
+ )
+ block_device_mappings_options = dict(
+ device_name=dict(type='str'),
+ virtual_name=dict(type='str'),
+ ebs=dict(type='dict'),
+ no_device=dict(type='str'),
+ )
+ monitoring_options = dict(
+ enabled=dict(type='bool', default=False)
+ )
+ placement_options = dict(
+ availability_zone=dict(type='str'),
+ group_name=dict(type='str'),
+ tenancy=dict(type='str', choices=['default', 'dedicated', 'host'], default='default')
+ )
+ iam_instance_profile_options = dict(
+ arn=dict(type='str'),
+ name=dict(type='str')
+ )
+ launch_specification_options = dict(
+ security_group_ids=dict(type='list', elements='str'),
+ security_groups=dict(type='list', elements='str'),
+ block_device_mappings=dict(type='list', elements='dict', options=block_device_mappings_options),
+ ebs_optimized=dict(type='bool', default=False),
+ iam_instance_profile=dict(type='dict', options=iam_instance_profile_options),
+ image_id=dict(type='str'),
+ instance_type=dict(type='str'),
+ kernel_id=dict(type='str'),
+ key_name=dict(type='str'),
+ monitoring=dict(type='dict', options=monitoring_options),
+ network_interfaces=dict(type='list', elements='dict', options=network_interface_options, default=[]),
+ placement=dict(type='dict', options=placement_options),
+ ramdisk_id=dict(type='str'),
+ user_data=dict(type='str'),
+ subnet_id=dict(type='str')
+ )
+
+ argument_spec = dict(
+ zone_group=dict(type='str'),
+ client_token=dict(type='str', no_log=False),
+ count=dict(type='int', default=1),
+ interruption=dict(type='str', default="terminate", choices=['hibernate', 'stop', 'terminate']),
+ launch_group=dict(type='str'),
+ launch_specification=dict(type='dict', options=launch_specification_options),
+ state=dict(default='present', choices=['present', 'absent']),
+ spot_price=dict(type='str'),
+ spot_type=dict(default='one-time', choices=["one-time", "persistent"]),
+ tags=dict(type='dict'),
+ # valid_from=dict(type='datetime', default=datetime.datetime.now()),
+ # valid_until=dict(type='datetime', default=(datetime.datetime.now() + datetime.timedelta(minutes=60))
+ spot_instance_request_ids=dict(type='list', elements='str'),
+ )
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ supports_check_mode=True
+ )
+
+ connection = module.client('ec2', retry_decorator=AWSRetry.jittered_backoff())
+
+ state = module.params['state']
+
+ if state == 'present':
+ request_spot_instances(module, connection)
+
+ if state == 'absent':
+ cancel_spot_instance_requests(module, connection)
+
+
+if __name__ == '__main__':
+ main()
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_tag_info.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_tag_info.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_tag_info.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_tag_info.py 2021-11-12 18:13:53.000000000 +0000
@@ -15,7 +15,6 @@
- Lists tags for any EC2 resource.
- Resources are referenced by their resource id (e.g. an instance being i-XXXXXXX, a vpc being vpc-XXXXXX).
- Resource tags can be managed using the M(amazon.aws.ec2_tag) module.
-requirements: [ "boto3", "botocore" ]
options:
resource:
description:
@@ -52,11 +51,6 @@
type: dict
'''
-try:
- from botocore.exceptions import BotoCoreError, ClientError
-except Exception:
- pass # Handled by AnsibleAWSModule
-
from ..module_utils.core import AnsibleAWSModule
from ..module_utils.ec2 import describe_ec2_tags
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_tag.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_tag.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_tag.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_tag.py 2021-11-12 18:13:53.000000000 +0000
@@ -15,7 +15,6 @@
- Creates, modifies and removes tags for any EC2 resource.
- Resources are referenced by their resource id (for example, an instance being i-XXXXXXX, a VPC being vpc-XXXXXXX).
- This module is designed to be used with complex args (tags), see the examples.
-requirements: [ "boto3", "botocore" ]
options:
resource:
description:
@@ -114,15 +113,7 @@
type: dict
'''
-try:
- from botocore.exceptions import BotoCoreError, ClientError
-except ImportError:
- pass # Handled by AnsibleAWSModule
-
from ..module_utils.core import AnsibleAWSModule
-from ..module_utils.ec2 import AWSRetry
-from ..module_utils.ec2 import ansible_dict_to_boto3_tag_list
-from ..module_utils.ec2 import boto3_tag_list_to_ansible_dict
from ..module_utils.ec2 import compare_aws_tags
from ..module_utils.ec2 import describe_ec2_tags
from ..module_utils.ec2 import ensure_ec2_tags
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vol_facts.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vol_facts.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vol_facts.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vol_facts.py 2021-11-12 18:13:53.000000000 +0000
@@ -14,7 +14,6 @@
description:
- Gather information about ec2 volumes in AWS.
- This module was called C(ec2_vol_facts) before Ansible 2.9. The usage did not change.
-requirements: [ boto3 ]
author: "Rob White (@wimnat)"
options:
filters:
@@ -49,6 +48,15 @@
filters:
attachment.status: attached
+# Gather information about all volumes related to an EC2 Instance
+# register information to `volumes` variable
+# Replaces functionality of `amazon.aws.ec2_vol` - `state: list`
+- name: get volume(s) info from EC2 Instance
+ amazon.aws.ec2_vol_info:
+ filters:
+ attachment.instance-id: "i-000111222333"
+ register: volumes
+
'''
RETURN = '''
@@ -59,15 +67,18 @@
returned: always
contains:
attachment_set:
- description: Information about the volume attachments.
- type: dict
- sample: {
+ description:
+ - Information about the volume attachments.
+ - This was changed in version 2.0.0 from a dictionary to a list of dictionaries.
+ type: list
+ elements: dict
+ sample: [{
"attach_time": "2015-10-23T00:22:29.000Z",
"deleteOnTermination": "false",
"device": "/dev/sdf",
"instance_id": "i-8356263c",
"status": "attached"
- }
+ }]
create_time:
description: The time stamp when volume creation was initiated.
type: str
@@ -110,6 +121,10 @@
description: The Availability Zone of the volume.
type: str
sample: "us-east-1b"
+ throughput:
+ description: The throughput that the volume supports, in MiB/s.
+ type: int
+ sample: 131
'''
try:
@@ -129,6 +144,16 @@
attachment = volume["attachments"]
+ attachment_data = []
+ for data in volume["attachments"]:
+ attachment_data.append({
+ 'attach_time': data.get('attach_time', None),
+ 'device': data.get('device', None),
+ 'instance_id': data.get('instance_id', None),
+ 'status': data.get('state', None),
+ 'delete_on_termination': data.get('delete_on_termination', None)
+ })
+
volume_info = {
'create_time': volume["create_time"],
'id': volume["volume_id"],
@@ -140,16 +165,13 @@
'type': volume["volume_type"],
'zone': volume["availability_zone"],
'region': region,
- 'attachment_set': {
- 'attach_time': attachment[0]["attach_time"] if len(attachment) > 0 else None,
- 'device': attachment[0]["device"] if len(attachment) > 0 else None,
- 'instance_id': attachment[0]["instance_id"] if len(attachment) > 0 else None,
- 'status': attachment[0]["state"] if len(attachment) > 0 else None,
- 'delete_on_termination': attachment[0]["delete_on_termination"] if len(attachment) > 0 else None
- },
+ 'attachment_set': attachment_data,
'tags': boto3_tag_list_to_ansible_dict(volume['tags']) if "tags" in volume else None
}
+ if 'throughput' in volume:
+ volume_info['throughput'] = volume["throughput"]
+
return volume_info
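The ``attachment_set`` change in this hunk (a dict became a list of dicts in 2.0.0) boils down to the normalization below; it is a sketch of the loop added above, with a volume that has no attachments now yielding an empty list instead of a dict of Nones:

```python
# Sketch of the attachment_set normalization: each raw boto3 attachment
# becomes a small snake_case dict ('state' is renamed to 'status'), and no
# attachments yields [].

def normalize_attachments(attachments):
    return [{'attach_time': a.get('attach_time'),
             'device': a.get('device'),
             'instance_id': a.get('instance_id'),
             'status': a.get('state'),
             'delete_on_termination': a.get('delete_on_termination')}
            for a in attachments]

raw = [{'attach_time': '2015-10-23T00:22:29.000Z', 'device': '/dev/sdf',
        'instance_id': 'i-8356263c', 'state': 'attached',
        'delete_on_termination': False}]
print(normalize_attachments(raw)[0]['status'])  # attached
print(normalize_attachments([]))                # []
```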
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vol_info.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vol_info.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vol_info.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vol_info.py 2021-11-12 18:13:53.000000000 +0000
@@ -14,7 +14,6 @@
description:
- Gather information about ec2 volumes in AWS.
- This module was called C(ec2_vol_facts) before Ansible 2.9. The usage did not change.
-requirements: [ boto3 ]
author: "Rob White (@wimnat)"
options:
filters:
@@ -49,6 +48,15 @@
filters:
attachment.status: attached
+# Gather information about all volumes related to an EC2 Instance
+# register information to `volumes` variable
+# Replaces functionality of `amazon.aws.ec2_vol` - `state: list`
+- name: get volume(s) info from EC2 Instance
+ amazon.aws.ec2_vol_info:
+ filters:
+ attachment.instance-id: "i-000111222333"
+ register: volumes
+
'''
RETURN = '''
@@ -59,15 +67,18 @@
returned: always
contains:
attachment_set:
- description: Information about the volume attachments.
- type: dict
- sample: {
+ description:
+ - Information about the volume attachments.
+ - This was changed in version 2.0.0 from a dictionary to a list of dictionaries.
+ type: list
+ elements: dict
+ sample: [{
"attach_time": "2015-10-23T00:22:29.000Z",
"deleteOnTermination": "false",
"device": "/dev/sdf",
"instance_id": "i-8356263c",
"status": "attached"
- }
+ }]
create_time:
description: The time stamp when volume creation was initiated.
type: str
@@ -110,6 +121,10 @@
description: The Availability Zone of the volume.
type: str
sample: "us-east-1b"
+ throughput:
+ description: The throughput that the volume supports, in MiB/s.
+ type: int
+ sample: 131
'''
try:
@@ -129,6 +144,16 @@
attachment = volume["attachments"]
+ attachment_data = []
+ for data in volume["attachments"]:
+ attachment_data.append({
+ 'attach_time': data.get('attach_time', None),
+ 'device': data.get('device', None),
+ 'instance_id': data.get('instance_id', None),
+ 'status': data.get('state', None),
+ 'delete_on_termination': data.get('delete_on_termination', None)
+ })
+
volume_info = {
'create_time': volume["create_time"],
'id': volume["volume_id"],
@@ -140,16 +165,13 @@
'type': volume["volume_type"],
'zone': volume["availability_zone"],
'region': region,
- 'attachment_set': {
- 'attach_time': attachment[0]["attach_time"] if len(attachment) > 0 else None,
- 'device': attachment[0]["device"] if len(attachment) > 0 else None,
- 'instance_id': attachment[0]["instance_id"] if len(attachment) > 0 else None,
- 'status': attachment[0]["state"] if len(attachment) > 0 else None,
- 'delete_on_termination': attachment[0]["delete_on_termination"] if len(attachment) > 0 else None
- },
+ 'attachment_set': attachment_data,
'tags': boto3_tag_list_to_ansible_dict(volume['tags']) if "tags" in volume else None
}
+ if 'throughput' in volume:
+ volume_info['throughput'] = volume["throughput"]
+
return volume_info
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vol.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vol.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vol.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vol.py 2021-11-12 18:13:53.000000000 +0000
@@ -92,7 +92,7 @@
version_added: 1.5.0
modify_volume:
description:
- - The volume won't be modify unless this key is C(true).
+ - The volume won't be modified unless this key is C(true).
type: bool
default: false
version_added: 1.4.0
@@ -101,14 +101,20 @@
- Volume throughput in MB/s.
- This parameter is only valid for gp3 volumes.
- Valid range is from 125 to 1000.
+ - Requires at least botocore version 1.19.27.
type: int
version_added: 1.4.0
+ multi_attach:
+ description:
+ - If set to C(yes), Multi-Attach will be enabled when creating the volume.
+ - When you create a new volume, Multi-Attach is disabled by default.
+ - This parameter is supported with io1 and io2 volumes only.
+ type: bool
+ version_added: 2.0.0
author: "Lester Wade (@lwade)"
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
-
-requirements: [ boto3>=1.16.33 ]
'''
EXAMPLES = '''
@@ -148,7 +154,6 @@
# Example: Launch an instance and then add a volume if not already attached
# * Volume will be created with the given name if not already created.
# * Nothing will happen if the volume is already attached.
-# * Requires Ansible 2.0
- amazon.aws.ec2:
keypair: "{{ keypair }}"
@@ -190,6 +195,14 @@
volume_type: gp2
device_name: /dev/xvdf
+# Create new volume with multi-attach enabled
+- amazon.aws.ec2_vol:
+ zone: XXXXXX
+ multi_attach: true
+ volume_size: 4
+ volume_type: io1
+ iops: 102
+
# Attach an existing volume to instance. The volume will be deleted upon instance termination.
- amazon.aws.ec2_vol:
instance: XXXXXX
@@ -219,13 +232,13 @@
returned: when success
type: str
sample: {
- "attachment_set": {
+ "attachment_set": [{
"attach_time": "2015-10-23T00:22:29.000Z",
"deleteOnTermination": "false",
"device": "/dev/sdf",
"instance_id": "i-8356263c",
"status": "attached"
- },
+ }],
"create_time": "2015-10-21T14:36:08.870Z",
"encrypted": false,
"id": "vol-35b333d9",
@@ -247,8 +260,6 @@
from ..module_utils.ec2 import camel_dict_to_snake_dict
from ..module_utils.ec2 import boto3_tag_list_to_ansible_dict
from ..module_utils.ec2 import ansible_dict_to_boto3_filter_list
-from ..module_utils.ec2 import ansible_dict_to_boto3_tag_list
-from ..module_utils.ec2 import compare_aws_tags
from ..module_utils.ec2 import describe_ec2_tags
from ..module_utils.ec2 import ensure_ec2_tags
from ..module_utils.ec2 import AWSRetry
@@ -409,15 +420,27 @@
throughput_changed = True
req_obj['Throughput'] = target_throughput
- changed = iops_changed or size_changed or type_changed or throughput_changed
+ target_multi_attach = module.params.get('multi_attach')
+ multi_attach_changed = False
+ if target_multi_attach is not None:
+ original_multi_attach = volume['multi_attach_enabled']
+ if target_multi_attach != original_multi_attach:
+ multi_attach_changed = True
+ req_obj['MultiAttachEnabled'] = target_multi_attach
+
+ changed = iops_changed or size_changed or type_changed or throughput_changed or multi_attach_changed
if changed:
+ if module.check_mode:
+ module.exit_json(changed=True, msg='Would have updated volume if not in check mode.')
response = ec2_conn.modify_volume(**req_obj)
volume['size'] = response.get('VolumeModification').get('TargetSize')
volume['volume_type'] = response.get('VolumeModification').get('TargetVolumeType')
volume['iops'] = response.get('VolumeModification').get('TargetIops')
- volume['throughput'] = response.get('VolumeModification').get('TargetThroughput')
+ volume['multi_attach_enabled'] = response.get('VolumeModification').get('TargetMultiAttachEnabled')
+ if module.botocore_at_least("1.19.27"):
+ volume['throughput'] = response.get('VolumeModification').get('TargetThroughput')
return volume, changed
@@ -431,9 +454,13 @@
volume_type = module.params.get('volume_type')
snapshot = module.params.get('snapshot')
throughput = module.params.get('throughput')
+ multi_attach = module.params.get('multi_attach')
volume = get_volume(module, ec2_conn)
+ if module.check_mode:
+ module.exit_json(changed=True, msg='Would have created a volume if not in check mode.')
+
if volume is None:
try:
@@ -458,6 +485,8 @@
if throughput:
additional_params['Throughput'] = int(throughput)
+ if multi_attach:
+ additional_params['MultiAttachEnabled'] = True
create_vol_response = ec2_conn.create_volume(
aws_retry=True,
@@ -489,13 +518,21 @@
attachment_data = get_attachment_data(volume_dict, wanted_state='attached')
if attachment_data:
- if attachment_data.get('instance_id', None) != instance_dict['instance_id']:
- module.fail_json(msg="Volume {0} is already attached to another instance: {1}".format(volume_dict['volume_id'],
- attachment_data.get('instance_id', None)))
- else:
- return volume_dict, changed
+ if module.check_mode:
+ if attachment_data[0].get('status') in ['attached', 'attaching']:
+ module.exit_json(changed=False, msg='IN CHECK MODE - volume already attached to instance: {0}.'.format(
+ attachment_data[0].get('instance_id', None)))
+ if not volume_dict['multi_attach_enabled']:
+ # volumes without MultiAttach Enabled can be attached to 1 instance only
+ if attachment_data[0].get('instance_id', None) != instance_dict['instance_id']:
+ module.fail_json(msg="Volume {0} is already attached to another instance: {1}."
+ .format(volume_dict['volume_id'], attachment_data[0].get('instance_id', None)))
+ else:
+ return volume_dict, changed
try:
+ if module.check_mode:
+ module.exit_json(changed=True, msg='Would have attached volume if not in check mode.')
attach_response = ec2_conn.attach_volume(aws_retry=True, Device=device_name,
InstanceId=instance_dict['instance_id'],
VolumeId=volume_dict['volume_id'])
@@ -557,17 +594,22 @@
def get_attachment_data(volume_dict, wanted_state=None):
changed = False
- attachment_data = {}
+ attachment_data = []
if not volume_dict:
return attachment_data
- for data in volume_dict.get('attachments', []):
- if wanted_state and wanted_state == data['state']:
- attachment_data = data
- break
- else:
- # No filter, return first
- attachment_data = data
- break
+ resource = volume_dict.get('attachments', [])
+ if wanted_state:
+ # filter 'state', return attachment matching wanted state
+ resource = [data for data in resource if data['state'] == wanted_state]
+
+ for data in resource:
+ attachment_data.append({
+ 'attach_time': data.get('attach_time', None),
+ 'device': data.get('device', None),
+ 'instance_id': data.get('instance_id', None),
+ 'status': data.get('state', None),
+ 'delete_on_termination': data.get('delete_on_termination', None)
+ })
return attachment_data
@@ -576,8 +618,11 @@
changed = False
attachment_data = get_attachment_data(volume_dict, wanted_state='attached')
- if attachment_data:
- ec2_conn.detach_volume(aws_retry=True, VolumeId=volume_dict['volume_id'])
+ # The ID of the instance must be specified if you are detaching a Multi-Attach enabled volume.
+ for attachment in attachment_data:
+ if module.check_mode:
+ module.exit_json(changed=True, msg='Would have detached volume if not in check mode.')
+ ec2_conn.detach_volume(aws_retry=True, InstanceId=attachment['instance_id'], VolumeId=volume_dict['volume_id'])
waiter = ec2_conn.get_waiter('volume_available')
waiter.wait(
VolumeIds=[volume_dict['volume_id']],
@@ -588,7 +633,7 @@
return volume_dict, changed
-def get_volume_info(volume, tags=None):
+def get_volume_info(module, volume, tags=None):
if not tags:
tags = boto3_tag_list_to_ansible_dict(volume.get('tags'))
attachment_data = get_attachment_data(volume)
@@ -602,17 +647,14 @@
'status': volume.get('state'),
'type': volume.get('volume_type'),
'zone': volume.get('availability_zone'),
- 'throughput': volume.get('throughput'),
- 'attachment_set': {
- 'attach_time': attachment_data.get('attach_time', None),
- 'device': attachment_data.get('device', None),
- 'instance_id': attachment_data.get('instance_id', None),
- 'status': attachment_data.get('state', None),
- 'deleteOnTermination': attachment_data.get('delete_on_termination', None)
- },
+ 'attachment_set': attachment_data,
+ 'multi_attach_enabled': volume.get('multi_attach_enabled'),
'tags': tags
}
+ if module.botocore_at_least("1.19.27"):
+ volume_info['throughput'] = volume.get('throughput')
+
return volume_info
@@ -632,6 +674,8 @@
def ensure_tags(module, connection, res_id, res_type, tags, purge_tags):
+ if module.check_mode:
+ return {}, True
changed = ensure_ec2_tags(connection, module, res_id, res_type, tags, purge_tags, ['InvalidVolume.NotFound'])
final_tags = describe_ec2_tags(connection, module, res_id, res_type)
@@ -657,6 +701,7 @@
modify_volume=dict(default=False, type='bool'),
throughput=dict(type='int'),
purge_tags=dict(type='bool', default=False),
+ multi_attach=dict(type='bool'),
)
module = AnsibleAWSModule(
@@ -665,6 +710,7 @@
['volume_type', 'io1', ['iops']],
['volume_type', 'io2', ['iops']],
],
+ supports_check_mode=True,
)
param_id = module.params.get('id')
@@ -679,11 +725,15 @@
iops = module.params.get('iops')
volume_type = module.params.get('volume_type')
throughput = module.params.get('throughput')
+ multi_attach = module.params.get('multi_attach')
if state == 'list':
module.deprecate(
'Using the "list" state has been deprecated. Please use the ec2_vol_info module instead', date='2022-06-01', collection_name='amazon.aws')
+ if module.params.get('throughput'):
+ module.require_botocore_at_least('1.19.27', reason='to set the throughput for a volume')
+
# Ensure we have the zone or can get the zone
if instance is None and zone is None and state == 'present':
module.fail_json(msg="You must specify either instance or zone")
@@ -711,6 +761,9 @@
if throughput < 125 or throughput > 1000:
module.fail_json(msg='Throughput values must be between 125 and 1000.')
+ if multi_attach is True and volume_type not in ('io1', 'io2'):
+ module.fail_json(msg='multi_attach is only supported for io1 and io2 volumes.')
+
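The new parameter checks above (throughput range, multi_attach volume types) can be sketched as a standalone validator; ``validate`` is a hypothetical helper written for this example, not a function in the module:

```python
# Standalone sketch of the validation added above: throughput is only valid
# between 125 and 1000 MiB/s, and multi_attach only pairs with io1/io2
# volume types. validate() is hypothetical, not part of ec2_vol.

def validate(throughput=None, multi_attach=None, volume_type=None):
    if throughput is not None and not 125 <= throughput <= 1000:
        return 'Throughput values must be between 125 and 1000.'
    if multi_attach and volume_type not in ('io1', 'io2'):
        return 'multi_attach is only supported for io1 and io2 volumes.'
    return None  # parameters are acceptable

print(validate(throughput=100))
print(validate(multi_attach=True, volume_type='gp3'))
print(validate(throughput=500, multi_attach=True, volume_type='io2'))  # None
```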
# Set changed flag
changed = False
@@ -721,7 +774,7 @@
vols = get_volumes(module, ec2_conn)
for v in vols:
- returned_volumes.append(get_volume_info(v))
+ returned_volumes.append(get_volume_info(module, v))
module.exit_json(changed=False, volumes=returned_volumes)
@@ -771,8 +824,6 @@
changed=False
)
- attach_state_changed = False
-
if volume:
volume, changed = update_volume(module, ec2_conn, volume)
else:
@@ -790,17 +841,19 @@
attach_changed = False
# Add device, volume_id and volume_type parameters separately to maintain backward compatibility
- volume_info = get_volume_info(volume, tags=final_tags)
+ volume_info = get_volume_info(module, volume, tags=final_tags)
if tags_changed or attach_changed:
changed = True
- module.exit_json(changed=changed, volume=volume_info, device=volume_info['attachment_set']['device'],
+ module.exit_json(changed=changed, volume=volume_info, device=device_name,
volume_id=volume_info['id'], volume_type=volume_info['type'])
elif state == 'absent':
if not name and not param_id:
module.fail_json('A volume name or id is required for deletion')
if volume:
+ if module.check_mode:
+ module.exit_json(changed=True, msg='Would have deleted volume if not in check mode.')
detach_volume(module, ec2_conn, volume_dict=volume)
changed = delete_volume(module, ec2_conn, volume_id=volume['volume_id'])
module.exit_json(changed=changed)
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_dhcp_option_facts.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_dhcp_option_facts.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_dhcp_option_facts.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_dhcp_option_facts.py 2021-11-12 18:13:53.000000000 +0000
@@ -14,7 +14,6 @@
description:
- Gather information about dhcp options sets in AWS.
- This module was called C(ec2_vpc_dhcp_option_facts) before Ansible 2.9. The usage did not change.
-requirements: [ boto3 ]
author: "Nick Aslanidis (@naslanidis)"
options:
filters:
@@ -69,10 +68,68 @@
RETURN = '''
dhcp_options:
- description: The dhcp option sets for the account
+ description: The DHCP options created, associated or found
returned: always
type: list
-
+ elements: dict
+ contains:
+ dhcp_configurations:
+ description: The DHCP configuration for the option set
+ type: list
+ sample:
+ - '{"key": "ntp-servers", "values": [{"value": "10.0.0.2"}, {"value": "10.0.1.2"}]}'
+ - '{"key": "netbios-name-servers", "values": [{"value": "10.0.0.1"}, {"value": "10.0.1.1"}]}'
+ dhcp_options_id:
+ description: The AWS resource ID of the primary DHCP options set created or found.
+ type: str
+ sample: "dopt-0955331de6a20dd07"
+ owner_id:
+ description: The ID of the AWS account that owns the DHCP options set.
+ type: str
+ sample: "012345678912"
+ tags:
+ description: The tags applied to the DHCP options set.
+ type: list
+ sample:
+ - '{"Key": "CreatedBy", "Value": "ansible-test"}'
+ - '{"Key": "Collection", "Value": "amazon.aws"}'
+dhcp_config:
+ description: The boto2-style DHCP options created, associated or found. Provided for consistency with ec2_vpc_dhcp_option's `new_config`.
+ returned: always
+ type: list
+ contains:
+ domain-name-servers:
+ description: The IP addresses of up to four domain name servers, or AmazonProvidedDNS.
+ returned: when available
+ type: list
+ sample:
+ - 10.0.0.1
+ - 10.0.1.1
+ domain-name:
+ description: The domain name for hosts in the DHCP option sets
+ returned: when available
+ type: list
+ sample:
+ - "my.example.com"
+ ntp-servers:
+ description: The IP addresses of up to four Network Time Protocol (NTP) servers.
+ returned: when available
+ type: list
+ sample:
+ - 10.0.0.1
+ - 10.0.1.1
+ netbios-name-servers:
+ description: The IP addresses of up to four NetBIOS name servers.
+ returned: when available
+ type: list
+ sample:
+ - 10.0.0.1
+ - 10.0.1.1
+ netbios-node-type:
+ description: The NetBIOS node type (1, 2, 4, or 8).
+ returned: when available
+ type: str
+ sample: 2
changed:
description: True if listing the dhcp options succeeds
type: bool
@@ -90,6 +147,7 @@
from ..module_utils.ec2 import AWSRetry
from ..module_utils.ec2 import ansible_dict_to_boto3_filter_list
from ..module_utils.ec2 import boto3_tag_list_to_ansible_dict
+from ..module_utils.ec2 import normalize_ec2_vpc_dhcp_config
def get_dhcp_options_info(dhcp_option):
@@ -113,8 +171,9 @@
except botocore.exceptions.ClientError as e:
module.fail_json_aws(e)
- return [camel_dict_to_snake_dict(get_dhcp_options_info(option))
- for option in all_dhcp_options['DhcpOptions']]
+ normalized_config = [normalize_ec2_vpc_dhcp_config(config['DhcpConfigurations']) for config in all_dhcp_options['DhcpOptions']]
+ raw_config = [camel_dict_to_snake_dict(get_dhcp_options_info(option), ignore_list=['Tags']) for option in all_dhcp_options['DhcpOptions']]
+ return raw_config, normalized_config
def main():
@@ -135,9 +194,9 @@
client = module.client('ec2', retry_decorator=AWSRetry.jittered_backoff())
# call your function here
- results = list_dhcp_options(client, module)
+ results, normalized_config = list_dhcp_options(client, module)
- module.exit_json(dhcp_options=results)
+ module.exit_json(dhcp_options=results, dhcp_config=normalized_config)
if __name__ == '__main__':
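The `dhcp_config` return value documented above is a boto2-style flattening of the boto3 `DhcpConfigurations` list. As a rough, self-contained sketch of what a helper like `normalize_ec2_vpc_dhcp_config` produces (the real module_utils implementation may differ in detail), the transformation looks like this:

```python
def normalize_dhcp_config(dhcp_configurations):
    """Flatten a boto3-style DhcpConfigurations list into a boto2-style dict.

    Simplified sketch of the behaviour documented for dhcp_config above; the
    actual module_utils helper may handle more edge cases.
    """
    normalized = {}
    for option in dhcp_configurations:
        values = [entry['Value'] for entry in option['Values']]
        if option['Key'] == 'netbios-node-type':
            # documented above as a plain string, not a list
            normalized[option['Key']] = values[0]
        else:
            normalized[option['Key']] = values
    return normalized


config = [
    {'Key': 'domain-name', 'Values': [{'Value': 'my.example.com'}]},
    {'Key': 'ntp-servers', 'Values': [{'Value': '10.0.0.1'}, {'Value': '10.0.1.1'}]},
    {'Key': 'netbios-node-type', 'Values': [{'Value': '2'}]},
]
print(normalize_dhcp_config(config))
```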
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_dhcp_option_info.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_dhcp_option_info.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_dhcp_option_info.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_dhcp_option_info.py 2021-11-12 18:13:53.000000000 +0000
@@ -14,7 +14,6 @@
description:
- Gather information about dhcp options sets in AWS.
- This module was called C(ec2_vpc_dhcp_option_facts) before Ansible 2.9. The usage did not change.
-requirements: [ boto3 ]
author: "Nick Aslanidis (@naslanidis)"
options:
filters:
@@ -69,10 +68,68 @@
RETURN = '''
dhcp_options:
- description: The dhcp option sets for the account
+ description: The DHCP options created, associated or found
returned: always
type: list
-
+ elements: dict
+ contains:
+ dhcp_configurations:
+ description: The DHCP configuration for the option set
+ type: list
+ sample:
+        - '{"key": "ntp-servers", "values": [{"value": "10.0.0.2"}, {"value": "10.0.1.2"}]}'
+        - '{"key": "netbios-name-servers", "values": [{"value": "10.0.0.1"}, {"value": "10.0.1.1"}]}'
+ dhcp_options_id:
+      description: The AWS resource ID of the primary DHCP options set created or found
+ type: str
+ sample: "dopt-0955331de6a20dd07"
+ owner_id:
+ description: The ID of the AWS account that owns the DHCP options set.
+ type: str
+ sample: 012345678912
+ tags:
+ description: The tags to be applied to a DHCP options set
+ type: list
+ sample:
+ - '{"Key": "CreatedBy", "Value": "ansible-test"}'
+ - '{"Key": "Collection", "Value": "amazon.aws"}'
+dhcp_config:
+ description: The boto2-style DHCP options created, associated or found. Provided for consistency with ec2_vpc_dhcp_option's `new_config`.
+ returned: always
+ type: list
+ contains:
+ domain-name-servers:
+ description: The IP addresses of up to four domain name servers, or AmazonProvidedDNS.
+ returned: when available
+ type: list
+ sample:
+ - 10.0.0.1
+ - 10.0.1.1
+ domain-name:
+ description: The domain name for hosts in the DHCP option sets
+ returned: when available
+ type: list
+ sample:
+ - "my.example.com"
+ ntp-servers:
+ description: The IP addresses of up to four Network Time Protocol (NTP) servers.
+ returned: when available
+ type: list
+ sample:
+ - 10.0.0.1
+ - 10.0.1.1
+ netbios-name-servers:
+ description: The IP addresses of up to four NetBIOS name servers.
+ returned: when available
+ type: list
+ sample:
+ - 10.0.0.1
+ - 10.0.1.1
+ netbios-node-type:
+ description: The NetBIOS node type (1, 2, 4, or 8).
+ returned: when available
+ type: str
+ sample: 2
changed:
description: True if listing the dhcp options succeeds
type: bool
@@ -90,6 +147,7 @@
from ..module_utils.ec2 import AWSRetry
from ..module_utils.ec2 import ansible_dict_to_boto3_filter_list
from ..module_utils.ec2 import boto3_tag_list_to_ansible_dict
+from ..module_utils.ec2 import normalize_ec2_vpc_dhcp_config
def get_dhcp_options_info(dhcp_option):
@@ -113,8 +171,9 @@
except botocore.exceptions.ClientError as e:
module.fail_json_aws(e)
- return [camel_dict_to_snake_dict(get_dhcp_options_info(option))
- for option in all_dhcp_options['DhcpOptions']]
+ normalized_config = [normalize_ec2_vpc_dhcp_config(config['DhcpConfigurations']) for config in all_dhcp_options['DhcpOptions']]
+ raw_config = [camel_dict_to_snake_dict(get_dhcp_options_info(option), ignore_list=['Tags']) for option in all_dhcp_options['DhcpOptions']]
+ return raw_config, normalized_config
def main():
@@ -135,9 +194,9 @@
client = module.client('ec2', retry_decorator=AWSRetry.jittered_backoff())
# call your function here
- results = list_dhcp_options(client, module)
+ results, normalized_config = list_dhcp_options(client, module)
- module.exit_json(dhcp_options=results)
+ module.exit_json(dhcp_options=results, dhcp_config=normalized_config)
if __name__ == '__main__':
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_dhcp_option.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_dhcp_option.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_dhcp_option.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_dhcp_option.py 2021-11-12 18:13:53.000000000 +0000
@@ -81,6 +81,12 @@
if the resource_id is provided. (options must match)
aliases: [ 'resource_tags']
type: dict
+ purge_tags:
+ description:
+ - Remove tags not listed in I(tags).
+ type: bool
+ default: true
+ version_added: 2.0.0
dhcp_options_id:
description:
- The resource_id of an existing DHCP options set.
@@ -98,33 +104,79 @@
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
-
-requirements:
- - boto
'''
RETURN = """
-new_options:
+changed:
+ description: Whether the dhcp options were changed
+ type: bool
+ returned: always
+dhcp_options:
description: The DHCP options created, associated or found
- returned: when appropriate
+ returned: when available
type: dict
- sample:
- domain-name-servers:
- - 10.0.0.1
- - 10.0.1.1
- netbois-name-servers:
- - 10.0.0.1
- - 10.0.1.1
- netbios-node-type: 2
- domain-name: "my.example.com"
+ contains:
+ dhcp_configurations:
+ description: The DHCP configuration for the option set
+ type: list
+ sample:
+        - '{"key": "ntp-servers", "values": [{"value": "10.0.0.2"}, {"value": "10.0.1.2"}]}'
+        - '{"key": "netbios-name-servers", "values": [{"value": "10.0.0.1"}, {"value": "10.0.1.1"}]}'
+ dhcp_options_id:
+      description: The AWS resource ID of the primary DHCP options set created or found
+ type: str
+ sample: "dopt-0955331de6a20dd07"
+ owner_id:
+ description: The ID of the AWS account that owns the DHCP options set.
+ type: str
+ sample: 012345678912
+ tags:
+ description: The tags to be applied to a DHCP options set
+ type: list
+ sample:
+ - '{"Key": "CreatedBy", "Value": "ansible-test"}'
+ - '{"Key": "Collection", "Value": "amazon.aws"}'
dhcp_options_id:
description: The AWS resource ID of the primary DHCP options set created, found or removed
type: str
returned: when available
-changed:
- description: Whether the dhcp options were changed
- type: bool
- returned: always
+dhcp_config:
+ description: The boto2-style DHCP options created, associated or found
+ returned: when available
+ type: dict
+ contains:
+ domain-name-servers:
+ description: The IP addresses of up to four domain name servers, or AmazonProvidedDNS.
+ returned: when available
+ type: list
+ sample:
+ - 10.0.0.1
+ - 10.0.1.1
+ domain-name:
+ description: The domain name for hosts in the DHCP option sets
+ returned: when available
+ type: list
+ sample:
+ - "my.example.com"
+ ntp-servers:
+ description: The IP addresses of up to four Network Time Protocol (NTP) servers.
+ returned: when available
+ type: list
+ sample:
+ - 10.0.0.1
+ - 10.0.1.1
+ netbios-name-servers:
+ description: The IP addresses of up to four NetBIOS name servers.
+ returned: when available
+ type: list
+ sample:
+ - 10.0.0.1
+ - 10.0.1.1
+ netbios-node-type:
+ description: The NetBIOS node type (1, 2, 4, or 8).
+ returned: when available
+ type: str
+ sample: 2
"""
EXAMPLES = """
@@ -190,91 +242,204 @@
"""
-import collections
-from time import sleep, time
-
try:
- import boto.vpc
- import boto.ec2
- from boto.exception import EC2ResponseError
+ import botocore
except ImportError:
- pass # Taken care of by ec2.HAS_BOTO
+ pass # Handled by AnsibleAWSModule
from ..module_utils.core import AnsibleAWSModule
-from ..module_utils.ec2 import HAS_BOTO
-from ..module_utils.ec2 import connect_to_aws
-from ..module_utils.ec2 import get_aws_connection_info
+from ..module_utils.core import is_boto3_error_code
+from ..module_utils.ec2 import AWSRetry
+from ..module_utils.ec2 import camel_dict_to_snake_dict
+from ..module_utils.ec2 import normalize_ec2_vpc_dhcp_config
+from ..module_utils.ec2 import ensure_ec2_tags
+from ..module_utils.tagging import boto3_tag_specifications
+from ..module_utils.tagging import ansible_dict_to_boto3_tag_list
+from ..module_utils.tagging import boto3_tag_list_to_ansible_dict
+
+
+def fetch_dhcp_options_for_vpc(client, module, vpc_id):
+ try:
+ vpcs = client.describe_vpcs(aws_retry=True, VpcIds=[vpc_id])['Vpcs']
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Unable to describe vpc {0}".format(vpc_id))
+ if len(vpcs) != 1:
+ return None
+ try:
+ dhcp_options = client.describe_dhcp_options(aws_retry=True, DhcpOptionsIds=[vpcs[0]['DhcpOptionsId']])
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Unable to describe dhcp option {0}".format(vpcs[0]['DhcpOptionsId']))
+
+ if len(dhcp_options['DhcpOptions']) != 1:
+ return None
+ return dhcp_options['DhcpOptions'][0]['DhcpConfigurations'], dhcp_options['DhcpOptions'][0]['DhcpOptionsId']
-def get_resource_tags(vpc_conn, resource_id):
- return dict((t.name, t.value) for t in vpc_conn.get_all_tags(filters={'resource-id': resource_id}))
+def remove_dhcp_options_by_id(client, module, dhcp_options_id):
+ changed = False
+ # First, check if this dhcp option is associated to any other vpcs
+ try:
+ associations = client.describe_vpcs(aws_retry=True, Filters=[{'Name': 'dhcp-options-id', 'Values': [dhcp_options_id]}])
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Unable to describe VPC associations for dhcp option id {0}".format(dhcp_options_id))
+ if len(associations['Vpcs']) > 0:
+ return changed
-def retry_not_found(to_call, *args, **kwargs):
- start_time = time()
- while time() < start_time + 300:
+ changed = True
+ if not module.check_mode:
try:
- return to_call(*args, **kwargs)
- except EC2ResponseError as e:
- if e.error_code in ['InvalidDhcpOptionID.NotFound', 'InvalidDhcpOptionsID.NotFound']:
- sleep(3)
- continue
- raise e
+ client.delete_dhcp_options(aws_retry=True, DhcpOptionsId=dhcp_options_id)
+ except is_boto3_error_code('InvalidDhcpOptionsID.NotFound'):
+ return False
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e: # pylint: disable=duplicate-except
+ module.fail_json_aws(e, msg="Unable to delete dhcp option {0}".format(dhcp_options_id))
+
+ return changed
-def ensure_tags(module, vpc_conn, resource_id, tags, add_only, check_mode):
+def match_dhcp_options(client, module, new_config):
+ """
+ Returns a DhcpOptionsId if the module parameters match; else None
+ Filter by tags, if any are specified
+ """
try:
- cur_tags = get_resource_tags(vpc_conn, resource_id)
- if tags == cur_tags:
- return {'changed': False, 'tags': cur_tags}
+ all_dhcp_options = client.describe_dhcp_options(aws_retry=True)
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Unable to describe dhcp options")
+
+ for dopts in all_dhcp_options['DhcpOptions']:
+ if module.params['tags']:
+ # If we were given tags, try to match on them
+ boto_tags = ansible_dict_to_boto3_tag_list(module.params['tags'])
+ if dopts['DhcpConfigurations'] == new_config and dopts['Tags'] == boto_tags:
+ return True, dopts['DhcpOptionsId']
+ elif dopts['DhcpConfigurations'] == new_config:
+ return True, dopts['DhcpOptionsId']
- to_delete = dict((k, cur_tags[k]) for k in cur_tags if k not in tags)
- if to_delete and not add_only:
- retry_not_found(vpc_conn.delete_tags, resource_id, to_delete, dry_run=check_mode)
+ return False, None
- to_add = dict((k, tags[k]) for k in tags if k not in cur_tags)
- if to_add:
- retry_not_found(vpc_conn.create_tags, resource_id, to_add, dry_run=check_mode)
- latest_tags = get_resource_tags(vpc_conn, resource_id)
- return {'changed': True, 'tags': latest_tags}
- except EC2ResponseError as e:
- module.fail_json_aws(e, msg='Failed to modify tags')
+def create_dhcp_config(module):
+ """
+ Convert provided parameters into a DhcpConfigurations list that conforms to what the API returns:
+ https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeDhcpOptions.html
+ [{'Key': 'domain-name',
+ 'Values': [{'Value': 'us-west-2.compute.internal'}]},
+ {'Key': 'domain-name-servers',
+ 'Values': [{'Value': 'AmazonProvidedDNS'}]},
+ ...],
+ """
+ new_config = []
+ params = module.params
+ if params['domain_name'] is not None:
+ new_config.append({'Key': 'domain-name', 'Values': [{'Value': params['domain_name']}]})
+ if params['dns_servers'] is not None:
+ dns_server_list = []
+ for server in params['dns_servers']:
+ dns_server_list.append({'Value': server})
+ new_config.append({'Key': 'domain-name-servers', 'Values': dns_server_list})
+ if params['ntp_servers'] is not None:
+ ntp_server_list = []
+ for server in params['ntp_servers']:
+ ntp_server_list.append({'Value': server})
+ new_config.append({'Key': 'ntp-servers', 'Values': ntp_server_list})
+ if params['netbios_name_servers'] is not None:
+ netbios_server_list = []
+ for server in params['netbios_name_servers']:
+ netbios_server_list.append({'Value': server})
+ new_config.append({'Key': 'netbios-name-servers', 'Values': netbios_server_list})
+ if params['netbios_node_type'] is not None:
+ new_config.append({'Key': 'netbios-node-type', 'Values': params['netbios_node_type']})
+
+ return new_config
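The parameter-to-`DhcpConfigurations` mapping performed by `create_dhcp_config()` above can be illustrated standalone. The `params` dict below is a hypothetical stand-in for `module.params`, using the module's documented option names:

```python
def build_dhcp_config(params):
    """Map playbook-style parameters onto the API's DhcpConfigurations shape.

    Mirrors the create_dhcp_config() logic above without the AnsibleAWSModule
    wrapper; `params` stands in for module.params.
    """
    new_config = []
    if params.get('domain_name') is not None:
        new_config.append({'Key': 'domain-name',
                           'Values': [{'Value': params['domain_name']}]})
    # list-valued options all follow the same {'Value': ...} wrapping
    for param, key in [('dns_servers', 'domain-name-servers'),
                       ('ntp_servers', 'ntp-servers'),
                       ('netbios_name_servers', 'netbios-name-servers')]:
        if params.get(param) is not None:
            new_config.append({'Key': key,
                               'Values': [{'Value': v} for v in params[param]]})
    return new_config


params = {'domain_name': 'my.example.com', 'dns_servers': ['10.0.0.1', '10.0.1.1']}
print(build_dhcp_config(params))
```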
-def fetch_dhcp_options_for_vpc(vpc_conn, vpc_id):
+def create_dhcp_option_set(client, module, new_config):
"""
- Returns the DHCP options object currently associated with the requested VPC ID using the VPC
- connection variable.
+ A CreateDhcpOptions object looks different than the object we create in create_dhcp_config()
+ This is the only place we use it, so create it now
+ https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateDhcpOptions.html
+ We have to do this after inheriting any existing_config, so we need to start with the object
+ that we made in create_dhcp_config().
+ normalize_config() gives us the nicest format to work with for this.
"""
- vpcs = vpc_conn.get_all_vpcs(vpc_ids=[vpc_id])
- if len(vpcs) != 1 or vpcs[0].dhcp_options_id == "default":
- return None
- dhcp_options = vpc_conn.get_all_dhcp_options(dhcp_options_ids=[vpcs[0].dhcp_options_id])
- if len(dhcp_options) != 1:
- return None
- return dhcp_options[0]
+ changed = True
+ desired_config = normalize_ec2_vpc_dhcp_config(new_config)
+ create_config = []
+ tags_list = []
+
+ for option in ['domain-name', 'domain-name-servers', 'ntp-servers', 'netbios-name-servers']:
+ if desired_config.get(option):
+ create_config.append({'Key': option, 'Values': desired_config[option]})
+ if desired_config.get('netbios-node-type'):
+ # We need to listify this one
+ create_config.append({'Key': 'netbios-node-type', 'Values': [desired_config['netbios-node-type']]})
+
+ if module.params.get('tags'):
+ tags_list = boto3_tag_specifications(module.params['tags'], ['dhcp-options'])
+
+ try:
+ if not module.check_mode:
+ dhcp_options = client.create_dhcp_options(aws_retry=True, DhcpConfigurations=create_config, TagSpecifications=tags_list)
+ return changed, dhcp_options['DhcpOptions']['DhcpOptionsId']
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Unable to create dhcp option set")
+
+ return changed, None
+
+
+def find_opt_index(config, option):
+ return (next((i for i, item in enumerate(config) if item["Key"] == option), None))
-def match_dhcp_options(vpc_conn, tags=None, options=None):
+def inherit_dhcp_config(existing_config, new_config):
"""
- Finds a DHCP Options object that optionally matches the tags and options provided
+ Compare two DhcpConfigurations lists and apply existing options to unset parameters
+
+ If there's an existing option config and the new option is not set or it's none,
+ inherit the existing config.
+ The configs are unordered lists of dicts with non-unique keys, so we have to find
+ the right list index for a given config option first.
"""
- dhcp_options = vpc_conn.get_all_dhcp_options()
- for dopts in dhcp_options:
- if (not tags) or get_resource_tags(vpc_conn, dopts.id) == tags:
- if (not options) or dopts.options == options:
- return(True, dopts)
- return(False, None)
+ changed = False
+ for option in ['domain-name', 'domain-name-servers', 'ntp-servers',
+ 'netbios-name-servers', 'netbios-node-type']:
+ existing_index = find_opt_index(existing_config, option)
+ new_index = find_opt_index(new_config, option)
+ # `if existing_index` evaluates to False on index 0, so be very specific and verbose
+ if existing_index is not None and new_index is None:
+ new_config.append(existing_config[existing_index])
+ changed = True
+ return changed, new_config
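The merge rule in `inherit_dhcp_config()` above, combined with the `find_opt_index()` helper, can be exercised in isolation. This sketch repeats the same logic outside the module so the inherit behaviour is easy to see:

```python
def find_opt_index(config, option):
    # index of the entry whose Key matches, or None; keys live in an
    # unordered list of dicts, so a linear scan is required
    return next((i for i, item in enumerate(config) if item['Key'] == option), None)


def inherit_config(existing_config, new_config):
    """Copy options that are set in existing_config but absent from new_config.

    Same merge rule as inherit_dhcp_config() above, shown in isolation.
    """
    changed = False
    for option in ('domain-name', 'domain-name-servers', 'ntp-servers',
                   'netbios-name-servers', 'netbios-node-type'):
        existing_index = find_opt_index(existing_config, option)
        # `if existing_index` would be False for index 0, so test for None
        if existing_index is not None and find_opt_index(new_config, option) is None:
            new_config.append(existing_config[existing_index])
            changed = True
    return changed, new_config


existing = [{'Key': 'ntp-servers', 'Values': [{'Value': '10.0.0.2'}]}]
new = [{'Key': 'domain-name', 'Values': [{'Value': 'my.example.com'}]}]
changed, merged = inherit_config(existing, new)
```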
-def remove_dhcp_options_by_id(vpc_conn, dhcp_options_id):
- associations = vpc_conn.get_all_vpcs(filters={'dhcpOptionsId': dhcp_options_id})
- if len(associations) > 0:
- return False
- else:
- vpc_conn.delete_dhcp_options(dhcp_options_id)
- return True
+
+def get_dhcp_options_info(client, module, dhcp_options_id):
+ # Return boto3-style details, consistent with the _info module
+
+ if module.check_mode and dhcp_options_id is None:
+ # We can't describe without an option id, we might get here when creating a new option set in check_mode
+ return None
+
+ try:
+ dhcp_option_info = client.describe_dhcp_options(aws_retry=True, DhcpOptionsIds=[dhcp_options_id])
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Unable to describe dhcp options")
+
+ dhcp_options_set = dhcp_option_info['DhcpOptions'][0]
+ dhcp_option_info = {'DhcpOptionsId': dhcp_options_set['DhcpOptionsId'],
+ 'DhcpConfigurations': dhcp_options_set['DhcpConfigurations'],
+ 'Tags': boto3_tag_list_to_ansible_dict(dhcp_options_set.get('Tags', [{'Value': '', 'Key': 'Name'}]))}
+ return camel_dict_to_snake_dict(dhcp_option_info, ignore_list=['Tags'])
+
+
+def associate_options(client, module, vpc_id, dhcp_options_id):
+ try:
+ if not module.check_mode:
+ client.associate_dhcp_options(aws_retry=True, DhcpOptionsId=dhcp_options_id, VpcId=vpc_id)
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Unable to associate dhcp option {0} to VPC {1}".format(dhcp_options_id, vpc_id))
def main():
@@ -289,6 +454,7 @@
delete_old=dict(type='bool', default=True),
inherit_existing=dict(type='bool', default=False),
tags=dict(type='dict', default=None, aliases=['resource_tags']),
+ purge_tags=dict(default=True, type='bool'),
state=dict(type='str', default='present', choices=['present', 'absent'])
)
@@ -298,116 +464,83 @@
supports_check_mode=True
)
- params = module.params
+ vpc_id = module.params['vpc_id']
+ delete_old = module.params['delete_old']
+ inherit_existing = module.params['inherit_existing']
+ tags = module.params['tags']
+ purge_tags = module.params['purge_tags']
+ state = module.params['state']
+ dhcp_options_id = module.params['dhcp_options_id']
+
found = False
changed = False
- new_options = collections.defaultdict(lambda: None)
-
- if not HAS_BOTO:
- module.fail_json(msg='boto is required for this module')
-
- region, ec2_url, boto_params = get_aws_connection_info(module)
- connection = connect_to_aws(boto.vpc, region, **boto_params)
-
- existing_options = None
-
- # First check if we were given a dhcp_options_id
- if not params['dhcp_options_id']:
- # No, so create new_options from the parameters
- if params['dns_servers'] is not None:
- new_options['domain-name-servers'] = params['dns_servers']
- if params['netbios_name_servers'] is not None:
- new_options['netbios-name-servers'] = params['netbios_name_servers']
- if params['ntp_servers'] is not None:
- new_options['ntp-servers'] = params['ntp_servers']
- if params['domain_name'] is not None:
- # needs to be a list for comparison with boto objects later
- new_options['domain-name'] = [params['domain_name']]
- if params['netbios_node_type'] is not None:
- # needs to be a list for comparison with boto objects later
- new_options['netbios-node-type'] = [str(params['netbios_node_type'])]
- # If we were given a vpc_id then we need to look at the options on that
- if params['vpc_id']:
- existing_options = fetch_dhcp_options_for_vpc(connection, params['vpc_id'])
+ new_config = create_dhcp_config(module)
+ existing_config = None
+ existing_id = None
+
+ client = module.client('ec2', retry_decorator=AWSRetry.jittered_backoff())
+
+ module.deprecate("The 'new_config' return key is deprecated and will be replaced by 'dhcp_config'. Both values are returned for now.",
+ date='2022-12-01', collection_name='amazon.aws')
+ if state == 'absent':
+ if not dhcp_options_id:
+ # Look up the option id first by matching the supplied options
+            found, dhcp_options_id = match_dhcp_options(client, module, new_config)
+ changed = remove_dhcp_options_by_id(client, module, dhcp_options_id)
+ module.exit_json(changed=changed, new_options={}, dhcp_options={})
+
+ if not dhcp_options_id:
+ # If we were given a vpc_id then we need to look at the configuration on that
+ if vpc_id:
+ existing_config, existing_id = fetch_dhcp_options_for_vpc(client, module, vpc_id)
# if we've been asked to inherit existing options, do that now
- if params['inherit_existing']:
- if existing_options:
- for option in ['domain-name-servers', 'netbios-name-servers', 'ntp-servers', 'domain-name', 'netbios-node-type']:
- if existing_options.options.get(option) and new_options[option] != [] and (not new_options[option] or [''] == new_options[option]):
- new_options[option] = existing_options.options.get(option)
-
+ if inherit_existing and existing_config:
+ changed, new_config = inherit_dhcp_config(existing_config, new_config)
# Do the vpc's dhcp options already match what we're asked for? if so we are done
- if existing_options and new_options == existing_options.options:
- module.exit_json(changed=changed, new_options=new_options, dhcp_options_id=existing_options.id)
-
+ if existing_config:
+ if new_config == existing_config:
+ dhcp_options_id = existing_id
+ if tags or purge_tags:
+ changed |= ensure_ec2_tags(client, module, dhcp_options_id, resource_type='dhcp-options',
+ tags=tags, purge_tags=purge_tags)
+ return_config = normalize_ec2_vpc_dhcp_config(new_config)
+ results = get_dhcp_options_info(client, module, dhcp_options_id)
+ module.exit_json(changed=changed, new_options=return_config, dhcp_options_id=dhcp_options_id, dhcp_options=results)
# If no vpc_id was given, or the options don't match then look for an existing set using tags
- found, dhcp_option = match_dhcp_options(connection, params['tags'], new_options)
+ found, dhcp_options_id = match_dhcp_options(client, module, new_config)
- # Now let's cover the case where there are existing options that we were told about by id
- # If a dhcp_options_id was supplied we don't look at options inside, just set tags (if given)
else:
- supplied_options = connection.get_all_dhcp_options(filters={'dhcp-options-id': params['dhcp_options_id']})
- if len(supplied_options) != 1:
- if params['state'] != 'absent':
- module.fail_json(msg=" a dhcp_options_id was supplied, but does not exist")
- else:
+ # Now let's cover the case where there are existing options that we were told about by id
+ # If a dhcp_options_id was supplied we don't look at options inside, just set tags (if given)
+ try:
+ # Preserve the boto2 module's behaviour of checking if the option set exists first,
+ # and return the same error message if it does not
+ dhcp_options = client.describe_dhcp_options(aws_retry=True, DhcpOptionsIds=[dhcp_options_id])
+ # If that didn't fail, then we know the option ID exists
found = True
- dhcp_option = supplied_options[0]
- if params['state'] != 'absent' and params['tags']:
- ensure_tags(module, connection, dhcp_option.id, params['tags'], False, module.check_mode)
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="a dhcp_options_id was supplied, but does not exist")
- # Now we have the dhcp options set, let's do the necessary
-
- # if we found options we were asked to remove then try to do so
- if params['state'] == 'absent':
- if not module.check_mode:
- if found:
- changed = remove_dhcp_options_by_id(connection, dhcp_option.id)
- module.exit_json(changed=changed, new_options={})
-
- # otherwise if we haven't found the required options we have something to do
- elif not module.check_mode and not found:
-
- # create some dhcp options if we weren't able to use existing ones
- if not found:
- # Convert netbios-node-type and domain-name back to strings
- if new_options['netbios-node-type']:
- new_options['netbios-node-type'] = new_options['netbios-node-type'][0]
- if new_options['domain-name']:
- new_options['domain-name'] = new_options['domain-name'][0]
-
- # create the new dhcp options set requested
- dhcp_option = connection.create_dhcp_options(
- new_options['domain-name'],
- new_options['domain-name-servers'],
- new_options['ntp-servers'],
- new_options['netbios-name-servers'],
- new_options['netbios-node-type'])
-
- # wait for dhcp option to be accessible
- found_dhcp_opt = False
- start_time = time()
- try:
- found_dhcp_opt = retry_not_found(connection.get_all_dhcp_options, dhcp_options_ids=[dhcp_option.id])
- except EC2ResponseError as e:
- module.fail_json_aws(e, msg="Failed to describe DHCP options")
- if not found_dhcp_opt:
- module.fail_json(msg="Failed to wait for {0} to be available.".format(dhcp_option.id))
-
- changed = True
- if params['tags']:
- ensure_tags(module, connection, dhcp_option.id, params['tags'], False, module.check_mode)
+ if not found:
+ # If we still don't have an options ID, create it
+ changed, dhcp_options_id = create_dhcp_option_set(client, module, new_config)
+ else:
+ if tags or purge_tags:
+ changed |= ensure_ec2_tags(client, module, dhcp_options_id, resource_type='dhcp-options',
+ tags=tags, purge_tags=purge_tags)
# If we were given a vpc_id, then attach the options we now have to that before we finish
- if params['vpc_id'] and not module.check_mode:
- changed = True
- connection.associate_dhcp_options(dhcp_option.id, params['vpc_id'])
- # and remove old ones if that was requested
- if params['delete_old'] and existing_options:
- remove_dhcp_options_by_id(connection, existing_options.id)
+ if vpc_id:
+ associate_options(client, module, vpc_id, dhcp_options_id)
+ changed = (changed or True)
+
+ if delete_old and existing_id:
+ remove_dhcp_options_by_id(client, module, existing_id)
- module.exit_json(changed=changed, new_options=new_options, dhcp_options_id=dhcp_option.id)
+ return_config = normalize_ec2_vpc_dhcp_config(new_config)
+ results = get_dhcp_options_info(client, module, dhcp_options_id)
+ module.exit_json(changed=changed, new_options=return_config, dhcp_options_id=dhcp_options_id, dhcp_options=results, dhcp_config=return_config)
-if __name__ == "__main__":
+if __name__ == '__main__':
main()
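The guarded-delete pattern in `remove_dhcp_options_by_id()` above (refuse to delete while any VPC still references the option set, and skip the API call in check mode) can be sketched against a fake client. `FakeEC2Client` is a hypothetical stand-in for the boto3 EC2 client, for illustration only:

```python
class FakeEC2Client:
    """Hypothetical stand-in for the boto3 EC2 client, for illustration only."""

    def __init__(self, associated_vpc_ids):
        self.associated_vpc_ids = list(associated_vpc_ids)
        self.deleted = []

    def describe_vpcs(self, **kwargs):
        return {'Vpcs': [{'VpcId': v} for v in self.associated_vpc_ids]}

    def delete_dhcp_options(self, DhcpOptionsId=None, **kwargs):
        self.deleted.append(DhcpOptionsId)


def remove_if_unused(client, dhcp_options_id, check_mode=False):
    # Same guard as remove_dhcp_options_by_id(): never delete an option set
    # that a VPC still references, and skip the API call in check mode.
    if client.describe_vpcs()['Vpcs']:
        return False
    if not check_mode:
        client.delete_dhcp_options(DhcpOptionsId=dhcp_options_id)
    return True
```

Check mode still reports `changed=True` for an unused set, because the deletion would have happened on a real run.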
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_endpoint_facts.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_endpoint_facts.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_endpoint_facts.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_endpoint_facts.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,210 @@
+#!/usr/bin/python
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import absolute_import, division, print_function
+__metaclass__ = type
+
+
+DOCUMENTATION = r'''
+module: ec2_vpc_endpoint_info
+short_description: Retrieves AWS VPC endpoints details using AWS methods.
+version_added: 1.0.0
+description:
+ - Gets various details related to AWS VPC endpoints.
+ - This module was called C(ec2_vpc_endpoint_facts) before Ansible 2.9. The usage did not change.
+options:
+ query:
+ description:
+ - Defaults to C(endpoints).
+ - Specifies the query action to take.
+ - I(query=endpoints) returns information about AWS VPC endpoints.
+ - Retrieving information about services using I(query=services) has been
+ deprecated in favour of the M(amazon.aws.ec2_vpc_endpoint_service_info) module.
+ - The I(query) option has been deprecated and will be removed after 2022-12-01.
+ required: False
+ choices:
+ - services
+ - endpoints
+ type: str
+ vpc_endpoint_ids:
+ description:
+ - The IDs of specific endpoints to retrieve the details of.
+ type: list
+ elements: str
+ filters:
+ description:
+ - A dict of filters to apply. Each dict item consists of a filter key and a filter value.
+ See U(https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeVpcEndpoints.html)
+ for possible filters.
+ type: dict
+author: Karen Cheng (@Etherdaemon)
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+
+'''
+
+EXAMPLES = r'''
+# Simple example of listing all supported AWS services for VPC endpoints
+- name: List supported AWS endpoint services
+ amazon.aws.ec2_vpc_endpoint_info:
+ query: services
+ region: ap-southeast-2
+ register: supported_endpoint_services
+
+- name: Get all endpoints in ap-southeast-2 region
+ amazon.aws.ec2_vpc_endpoint_info:
+ query: endpoints
+ region: ap-southeast-2
+ register: existing_endpoints
+
+- name: Get all endpoints with specific filters
+ amazon.aws.ec2_vpc_endpoint_info:
+ query: endpoints
+ region: ap-southeast-2
+ filters:
+ vpc-id:
+ - vpc-12345678
+ - vpc-87654321
+ vpc-endpoint-state:
+ - available
+ - pending
+ register: existing_endpoints
+
+- name: Get details on specific endpoint
+ amazon.aws.ec2_vpc_endpoint_info:
+ query: endpoints
+ region: ap-southeast-2
+ vpc_endpoint_ids:
+ - vpce-12345678
+ register: endpoint_details
+'''
+
+RETURN = r'''
+service_names:
+ description: AWS VPC endpoint service names.
+ returned: I(query) is C(services)
+ type: list
+ sample:
+ service_names:
+ - com.amazonaws.ap-southeast-2.s3
+vpc_endpoints:
+ description:
+ - A list of endpoints that match the query. Each endpoint has the keys creation_timestamp,
+ policy_document, route_table_ids, service_name, state, vpc_endpoint_id, vpc_id.
+ returned: I(query) is C(endpoints)
+ type: list
+ sample:
+ vpc_endpoints:
+ - creation_timestamp: "2017-02-16T11:06:48+00:00"
+ policy_document: >
+ "{\"Version\":\"2012-10-17\",\"Id\":\"Policy1450910922815\",
+ \"Statement\":[{\"Sid\":\"Stmt1450910920641\",\"Effect\":\"Allow\",
+ \"Principal\":\"*\",\"Action\":\"s3:*\",\"Resource\":[\"arn:aws:s3:::*/*\",\"arn:aws:s3:::*\"]}]}"
+ route_table_ids:
+ - rtb-abcd1234
+ service_name: "com.amazonaws.ap-southeast-2.s3"
+ state: "available"
+ vpc_endpoint_id: "vpce-abbad0d0"
+ vpc_id: "vpc-1111ffff"
+'''
+
+try:
+ import botocore
+except ImportError:
+ pass # Handled by AnsibleAWSModule
+
+from ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_code
+from ansible_collections.amazon.aws.plugins.module_utils.core import normalize_boto3_result
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import ansible_dict_to_boto3_filter_list
+
+
+@AWSRetry.jittered_backoff()
+def _describe_endpoints(client, **params):
+ paginator = client.get_paginator('describe_vpc_endpoints')
+ return paginator.paginate(**params).build_full_result()
+
+
+@AWSRetry.jittered_backoff()
+def _describe_endpoint_services(client, **params):
+ paginator = client.get_paginator('describe_vpc_endpoint_services')
+ return paginator.paginate(**params).build_full_result()
+
+
+def get_supported_services(client, module):
+ try:
+ services = _describe_endpoint_services(client)
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Failed to get endpoint services")
+
+ results = list(services['ServiceNames'])
+ return dict(service_names=results)
+
+
+def get_endpoints(client, module):
+ results = list()
+ params = dict()
+ params['Filters'] = ansible_dict_to_boto3_filter_list(module.params.get('filters'))
+ if module.params.get('vpc_endpoint_ids'):
+ params['VpcEndpointIds'] = module.params.get('vpc_endpoint_ids')
+ try:
+ results = _describe_endpoints(client, **params)['VpcEndpoints']
+ results = normalize_boto3_result(results)
+ except is_boto3_error_code('InvalidVpcEndpointId.NotFound'):
+ module.exit_json(msg='VpcEndpoint {0} does not exist'.format(module.params.get('vpc_endpoint_ids')), vpc_endpoints=[])
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e: # pylint: disable=duplicate-except
+ module.fail_json_aws(e, msg="Failed to get endpoints")
+
+ return dict(vpc_endpoints=[camel_dict_to_snake_dict(result) for result in results])
+
+
+def main():
+ argument_spec = dict(
+ query=dict(choices=['services', 'endpoints'], required=False),
+ filters=dict(default={}, type='dict'),
+ vpc_endpoint_ids=dict(type='list', elements='str'),
+ )
+
+ module = AnsibleAWSModule(argument_spec=argument_spec, supports_check_mode=True)
+ if module._name == 'ec2_vpc_endpoint_facts':
+ module.deprecate("The 'ec2_vpc_endpoint_facts' module has been renamed to 'ec2_vpc_endpoint_info'", date='2021-12-01', collection_name='amazon.aws')
+
+ # Validate Requirements
+ try:
+ connection = module.client('ec2')
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg='Failed to connect to AWS')
+
+ query = module.params.get('query')
+ if query == 'endpoints':
+ module.deprecate('The query option has been deprecated and'
+ ' will be removed after 2022-12-01. Searching for'
+ ' `endpoints` is now the default and after'
+ ' 2022-12-01 this module will only support fetching'
+ ' endpoints.',
+ date='2022-12-01', collection_name='amazon.aws')
+ elif query == 'services':
+ module.deprecate('Support for fetching service information with this '
+ 'module has been deprecated and will be removed after'
+ ' 2022-12-01. '
+ 'Please use the ec2_vpc_endpoint_service_info module '
+ 'instead.', date='2022-12-01',
+ collection_name='amazon.aws')
+ else:
+ query = 'endpoints'
+
+ invocations = {
+ 'services': get_supported_services,
+ 'endpoints': get_endpoints,
+ }
+ results = invocations[query](connection, module)
+
+ module.exit_json(**results)
+
+
+if __name__ == '__main__':
+ main()
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_endpoint_info.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_endpoint_info.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_endpoint_info.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_endpoint_info.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,210 @@
+#!/usr/bin/python
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import absolute_import, division, print_function
+__metaclass__ = type
+
+
+DOCUMENTATION = r'''
+module: ec2_vpc_endpoint_info
+short_description: Retrieves AWS VPC endpoint details using AWS methods.
+version_added: 1.0.0
+description:
+ - Gets various details related to AWS VPC endpoints.
+ - This module was called C(ec2_vpc_endpoint_facts) before Ansible 2.9. The usage did not change.
+options:
+ query:
+ description:
+ - Defaults to C(endpoints).
+ - Specifies the query action to take.
+ - I(query=endpoints) returns information about AWS VPC endpoints.
+ - Retrieving information about services using I(query=services) has been
+ deprecated in favour of the M(amazon.aws.ec2_vpc_endpoint_service_info) module.
+ - The I(query) option has been deprecated and will be removed after 2022-12-01.
+ required: False
+ choices:
+ - services
+ - endpoints
+ type: str
+ vpc_endpoint_ids:
+ description:
+ - The IDs of specific endpoints to retrieve the details of.
+ type: list
+ elements: str
+ filters:
+ description:
+ - A dict of filters to apply. Each dict item consists of a filter key and a filter value.
+ See U(https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeVpcEndpoints.html)
+ for possible filters.
+ type: dict
+author: Karen Cheng (@Etherdaemon)
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+
+'''
+
+EXAMPLES = r'''
+# Simple example of listing all supported AWS services for VPC endpoints
+- name: List supported AWS endpoint services
+ amazon.aws.ec2_vpc_endpoint_info:
+ query: services
+ region: ap-southeast-2
+ register: supported_endpoint_services
+
+- name: Get all endpoints in ap-southeast-2 region
+ amazon.aws.ec2_vpc_endpoint_info:
+ query: endpoints
+ region: ap-southeast-2
+ register: existing_endpoints
+
+- name: Get all endpoints with specific filters
+ amazon.aws.ec2_vpc_endpoint_info:
+ query: endpoints
+ region: ap-southeast-2
+ filters:
+ vpc-id:
+ - vpc-12345678
+ - vpc-87654321
+ vpc-endpoint-state:
+ - available
+ - pending
+ register: existing_endpoints
+
+- name: Get details on specific endpoint
+ amazon.aws.ec2_vpc_endpoint_info:
+ query: endpoints
+ region: ap-southeast-2
+ vpc_endpoint_ids:
+ - vpce-12345678
+ register: endpoint_details
+'''
+
+RETURN = r'''
+service_names:
+ description: AWS VPC endpoint service names.
+ returned: I(query) is C(services)
+ type: list
+ sample:
+ service_names:
+ - com.amazonaws.ap-southeast-2.s3
+vpc_endpoints:
+ description:
+ - A list of endpoints that match the query. Each endpoint has the keys creation_timestamp,
+ policy_document, route_table_ids, service_name, state, vpc_endpoint_id, vpc_id.
+ returned: I(query) is C(endpoints)
+ type: list
+ sample:
+ vpc_endpoints:
+ - creation_timestamp: "2017-02-16T11:06:48+00:00"
+ policy_document: >
+ "{\"Version\":\"2012-10-17\",\"Id\":\"Policy1450910922815\",
+ \"Statement\":[{\"Sid\":\"Stmt1450910920641\",\"Effect\":\"Allow\",
+ \"Principal\":\"*\",\"Action\":\"s3:*\",\"Resource\":[\"arn:aws:s3:::*/*\",\"arn:aws:s3:::*\"]}]}"
+ route_table_ids:
+ - rtb-abcd1234
+ service_name: "com.amazonaws.ap-southeast-2.s3"
+ state: "available"
+ vpc_endpoint_id: "vpce-abbad0d0"
+ vpc_id: "vpc-1111ffff"
+'''
+
+try:
+ import botocore
+except ImportError:
+ pass # Handled by AnsibleAWSModule
+
+from ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_code
+from ansible_collections.amazon.aws.plugins.module_utils.core import normalize_boto3_result
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import ansible_dict_to_boto3_filter_list
+
+
+@AWSRetry.jittered_backoff()
+def _describe_endpoints(client, **params):
+ paginator = client.get_paginator('describe_vpc_endpoints')
+ return paginator.paginate(**params).build_full_result()
+
+
+@AWSRetry.jittered_backoff()
+def _describe_endpoint_services(client, **params):
+ paginator = client.get_paginator('describe_vpc_endpoint_services')
+ return paginator.paginate(**params).build_full_result()
+
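The two helpers above wrap boto3 paginators in `AWSRetry.jittered_backoff()`. A minimal stdlib-only sketch of what such a retry decorator does — the retried exception type, retry count, and delays here are illustrative assumptions, not the decorator's real implementation:

```python
# Sketch (assumption) of jittered-backoff retry behaviour: retry a call that
# raises a throttling-style error, sleeping a random, exponentially growing
# amount between attempts ("full jitter").
import functools
import random
import time


def jittered_backoff(retries=4, base_delay=0.01, sleep=time.sleep):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(retries):
                try:
                    return func(*args, **kwargs)
                except RuntimeError:  # stand-in for botocore throttling errors
                    if attempt == retries - 1:
                        raise
                    # full jitter: sleep anywhere between 0 and the capped backoff
                    sleep(random.uniform(0, base_delay * 2 ** attempt))
        return wrapper
    return decorator


calls = {'n': 0}


@jittered_backoff(sleep=lambda s: None)  # no real sleeping in this demo
def flaky():
    calls['n'] += 1
    if calls['n'] < 3:
        raise RuntimeError('Throttling')
    return 'ok'


print(flaky())  # succeeds on the third attempt
```

The jitter spreads retries from many concurrent callers apart in time, which is why AWS recommends it over fixed exponential backoff.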
+
+def get_supported_services(client, module):
+ try:
+ services = _describe_endpoint_services(client)
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Failed to get endpoint services")
+
+ results = list(services['ServiceNames'])
+ return dict(service_names=results)
+
+
+def get_endpoints(client, module):
+ results = list()
+ params = dict()
+ params['Filters'] = ansible_dict_to_boto3_filter_list(module.params.get('filters'))
+ if module.params.get('vpc_endpoint_ids'):
+ params['VpcEndpointIds'] = module.params.get('vpc_endpoint_ids')
+ try:
+ results = _describe_endpoints(client, **params)['VpcEndpoints']
+ results = normalize_boto3_result(results)
+ except is_boto3_error_code('InvalidVpcEndpointId.NotFound'):
+ module.exit_json(msg='VpcEndpoint {0} does not exist'.format(module.params.get('vpc_endpoint_ids')), vpc_endpoints=[])
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e: # pylint: disable=duplicate-except
+ module.fail_json_aws(e, msg="Failed to get endpoints")
+
+ return dict(vpc_endpoints=[camel_dict_to_snake_dict(result) for result in results])
+
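`get_endpoints` above converts the module's `filters` dict into the boto3 `Filters` argument via `ansible_dict_to_boto3_filter_list`. A hedged sketch of that transformation, assuming the helper simply wraps each value into the `{'Name': ..., 'Values': [...]}` shape boto3 expects:

```python
# Sketch (assumption) of the dict -> boto3 filter-list transformation:
# EC2 describe_* calls expect a list of {'Name': ..., 'Values': [...]} dicts,
# with every value expressed as a list of strings.
def dict_to_filter_list(filters_dict):
    filter_list = []
    for name, values in filters_dict.items():
        if not isinstance(values, list):
            values = [values]
        filter_list.append({'Name': name, 'Values': [str(v) for v in values]})
    return filter_list


print(dict_to_filter_list({'vpc-endpoint-state': ['available', 'pending']}))
```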
+
+def main():
+ argument_spec = dict(
+ query=dict(choices=['services', 'endpoints'], required=False),
+ filters=dict(default={}, type='dict'),
+ vpc_endpoint_ids=dict(type='list', elements='str'),
+ )
+
+ module = AnsibleAWSModule(argument_spec=argument_spec, supports_check_mode=True)
+ if module._name == 'ec2_vpc_endpoint_facts':
+ module.deprecate("The 'ec2_vpc_endpoint_facts' module has been renamed to 'ec2_vpc_endpoint_info'", date='2021-12-01', collection_name='amazon.aws')
+
+ # Validate Requirements
+ try:
+ connection = module.client('ec2')
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg='Failed to connect to AWS')
+
+ query = module.params.get('query')
+ if query == 'endpoints':
+ module.deprecate('The query option has been deprecated and'
+ ' will be removed after 2022-12-01. Searching for'
+ ' `endpoints` is now the default and after'
+ ' 2022-12-01 this module will only support fetching'
+ ' endpoints.',
+ date='2022-12-01', collection_name='amazon.aws')
+ elif query == 'services':
+ module.deprecate('Support for fetching service information with this '
+ 'module has been deprecated and will be removed after'
+ ' 2022-12-01. '
+ 'Please use the ec2_vpc_endpoint_service_info module '
+ 'instead.', date='2022-12-01',
+ collection_name='amazon.aws')
+ else:
+ query = 'endpoints'
+
+ invocations = {
+ 'services': get_supported_services,
+ 'endpoints': get_endpoints,
+ }
+ results = invocations[query](connection, module)
+
+ module.exit_json(**results)
+
+
+if __name__ == '__main__':
+ main()
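The results returned by this module are passed through `camel_dict_to_snake_dict` before reaching the playbook. A standalone sketch of that CamelCase-to-snake_case key conversion, using a common two-regex approach (the real helper may differ on edge cases such as acronyms):

```python
# Standalone sketch of CamelCase -> snake_case key conversion, as applied to
# boto3 results like {'VpcEndpointId': ...} before they are returned.
import re


def camel_to_snake(name):
    # insert an underscore before each capitalised word, then lowercase
    step = re.sub(r'(.)([A-Z][a-z]+)', r'\1_\2', name)
    return re.sub(r'([a-z0-9])([A-Z])', r'\1_\2', step).lower()


def camel_dict_to_snake(data):
    return {camel_to_snake(key): value for key, value in data.items()}


print(camel_dict_to_snake({'VpcEndpointId': 'vpce-abbad0d0', 'VpcId': 'vpc-1111ffff'}))
```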
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_endpoint.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_endpoint.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_endpoint.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_endpoint.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,485 @@
+#!/usr/bin/python
+# Copyright: Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import absolute_import, division, print_function
+__metaclass__ = type
+
+
+DOCUMENTATION = r'''
+module: ec2_vpc_endpoint
+short_description: Create and delete AWS VPC Endpoints.
+version_added: 1.0.0
+description:
+ - Creates AWS VPC endpoints.
+ - Deletes AWS VPC endpoints.
+ - This module supports check mode.
+options:
+ vpc_id:
+ description:
+ - Required when creating a VPC endpoint.
+ required: false
+ type: str
+ vpc_endpoint_type:
+ description:
+ - The type of endpoint.
+ required: false
+ default: Gateway
+ choices: [ "Interface", "Gateway", "GatewayLoadBalancer" ]
+ type: str
+ version_added: 1.5.0
+ vpc_endpoint_subnets:
+ description:
+ - The list of subnets to attach to the endpoint.
+ - Requires I(vpc_endpoint_type=GatewayLoadBalancer) or I(vpc_endpoint_type=Interface).
+ required: false
+ type: list
+ elements: str
+ version_added: 2.1.0
+ vpc_endpoint_security_groups:
+ description:
+ - The list of security groups to attach to the endpoint.
+ - Requires I(vpc_endpoint_type=GatewayLoadBalancer) or I(vpc_endpoint_type=Interface).
+ required: false
+ type: list
+ elements: str
+ version_added: 2.1.0
+ service:
+ description:
+ - An AWS supported vpc endpoint service. Use the M(amazon.aws.ec2_vpc_endpoint_info)
+ module to describe the supported endpoint services.
+ - Required when creating an endpoint.
+ required: false
+ type: str
+ policy:
+ description:
+ - A properly formatted json policy as string, see
+ U(https://github.com/ansible/ansible/issues/7005#issuecomment-42894813).
+ Cannot be used with I(policy_file).
+ - Optional when creating an endpoint. If not provided, AWS will
+ utilise a default policy which provides full access to the service.
+ required: false
+ type: json
+ policy_file:
+ description:
+ - The path to the properly json formatted policy file, see
+ U(https://github.com/ansible/ansible/issues/7005#issuecomment-42894813)
+ on how to use it properly. Cannot be used with I(policy).
+ - Optional when creating an endpoint. If not provided, AWS will
+ utilise a default policy which provides full access to the service.
+ - This option has been deprecated and will be removed after 2022-12-01.
+ To maintain the existing functionality, please use the I(policy) option
+ and a file lookup.
+ required: false
+ aliases: [ "policy_path" ]
+ type: path
+ state:
+ description:
+ - C(present) to ensure the resource is created.
+ - C(absent) to remove the resource.
+ required: false
+ default: present
+ choices: [ "present", "absent" ]
+ type: str
+ tags:
+ description:
+ - A dict of tags to apply to the VPC endpoint.
+ - To remove all tags set I(tags={}) and I(purge_tags=true).
+ type: dict
+ version_added: 1.5.0
+ purge_tags:
+ description:
+ - Delete any tags not specified in the task that are on the instance.
+ This means you have to specify all the desired tags on each task affecting an instance.
+ default: false
+ type: bool
+ version_added: 1.5.0
+ wait:
+ description:
+ - When specified, will wait for the C(available) status when I(state=present).
+ Unfortunately this is ignored for delete actions due to a difference in
+ behaviour from AWS.
+ required: false
+ default: no
+ type: bool
+ wait_timeout:
+ description:
+ - Used in conjunction with I(wait). Number of seconds to wait for the status.
+ Unfortunately this is ignored for delete actions due to a difference in
+ behaviour from AWS.
+ required: false
+ default: 320
+ type: int
+ route_table_ids:
+ description:
+ - List of one or more route table ids to attach to the endpoint. A route
+ is added to the route table with the destination of the endpoint if
+ provided.
+ required: false
+ type: list
+ elements: str
+ vpc_endpoint_id:
+ description:
+ - One or more VPC endpoint IDs to remove from the AWS account.
+ required: false
+ type: str
+ client_token:
+ description:
+ - Optional client token to ensure idempotency.
+ required: false
+ type: str
+author: Karen Cheng (@Etherdaemon)
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+
+'''
+
+EXAMPLES = r'''
+# Note: These examples do not set authentication details, see the AWS Guide for details.
+
+- name: Create new vpc endpoint with a json template for policy
+ amazon.aws.ec2_vpc_endpoint:
+ state: present
+ region: ap-southeast-2
+ vpc_id: vpc-12345678
+ service: com.amazonaws.ap-southeast-2.s3
+ policy: " {{ lookup( 'template', 'endpoint_policy.json.j2') }} "
+ route_table_ids:
+ - rtb-12345678
+ - rtb-87654321
+ register: new_vpc_endpoint
+
+- name: Create new vpc endpoint with the default policy
+ amazon.aws.ec2_vpc_endpoint:
+ state: present
+ region: ap-southeast-2
+ vpc_id: vpc-12345678
+ service: com.amazonaws.ap-southeast-2.s3
+ route_table_ids:
+ - rtb-12345678
+ - rtb-87654321
+ register: new_vpc_endpoint
+
+- name: Create new vpc endpoint with json file
+ amazon.aws.ec2_vpc_endpoint:
+ state: present
+ region: ap-southeast-2
+ vpc_id: vpc-12345678
+ service: com.amazonaws.ap-southeast-2.s3
+ policy_file: "{{ role_path }}/files/endpoint_policy.json"
+ route_table_ids:
+ - rtb-12345678
+ - rtb-87654321
+ register: new_vpc_endpoint
+
+- name: Delete newly created vpc endpoint
+ amazon.aws.ec2_vpc_endpoint:
+ state: absent
+ vpc_endpoint_id: "{{ new_vpc_endpoint.result['VpcEndpointId'] }}"
+ region: ap-southeast-2
+'''
+
+RETURN = r'''
+endpoints:
+ description: The resulting endpoints from the module call
+ returned: success
+ type: list
+ sample: [
+ {
+ "creation_timestamp": "2017-02-20T05:04:15+00:00",
+ "policy_document": {
+ "Id": "Policy1450910922815",
+ "Statement": [
+ {
+ "Action": "s3:*",
+ "Effect": "Allow",
+ "Principal": "*",
+ "Resource": [
+ "arn:aws:s3:::*/*",
+ "arn:aws:s3:::*"
+ ],
+ "Sid": "Stmt1450910920641"
+ }
+ ],
+ "Version": "2012-10-17"
+ },
+ "route_table_ids": [
+ "rtb-abcd1234"
+ ],
+ "service_name": "com.amazonaws.ap-southeast-2.s3",
+ "vpc_endpoint_id": "vpce-a1b2c3d4",
+ "vpc_id": "vpc-abbad0d0"
+ }
+ ]
+'''
+
+import datetime
+import json
+import traceback
+
+try:
+ import botocore
+except ImportError:
+ pass # Handled by AnsibleAWSModule
+
+from ansible.module_utils.six import string_types
+from ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.aws.plugins.module_utils.core import normalize_boto3_result
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry
+from ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_code
+from ansible_collections.amazon.aws.plugins.module_utils.waiters import get_waiter
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import ensure_ec2_tags
+from ansible_collections.amazon.aws.plugins.module_utils.tagging import boto3_tag_specifications
+
+
+def get_endpoints(client, module, endpoint_id=None):
+ params = dict()
+ if endpoint_id:
+ params['VpcEndpointIds'] = [endpoint_id]
+ else:
+ filters = list()
+ if module.params.get('service'):
+ filters.append({'Name': 'service-name', 'Values': [module.params.get('service')]})
+ if module.params.get('vpc_id'):
+ filters.append({'Name': 'vpc-id', 'Values': [module.params.get('vpc_id')]})
+ params['Filters'] = filters
+ try:
+ result = client.describe_vpc_endpoints(aws_retry=True, **params)
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Failed to get endpoints")
+
+ # normalize iso datetime fields in result
+ normalized_result = normalize_boto3_result(result)
+ return normalized_result
+
+
+def match_endpoints(route_table_ids, service_name, vpc_id, endpoint):
+ found = False
+ sorted_route_table_ids = []
+
+ if route_table_ids:
+ sorted_route_table_ids = sorted(route_table_ids)
+
+ if endpoint['VpcId'] == vpc_id and endpoint['ServiceName'] == service_name:
+ sorted_endpoint_rt_ids = sorted(endpoint['RouteTableIds'])
+ if sorted_endpoint_rt_ids == sorted_route_table_ids:
+ found = True
+ return found
+
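The matching rule in `match_endpoints` can be restated as a standalone function to make the order-insensitive route-table comparison explicit:

```python
# Standalone sketch of the matching rule above: an existing endpoint counts
# as "the same endpoint" when its VPC, service name, and (order-insensitive)
# route table ids all agree with the module parameters.
def is_matching_endpoint(route_table_ids, service_name, vpc_id, endpoint):
    wanted = sorted(route_table_ids or [])
    return (endpoint['VpcId'] == vpc_id
            and endpoint['ServiceName'] == service_name
            and sorted(endpoint['RouteTableIds']) == wanted)


endpoint = {
    'VpcId': 'vpc-12345678',
    'ServiceName': 'com.amazonaws.ap-southeast-2.s3',
    'RouteTableIds': ['rtb-87654321', 'rtb-12345678'],
}
# route table order does not matter
print(is_matching_endpoint(['rtb-12345678', 'rtb-87654321'],
                           'com.amazonaws.ap-southeast-2.s3',
                           'vpc-12345678', endpoint))
```

Sorting both sides before comparing is what makes a task idempotent when its `route_table_ids` list is written in a different order than AWS reports.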
+
+def setup_creation(client, module):
+ endpoint_id = module.params.get('vpc_endpoint_id')
+ route_table_ids = module.params.get('route_table_ids')
+ service_name = module.params.get('service')
+ vpc_id = module.params.get('vpc_id')
+ changed = False
+
+ if not endpoint_id:
+ # Try to use the module parameters to match any existing endpoints
+ all_endpoints = get_endpoints(client, module, endpoint_id)
+ if len(all_endpoints['VpcEndpoints']) > 0:
+ for endpoint in all_endpoints['VpcEndpoints']:
+ if match_endpoints(route_table_ids, service_name, vpc_id, endpoint):
+ endpoint_id = endpoint['VpcEndpointId']
+ break
+
+ if endpoint_id:
+ # If we have an endpoint now, just ensure tags and exit
+ if module.params.get('tags'):
+ changed |= ensure_ec2_tags(client, module, endpoint_id,
+ resource_type='vpc-endpoint',
+ tags=module.params.get('tags'),
+ purge_tags=module.params.get('purge_tags'))
+ normalized_result = get_endpoints(client, module, endpoint_id=endpoint_id)['VpcEndpoints'][0]
+ return changed, camel_dict_to_snake_dict(normalized_result, ignore_list=['Tags'])
+
+ changed, result = create_vpc_endpoint(client, module)
+
+ return changed, camel_dict_to_snake_dict(result, ignore_list=['Tags'])
+
+
+def create_vpc_endpoint(client, module):
+ params = dict()
+ changed = False
+ token_provided = False
+ params['VpcId'] = module.params.get('vpc_id')
+ params['VpcEndpointType'] = module.params.get('vpc_endpoint_type')
+ params['ServiceName'] = module.params.get('service')
+
+ if module.check_mode:
+ changed = True
+ result = 'Would have created VPC Endpoint if not in check mode'
+ module.exit_json(changed=changed, result=result)
+
+ if module.params.get('route_table_ids'):
+ params['RouteTableIds'] = module.params.get('route_table_ids')
+
+ if module.params.get('vpc_endpoint_subnets'):
+ params['SubnetIds'] = module.params.get('vpc_endpoint_subnets')
+
+ if module.params.get('vpc_endpoint_security_groups'):
+ params['SecurityGroupIds'] = module.params.get('vpc_endpoint_security_groups')
+
+ if module.params.get('client_token'):
+ token_provided = True
+ request_time = datetime.datetime.utcnow()
+ params['ClientToken'] = module.params.get('client_token')
+
+ policy = None
+ if module.params.get('policy'):
+ try:
+ policy = json.loads(module.params.get('policy'))
+ except ValueError as e:
+ module.fail_json(msg=str(e), exception=traceback.format_exc(),
+ **camel_dict_to_snake_dict(e.response))
+
+ elif module.params.get('policy_file'):
+ try:
+ with open(module.params.get('policy_file'), 'r') as json_data:
+ policy = json.load(json_data)
+ except Exception as e:
+ module.fail_json(msg=str(e), exception=traceback.format_exc(),
+ **camel_dict_to_snake_dict(e.response))
+
+ if policy:
+ params['PolicyDocument'] = json.dumps(policy)
+
+ if module.params.get('tags'):
+ params["TagSpecifications"] = boto3_tag_specifications(module.params.get('tags'), ['vpc-endpoint'])
+
+ try:
+ changed = True
+ result = client.create_vpc_endpoint(aws_retry=True, **params)['VpcEndpoint']
+ if token_provided and (request_time > result['creation_timestamp'].replace(tzinfo=None)):
+ changed = False
+ elif module.params.get('wait') and not module.check_mode:
+ try:
+ waiter = get_waiter(client, 'vpc_endpoint_exists')
+ waiter.wait(VpcEndpointIds=[result['VpcEndpointId']], WaiterConfig=dict(Delay=15, MaxAttempts=module.params.get('wait_timeout') // 15))
+ except botocore.exceptions.WaiterError as e:
+ module.fail_json_aws(e, msg='Error waiting for vpc endpoint to become available - please check the AWS console')
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e: # pylint: disable=duplicate-except
+ module.fail_json_aws(e, msg='Failure while waiting for status')
+
+ except is_boto3_error_code('IdempotentParameterMismatch'): # pylint: disable=duplicate-except
+ module.fail_json(msg="IdempotentParameterMismatch - updates of endpoints are not allowed by the API")
+ except is_boto3_error_code('RouteAlreadyExists'): # pylint: disable=duplicate-except
+ module.fail_json(msg="RouteAlreadyExists for one of the route tables - update is not allowed by the API")
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e: # pylint: disable=duplicate-except
+ module.fail_json_aws(e, msg="Failed to create VPC endpoint.")
+
+ # describe and normalize iso datetime fields in result after adding tags
+ normalized_result = get_endpoints(client, module, endpoint_id=result['VpcEndpointId'])['VpcEndpoints'][0]
+ return changed, normalized_result
+
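The client-token bookkeeping in `create_vpc_endpoint` boils down to a timestamp comparison, sketched here in isolation: if a `ClientToken` was supplied and the returned endpoint's creation timestamp predates this request, AWS replayed an earlier call for the same token and nothing new was created.

```python
# Standalone sketch of the client-token idempotency check above.
import datetime


def replayed_request(request_time, creation_timestamp):
    # True when the endpoint already existed before this request was made,
    # i.e. the create call was an idempotent replay (changed=False).
    return request_time > creation_timestamp


request_time = datetime.datetime(2021, 11, 12, 18, 0, 0)
older_endpoint = datetime.datetime(2021, 11, 12, 17, 0, 0)
print(replayed_request(request_time, older_endpoint))  # endpoint pre-existed
```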
+
+def setup_removal(client, module):
+ params = dict()
+ changed = False
+
+ if module.check_mode:
+ try:
+ exists = client.describe_vpc_endpoints(aws_retry=True, VpcEndpointIds=[module.params.get('vpc_endpoint_id')])
+ if exists:
+ result = {'msg': 'Would have deleted VPC Endpoint if not in check mode'}
+ changed = True
+ except is_boto3_error_code('InvalidVpcEndpointId.NotFound'):
+ result = {'msg': 'Endpoint does not exist, nothing to delete.'}
+ changed = False
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e: # pylint: disable=duplicate-except
+ module.fail_json_aws(e, msg="Failed to get endpoints")
+
+ return changed, result
+
+ if isinstance(module.params.get('vpc_endpoint_id'), string_types):
+ params['VpcEndpointIds'] = [module.params.get('vpc_endpoint_id')]
+ else:
+ params['VpcEndpointIds'] = module.params.get('vpc_endpoint_id')
+ try:
+ result = client.delete_vpc_endpoints(aws_retry=True, **params)['Unsuccessful']
+ if len(result) < len(params['VpcEndpointIds']):
+ changed = True
+ # For some reason delete_vpc_endpoints doesn't throw exceptions; it
+ # returns a list of failed 'results' instead. Throw these so we can
+ # catch them the way we expect
+ for r in result:
+ try:
+ raise botocore.exceptions.ClientError(r, 'delete_vpc_endpoints')
+ except is_boto3_error_code('InvalidVpcEndpoint.NotFound'):
+ continue
+
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e: # pylint: disable=duplicate-except
+ module.fail_json_aws(e, msg="Failed to delete VPC endpoint")
+ return changed, result
+
+
+def main():
+ argument_spec = dict(
+ vpc_id=dict(),
+ vpc_endpoint_type=dict(default='Gateway', choices=['Interface', 'Gateway', 'GatewayLoadBalancer']),
+ vpc_endpoint_security_groups=dict(type='list', elements='str'),
+ vpc_endpoint_subnets=dict(type='list', elements='str'),
+ service=dict(),
+ policy=dict(type='json'),
+ policy_file=dict(type='path', aliases=['policy_path']),
+ state=dict(default='present', choices=['present', 'absent']),
+ wait=dict(type='bool', default=False),
+ wait_timeout=dict(type='int', default=320, required=False),
+ route_table_ids=dict(type='list', elements='str'),
+ vpc_endpoint_id=dict(),
+ client_token=dict(no_log=False),
+ tags=dict(type='dict'),
+ purge_tags=dict(type='bool', default=False),
+ )
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ supports_check_mode=True,
+ mutually_exclusive=[['policy', 'policy_file']],
+ required_if=[
+ ['state', 'present', ['vpc_id', 'service']],
+ ['state', 'absent', ['vpc_endpoint_id']],
+ ],
+ )
+
+ # Validate Requirements
+ state = module.params.get('state')
+
+ if module.params.get('policy_file'):
+ module.deprecate('The policy_file option has been deprecated and'
+ ' will be removed after 2022-12-01',
+ date='2022-12-01', collection_name='amazon.aws')
+
+ if module.params.get('vpc_endpoint_type'):
+ if module.params.get('vpc_endpoint_type') == 'Gateway':
+ if module.params.get('vpc_endpoint_subnets') or module.params.get('vpc_endpoint_security_groups'):
+ module.fail_json(msg="Parameter vpc_endpoint_subnets and/or vpc_endpoint_security_groups can't be used with Gateway endpoint type")
+
+ if module.params.get('vpc_endpoint_type') == 'GatewayLoadBalancer':
+ if module.params.get('vpc_endpoint_security_groups'):
+ module.fail_json(msg="Parameter vpc_endpoint_security_groups can't be used with GatewayLoadBalancer endpoint type")
+
+ if module.params.get('vpc_endpoint_type') == 'Interface':
+ if module.params.get('vpc_endpoint_subnets') and not module.params.get('vpc_endpoint_security_groups'):
+ module.fail_json(msg="Parameter vpc_endpoint_security_groups must be set when endpoint type is Interface and vpc_endpoint_subnets is defined")
+ if not module.params.get('vpc_endpoint_subnets') and module.params.get('vpc_endpoint_security_groups'):
+ module.fail_json(msg="Parameter vpc_endpoint_subnets must be set when endpoint type is Interface and vpc_endpoint_security_groups is defined")
+
+ try:
+ ec2 = module.client('ec2', retry_decorator=AWSRetry.jittered_backoff())
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg='Failed to connect to AWS')
+
+ # Ensure resource is present
+ if state == 'present':
+ (changed, results) = setup_creation(ec2, module)
+ else:
+ (changed, results) = setup_removal(ec2, module)
+
+ module.exit_json(changed=changed, result=results)
+
+
+if __name__ == '__main__':
+ main()
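For reference, the `TagSpecifications` payload that `boto3_tag_specifications` builds for `create_vpc_endpoint` can be sketched as follows; the helper's exact behaviour is an assumption here, but EC2 create calls do accept tags at creation time as a list of per-resource-type specifications:

```python
# Sketch (assumption) of building an EC2 TagSpecifications payload:
# a list with one entry per resource type, each carrying Key/Value tag dicts.
def tag_specifications(tags, resource_types):
    tag_list = [{'Key': k, 'Value': v} for k, v in sorted(tags.items())]
    return [{'ResourceType': rt, 'Tags': tag_list} for rt in resource_types]


print(tag_specifications({'Name': 'db-endpoint'}, ['vpc-endpoint']))
```

Tagging at creation time avoids the window where a resource exists untagged, which is why the module passes tags to `create_vpc_endpoint` instead of tagging afterwards.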
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_endpoint_service_info.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_endpoint_service_info.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_endpoint_service_info.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_endpoint_service_info.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,179 @@
+#!/usr/bin/python
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import absolute_import, division, print_function
+__metaclass__ = type
+
+
+DOCUMENTATION = r'''
+module: ec2_vpc_endpoint_service_info
+short_description: Retrieves AWS VPC endpoint service details
+version_added: 1.5.0
+description:
+ - Gets details related to AWS VPC Endpoint Services.
+options:
+ filters:
+ description:
+ - A dict of filters to apply.
+ - Each dict item consists of a filter key and a filter value.
+ See U(https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeVpcEndpointServices.html)
+ for possible filters.
+ type: dict
+ service_names:
+ description:
+ - A list of service names which can be used to narrow the search results.
+ type: list
+ elements: str
+author:
+ - Mark Chappell (@tremble)
+extends_documentation_fragment:
+ - amazon.aws.aws
+ - amazon.aws.ec2
+
+'''
+
+EXAMPLES = r'''
+# Simple example of listing all supported AWS services for VPC endpoints
+- name: List supported AWS endpoint services
+ amazon.aws.ec2_vpc_endpoint_service_info:
+ region: ap-southeast-2
+ register: supported_endpoint_services
+'''
+
+RETURN = r'''
+service_names:
+ description: List of supported AWS VPC endpoint service names.
+ returned: success
+ type: list
+ sample:
+ service_names:
+ - com.amazonaws.ap-southeast-2.s3
+service_details:
+ description: Detailed information about the AWS VPC endpoint services.
+ returned: success
+ type: complex
+ contains:
+ service_name:
+ returned: success
+ description: The ARN of the endpoint service.
+ type: str
+ service_id:
+ returned: success
+ description: The ID of the endpoint service.
+ type: str
+ service_type:
+ returned: success
+      description: The type of the service.
+ type: list
+ availability_zones:
+ returned: success
+ description: The Availability Zones in which the service is available.
+ type: list
+ owner:
+ returned: success
+ description: The AWS account ID of the service owner.
+ type: str
+ base_endpoint_dns_names:
+ returned: success
+ description: The DNS names for the service.
+ type: list
+ private_dns_name:
+ returned: success
+ description: The private DNS name for the service.
+ type: str
+ private_dns_names:
+ returned: success
+ description: The private DNS names assigned to the VPC endpoint service.
+ type: list
+ vpc_endpoint_policy_supported:
+ returned: success
+ description: Whether the service supports endpoint policies.
+ type: bool
+ acceptance_required:
+ returned: success
+ description:
+ Whether VPC endpoint connection requests to the service must be
+ accepted by the service owner.
+ type: bool
+ manages_vpc_endpoints:
+ returned: success
+ description: Whether the service manages its VPC endpoints.
+ type: bool
+ tags:
+ returned: success
+ description: A dict of tags associated with the service
+ type: dict
+ private_dns_name_verification_state:
+ returned: success
+ description:
+ - The verification state of the VPC endpoint service.
+ - Consumers of an endpoint service cannot use the private name when the state is not C(verified).
+ type: str
+'''
+
+try:
+ import botocore
+except ImportError:
+ pass # Handled by AnsibleAWSModule
+
+from ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import ansible_dict_to_boto3_filter_list
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import boto3_tag_list_to_ansible_dict
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry
+
+
+# We're using a paginator so we can't use the client decorators
+@AWSRetry.jittered_backoff()
+def get_services(client, module):
+ paginator = client.get_paginator('describe_vpc_endpoint_services')
+ params = {}
+ if module.params.get("filters"):
+ params['Filters'] = ansible_dict_to_boto3_filter_list(module.params.get("filters"))
+
+ if module.params.get("service_names"):
+ params['ServiceNames'] = module.params.get("service_names")
+
+ results = paginator.paginate(**params).build_full_result()
+ return results
+
+
+def normalize_service(service):
+ normalized = camel_dict_to_snake_dict(service, ignore_list=['Tags'])
+ normalized["tags"] = boto3_tag_list_to_ansible_dict(service.get('Tags'))
+ return normalized
+
+
+def normalize_result(result):
+ normalized = {}
+ normalized['service_details'] = [normalize_service(service) for service in result.get('ServiceDetails')]
+ normalized['service_names'] = result.get('ServiceNames', [])
+ return normalized
+
+
+def main():
+ argument_spec = dict(
+ filters=dict(default={}, type='dict'),
+ service_names=dict(type='list', elements='str'),
+ )
+
+ module = AnsibleAWSModule(argument_spec=argument_spec, supports_check_mode=True)
+
+ # Validate Requirements
+ try:
+ client = module.client('ec2')
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg='Failed to connect to AWS')
+
+ try:
+ results = get_services(client, module)
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+        module.fail_json_aws(e, msg='Failed to retrieve service details')
+ normalized_result = normalize_result(results)
+
+ module.exit_json(changed=False, **normalized_result)
+
+
+if __name__ == '__main__':
+ main()
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_igw_facts.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_igw_facts.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_igw_facts.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_igw_facts.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,184 @@
+#!/usr/bin/python
+# Copyright: Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import absolute_import, division, print_function
+__metaclass__ = type
+
+
+DOCUMENTATION = r'''
+---
+module: ec2_vpc_igw_info
+version_added: 1.0.0
+short_description: Gather information about internet gateways in AWS
+description:
+ - Gather information about internet gateways in AWS.
+ - This module was called C(ec2_vpc_igw_facts) before Ansible 2.9. The usage did not change.
+author: "Nick Aslanidis (@naslanidis)"
+options:
+ filters:
+ description:
+ - A dict of filters to apply. Each dict item consists of a filter key and a filter value.
+ See U(https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInternetGateways.html) for possible filters.
+ type: dict
+ internet_gateway_ids:
+ description:
+ - Get details of specific Internet Gateway ID. Provide this value as a list.
+ type: list
+ elements: str
+ convert_tags:
+ description:
+ - Convert tags from boto3 format (list of dictionaries) to the standard dictionary format.
+ - This currently defaults to C(False). The default will be changed to C(True) after 2022-06-22.
+ type: bool
+ version_added: 1.3.0
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+
+'''
+
+EXAMPLES = r'''
+# Note: These examples do not set authentication details, see the AWS Guide for details.
+
+- name: Gather information about all Internet Gateways for an account or profile
+ amazon.aws.ec2_vpc_igw_info:
+ region: ap-southeast-2
+ profile: production
+ register: igw_info
+
+- name: Gather information about a filtered list of Internet Gateways
+ amazon.aws.ec2_vpc_igw_info:
+ region: ap-southeast-2
+ profile: production
+ filters:
+ "tag:Name": "igw-123"
+ register: igw_info
+
+- name: Gather information about a specific internet gateway by InternetGatewayId
+ amazon.aws.ec2_vpc_igw_info:
+ region: ap-southeast-2
+ profile: production
+    internet_gateway_ids: ["igw-c1231234"]
+ register: igw_info
+'''
+
+RETURN = r'''
+changed:
+ description: True if listing the internet gateways succeeds.
+ type: bool
+ returned: always
+  sample: false
+internet_gateways:
+ description: The internet gateways for the account.
+ returned: always
+ type: complex
+ contains:
+ attachments:
+ description: Any VPCs attached to the internet gateway
+ returned: I(state=present)
+ type: complex
+ contains:
+ state:
+ description: The current state of the attachment
+ returned: I(state=present)
+ type: str
+ sample: available
+ vpc_id:
+ description: The ID of the VPC.
+ returned: I(state=present)
+ type: str
+ sample: vpc-02123b67
+ internet_gateway_id:
+ description: The ID of the internet gateway
+ returned: I(state=present)
+ type: str
+ sample: igw-2123634d
+ tags:
+ description: Any tags assigned to the internet gateway
+ returned: I(state=present)
+ type: dict
+ sample:
+ tags:
+ "Ansible": "Test"
+'''
+
+try:
+ import botocore
+except ImportError:
+ pass # Handled by AnsibleAWSModule
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_code
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import boto3_tag_list_to_ansible_dict
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import ansible_dict_to_boto3_filter_list
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import camel_dict_to_snake_dict
+
+
+def get_internet_gateway_info(internet_gateway, convert_tags):
+ if convert_tags:
+ tags = boto3_tag_list_to_ansible_dict(internet_gateway['Tags'])
+ ignore_list = ["Tags"]
+ else:
+ tags = internet_gateway['Tags']
+ ignore_list = []
+ internet_gateway_info = {'InternetGatewayId': internet_gateway['InternetGatewayId'],
+ 'Attachments': internet_gateway['Attachments'],
+ 'Tags': tags}
+
+ internet_gateway_info = camel_dict_to_snake_dict(internet_gateway_info, ignore_list=ignore_list)
+ return internet_gateway_info
+
+
+def list_internet_gateways(connection, module):
+ params = dict()
+
+ params['Filters'] = ansible_dict_to_boto3_filter_list(module.params.get('filters'))
+ convert_tags = module.params.get('convert_tags')
+
+ if module.params.get("internet_gateway_ids"):
+ params['InternetGatewayIds'] = module.params.get("internet_gateway_ids")
+
+ try:
+ all_internet_gateways = connection.describe_internet_gateways(aws_retry=True, **params)
+ except is_boto3_error_code('InvalidInternetGatewayID.NotFound'):
+        module.fail_json(msg='InternetGateway not found')
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e: # pylint: disable=duplicate-except
+ module.fail_json_aws(e, 'Unable to describe internet gateways')
+
+ return [get_internet_gateway_info(igw, convert_tags)
+ for igw in all_internet_gateways['InternetGateways']]
+
+
+def main():
+ argument_spec = dict(
+ filters=dict(type='dict', default=dict()),
+ internet_gateway_ids=dict(type='list', default=None, elements='str'),
+ convert_tags=dict(type='bool'),
+ )
+
+ module = AnsibleAWSModule(argument_spec=argument_spec, supports_check_mode=True)
+ if module._name == 'ec2_vpc_igw_facts':
+ module.deprecate("The 'ec2_vpc_igw_facts' module has been renamed to 'ec2_vpc_igw_info'", date='2021-12-01', collection_name='amazon.aws')
+
+ if module.params.get('convert_tags') is None:
+ module.deprecate('This module currently returns boto3 style tags by default. '
+ 'This default has been deprecated and the module will return a simple dictionary in future. '
+ 'This behaviour can be controlled through the convert_tags parameter.',
+ date='2021-12-01', collection_name='amazon.aws')
+
+ # Validate Requirements
+ try:
+ connection = module.client('ec2', retry_decorator=AWSRetry.jittered_backoff())
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg='Failed to connect to AWS')
+
+    # Gather and return details of the matching internet gateways
+ results = list_internet_gateways(connection, module)
+
+ module.exit_json(internet_gateways=results)
+
+
+if __name__ == '__main__':
+ main()
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_igw_info.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_igw_info.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_igw_info.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_igw_info.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,184 @@
+#!/usr/bin/python
+# Copyright: Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import absolute_import, division, print_function
+__metaclass__ = type
+
+
+DOCUMENTATION = r'''
+---
+module: ec2_vpc_igw_info
+version_added: 1.0.0
+short_description: Gather information about internet gateways in AWS
+description:
+ - Gather information about internet gateways in AWS.
+ - This module was called C(ec2_vpc_igw_facts) before Ansible 2.9. The usage did not change.
+author: "Nick Aslanidis (@naslanidis)"
+options:
+ filters:
+ description:
+ - A dict of filters to apply. Each dict item consists of a filter key and a filter value.
+ See U(https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInternetGateways.html) for possible filters.
+ type: dict
+ internet_gateway_ids:
+ description:
+ - Get details of specific Internet Gateway ID. Provide this value as a list.
+ type: list
+ elements: str
+ convert_tags:
+ description:
+ - Convert tags from boto3 format (list of dictionaries) to the standard dictionary format.
+ - This currently defaults to C(False). The default will be changed to C(True) after 2022-06-22.
+ type: bool
+ version_added: 1.3.0
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+
+'''
+
+EXAMPLES = r'''
+# Note: These examples do not set authentication details, see the AWS Guide for details.
+
+- name: Gather information about all Internet Gateways for an account or profile
+ amazon.aws.ec2_vpc_igw_info:
+ region: ap-southeast-2
+ profile: production
+ register: igw_info
+
+- name: Gather information about a filtered list of Internet Gateways
+ amazon.aws.ec2_vpc_igw_info:
+ region: ap-southeast-2
+ profile: production
+ filters:
+ "tag:Name": "igw-123"
+ register: igw_info
+
+- name: Gather information about a specific internet gateway by InternetGatewayId
+ amazon.aws.ec2_vpc_igw_info:
+ region: ap-southeast-2
+ profile: production
+    internet_gateway_ids: ["igw-c1231234"]
+ register: igw_info
+'''
+
+RETURN = r'''
+changed:
+ description: True if listing the internet gateways succeeds.
+ type: bool
+ returned: always
+  sample: false
+internet_gateways:
+ description: The internet gateways for the account.
+ returned: always
+ type: complex
+ contains:
+ attachments:
+ description: Any VPCs attached to the internet gateway
+ returned: I(state=present)
+ type: complex
+ contains:
+ state:
+ description: The current state of the attachment
+ returned: I(state=present)
+ type: str
+ sample: available
+ vpc_id:
+ description: The ID of the VPC.
+ returned: I(state=present)
+ type: str
+ sample: vpc-02123b67
+ internet_gateway_id:
+ description: The ID of the internet gateway
+ returned: I(state=present)
+ type: str
+ sample: igw-2123634d
+ tags:
+ description: Any tags assigned to the internet gateway
+ returned: I(state=present)
+ type: dict
+ sample:
+ tags:
+ "Ansible": "Test"
+'''
+
+try:
+ import botocore
+except ImportError:
+ pass # Handled by AnsibleAWSModule
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_code
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import boto3_tag_list_to_ansible_dict
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import ansible_dict_to_boto3_filter_list
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import camel_dict_to_snake_dict
+
+
+def get_internet_gateway_info(internet_gateway, convert_tags):
+ if convert_tags:
+ tags = boto3_tag_list_to_ansible_dict(internet_gateway['Tags'])
+ ignore_list = ["Tags"]
+ else:
+ tags = internet_gateway['Tags']
+ ignore_list = []
+ internet_gateway_info = {'InternetGatewayId': internet_gateway['InternetGatewayId'],
+ 'Attachments': internet_gateway['Attachments'],
+ 'Tags': tags}
+
+ internet_gateway_info = camel_dict_to_snake_dict(internet_gateway_info, ignore_list=ignore_list)
+ return internet_gateway_info
+
+
+def list_internet_gateways(connection, module):
+ params = dict()
+
+ params['Filters'] = ansible_dict_to_boto3_filter_list(module.params.get('filters'))
+ convert_tags = module.params.get('convert_tags')
+
+ if module.params.get("internet_gateway_ids"):
+ params['InternetGatewayIds'] = module.params.get("internet_gateway_ids")
+
+ try:
+ all_internet_gateways = connection.describe_internet_gateways(aws_retry=True, **params)
+ except is_boto3_error_code('InvalidInternetGatewayID.NotFound'):
+        module.fail_json(msg='InternetGateway not found')
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e: # pylint: disable=duplicate-except
+ module.fail_json_aws(e, 'Unable to describe internet gateways')
+
+ return [get_internet_gateway_info(igw, convert_tags)
+ for igw in all_internet_gateways['InternetGateways']]
+
+
+def main():
+ argument_spec = dict(
+ filters=dict(type='dict', default=dict()),
+ internet_gateway_ids=dict(type='list', default=None, elements='str'),
+ convert_tags=dict(type='bool'),
+ )
+
+ module = AnsibleAWSModule(argument_spec=argument_spec, supports_check_mode=True)
+ if module._name == 'ec2_vpc_igw_facts':
+ module.deprecate("The 'ec2_vpc_igw_facts' module has been renamed to 'ec2_vpc_igw_info'", date='2021-12-01', collection_name='amazon.aws')
+
+ if module.params.get('convert_tags') is None:
+ module.deprecate('This module currently returns boto3 style tags by default. '
+ 'This default has been deprecated and the module will return a simple dictionary in future. '
+ 'This behaviour can be controlled through the convert_tags parameter.',
+ date='2021-12-01', collection_name='amazon.aws')
+
+ # Validate Requirements
+ try:
+ connection = module.client('ec2', retry_decorator=AWSRetry.jittered_backoff())
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg='Failed to connect to AWS')
+
+    # Gather and return details of the matching internet gateways
+ results = list_internet_gateways(connection, module)
+
+ module.exit_json(internet_gateways=results)
+
+
+if __name__ == '__main__':
+ main()
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_igw.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_igw.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_igw.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_igw.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,248 @@
+#!/usr/bin/python
+# Copyright: Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import absolute_import, division, print_function
+__metaclass__ = type
+
+
+DOCUMENTATION = '''
+---
+module: ec2_vpc_igw
+version_added: 1.0.0
+short_description: Manage an AWS VPC Internet gateway
+description:
+ - Manage an AWS VPC Internet gateway
+author: Robert Estelle (@erydo)
+options:
+ vpc_id:
+ description:
+ - The VPC ID for the VPC in which to manage the Internet Gateway.
+ required: true
+ type: str
+ tags:
+ description:
+ - A dict of tags to apply to the internet gateway.
+ - To remove all tags set I(tags={}) and I(purge_tags=true).
+ aliases: [ 'resource_tags' ]
+ type: dict
+ purge_tags:
+ description:
+ - Remove tags not listed in I(tags).
+ type: bool
+ default: true
+ version_added: 1.3.0
+ state:
+ description:
+ - Create or terminate the IGW
+ default: present
+ choices: [ 'present', 'absent' ]
+ type: str
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+'''
+
+EXAMPLES = '''
+# Note: These examples do not set authentication details, see the AWS Guide for details.
+
+# Ensure that the VPC has an Internet Gateway.
+# The Internet Gateway ID can be accessed via {{ igw.gateway_id }} for use in setting up NAT gateways etc.
+- name: Create Internet gateway
+ amazon.aws.ec2_vpc_igw:
+ vpc_id: vpc-abcdefgh
+ state: present
+ register: igw
+
+- name: Create Internet gateway with tags
+ amazon.aws.ec2_vpc_igw:
+ vpc_id: vpc-abcdefgh
+ state: present
+ tags:
+ Tag1: tag1
+ Tag2: tag2
+ register: igw
+
+- name: Delete Internet gateway
+ amazon.aws.ec2_vpc_igw:
+ state: absent
+ vpc_id: vpc-abcdefgh
+ register: vpc_igw_delete
+'''
+
+RETURN = '''
+changed:
+ description: If any changes have been made to the Internet Gateway.
+ type: bool
+ returned: always
+ sample:
+ changed: false
+gateway_id:
+ description: The unique identifier for the Internet Gateway.
+ type: str
+ returned: I(state=present)
+ sample:
+ gateway_id: "igw-XXXXXXXX"
+tags:
+  description: The tags associated with the Internet Gateway.
+ type: dict
+ returned: I(state=present)
+ sample:
+ tags:
+ "Ansible": "Test"
+vpc_id:
+ description: The VPC ID associated with the Internet Gateway.
+ type: str
+ returned: I(state=present)
+ sample:
+ vpc_id: "vpc-XXXXXXXX"
+'''
+
+try:
+ import botocore
+except ImportError:
+ pass # caught by AnsibleAWSModule
+
+from ..module_utils.core import AnsibleAWSModule
+from ..module_utils.waiters import get_waiter
+from ..module_utils.ec2 import AWSRetry
+from ..module_utils.ec2 import camel_dict_to_snake_dict
+from ..module_utils.ec2 import ensure_ec2_tags
+from ..module_utils.ec2 import ansible_dict_to_boto3_filter_list
+from ..module_utils.tagging import boto3_tag_list_to_ansible_dict
+
+
+class AnsibleEc2Igw():
+
+ def __init__(self, module, results):
+ self._module = module
+ self._results = results
+ self._connection = self._module.client(
+ 'ec2', retry_decorator=AWSRetry.jittered_backoff()
+ )
+ self._check_mode = self._module.check_mode
+
+ def process(self):
+ vpc_id = self._module.params.get('vpc_id')
+ state = self._module.params.get('state', 'present')
+ tags = self._module.params.get('tags')
+ purge_tags = self._module.params.get('purge_tags')
+
+ if state == 'present':
+ self.ensure_igw_present(vpc_id, tags, purge_tags)
+ elif state == 'absent':
+ self.ensure_igw_absent(vpc_id)
+
+ def get_matching_igw(self, vpc_id):
+ filters = ansible_dict_to_boto3_filter_list({'attachment.vpc-id': vpc_id})
+ igws = []
+ try:
+ response = self._connection.describe_internet_gateways(aws_retry=True, Filters=filters)
+ igws = response.get('InternetGateways', [])
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ self._module.fail_json_aws(e)
+
+ igw = None
+ if len(igws) > 1:
+ self._module.fail_json(
+ msg='EC2 returned more than one Internet Gateway for VPC {0}, aborting'
+ .format(vpc_id))
+ elif igws:
+ igw = camel_dict_to_snake_dict(igws[0])
+
+ return igw
+
+ @staticmethod
+ def get_igw_info(igw, vpc_id):
+ return {
+ 'gateway_id': igw['internet_gateway_id'],
+ 'tags': boto3_tag_list_to_ansible_dict(igw['tags']),
+ 'vpc_id': vpc_id
+ }
+
+ def ensure_igw_absent(self, vpc_id):
+ igw = self.get_matching_igw(vpc_id)
+ if igw is None:
+ return self._results
+
+ if self._check_mode:
+ self._results['changed'] = True
+ return self._results
+
+ try:
+ self._results['changed'] = True
+ self._connection.detach_internet_gateway(
+ aws_retry=True,
+ InternetGatewayId=igw['internet_gateway_id'],
+ VpcId=vpc_id
+ )
+ self._connection.delete_internet_gateway(
+ aws_retry=True,
+ InternetGatewayId=igw['internet_gateway_id']
+ )
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ self._module.fail_json_aws(e, msg="Unable to delete Internet Gateway")
+
+ return self._results
+
+ def ensure_igw_present(self, vpc_id, tags, purge_tags):
+ igw = self.get_matching_igw(vpc_id)
+
+ if igw is None:
+ if self._check_mode:
+ self._results['changed'] = True
+ self._results['gateway_id'] = None
+ return self._results
+
+ try:
+ response = self._connection.create_internet_gateway(aws_retry=True)
+
+ # Ensure the gateway exists before trying to attach it or add tags
+ waiter = get_waiter(self._connection, 'internet_gateway_exists')
+ waiter.wait(InternetGatewayIds=[response['InternetGateway']['InternetGatewayId']])
+
+ igw = camel_dict_to_snake_dict(response['InternetGateway'])
+ self._connection.attach_internet_gateway(
+ aws_retry=True,
+ InternetGatewayId=igw['internet_gateway_id'],
+ VpcId=vpc_id
+ )
+ self._results['changed'] = True
+ except botocore.exceptions.WaiterError as e:
+            self._module.fail_json_aws(e, msg="Timed out waiting for the Internet Gateway to be created.")
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ self._module.fail_json_aws(e, msg='Unable to create Internet Gateway')
+
+ self._results['changed'] |= ensure_ec2_tags(
+ self._connection, self._module, igw['internet_gateway_id'],
+ resource_type='internet-gateway', tags=tags, purge_tags=purge_tags
+ )
+ igw_info = self.get_igw_info(self.get_matching_igw(vpc_id), vpc_id)
+ self._results.update(igw_info)
+
+ return self._results
+
+
+def main():
+ argument_spec = dict(
+ vpc_id=dict(required=True),
+ state=dict(default='present', choices=['present', 'absent']),
+ tags=dict(required=False, type='dict', aliases=['resource_tags']),
+ purge_tags=dict(default=True, type='bool'),
+ )
+
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ supports_check_mode=True,
+ )
+ results = dict(
+ changed=False
+ )
+ igw_manager = AnsibleEc2Igw(module=module, results=results)
+ igw_manager.process()
+
+ module.exit_json(**results)
+
+
+if __name__ == '__main__':
+ main()
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_nat_gateway_facts.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_nat_gateway_facts.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_nat_gateway_facts.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_nat_gateway_facts.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,218 @@
+#!/usr/bin/python
+# Copyright: Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import absolute_import, division, print_function
+__metaclass__ = type
+
+
+DOCUMENTATION = r'''
+module: ec2_vpc_nat_gateway_info
+short_description: Retrieves AWS VPC Managed NAT Gateway details using AWS methods
+version_added: 1.0.0
+description:
+  - Gets various details related to AWS VPC Managed NAT Gateways.
+ - This module was called C(ec2_vpc_nat_gateway_facts) before Ansible 2.9. The usage did not change.
+options:
+ nat_gateway_ids:
+ description:
+ - List of specific nat gateway IDs to fetch details for.
+ type: list
+ elements: str
+ filters:
+ description:
+ - A dict of filters to apply. Each dict item consists of a filter key and a filter value.
+ See U(https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeNatGateways.html)
+ for possible filters.
+ type: dict
+author: Karen Cheng (@Etherdaemon)
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+'''
+
+EXAMPLES = r'''
+# Simple example of listing all nat gateways
+- name: List all managed nat gateways in ap-southeast-2
+ amazon.aws.ec2_vpc_nat_gateway_info:
+ region: ap-southeast-2
+ register: all_ngws
+
+- name: Debugging the result
+ ansible.builtin.debug:
+ msg: "{{ all_ngws.result }}"
+
+- name: Get details on specific nat gateways
+ amazon.aws.ec2_vpc_nat_gateway_info:
+ nat_gateway_ids:
+ - nat-1234567891234567
+ - nat-7654321987654321
+ region: ap-southeast-2
+ register: specific_ngws
+
+- name: Get all nat gateways with specific filters
+ amazon.aws.ec2_vpc_nat_gateway_info:
+ region: ap-southeast-2
+ filters:
+ state: ['pending']
+ register: pending_ngws
+
+- name: Get nat gateways with specific filter
+ amazon.aws.ec2_vpc_nat_gateway_info:
+ region: ap-southeast-2
+ filters:
+ subnet-id: subnet-12345678
+ state: ['available']
+ register: existing_nat_gateways
+'''
+
+RETURN = r'''
+changed:
+  description: True if listing the NAT gateways succeeds.
+ type: bool
+ returned: always
+ sample: false
+result:
+ description:
+ - The result of the describe, converted to ansible snake case style.
+ - See also U(http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.Client.describe_nat_gateways)
+  returned: success
+ type: list
+ contains:
+ create_time:
+ description: The date and time the NAT gateway was created
+ returned: always
+ type: str
+ sample: "2021-03-11T22:43:25+00:00"
+ delete_time:
+ description: The date and time the NAT gateway was deleted
+ returned: when the NAT gateway has been deleted
+ type: str
+ sample: "2021-03-11T22:43:25+00:00"
+ nat_gateway_addresses:
+      description: List containing a dictionary with the IP addresses and network interface associated with the NAT gateway.
+      returned: always
+      type: list
+ contains:
+ allocation_id:
+ description: The allocation ID of the Elastic IP address that's associated with the NAT gateway
+ returned: always
+ type: str
+ sample: eipalloc-0853e66a40803da76
+ network_interface_id:
+ description: The ID of the network interface associated with the NAT gateway
+ returned: always
+ type: str
+ sample: eni-0a37acdbe306c661c
+ private_ip:
+ description: The private IP address associated with the Elastic IP address
+ returned: always
+ type: str
+ sample: 10.0.238.227
+ public_ip:
+ description: The Elastic IP address associated with the NAT gateway
+ returned: always
+ type: str
+ sample: 34.204.123.52
+ nat_gateway_id:
+ description: The ID of the NAT gateway
+ returned: always
+ type: str
+ sample: nat-0c242a2397acf6173
+ state:
+ description: state of the NAT gateway
+ returned: always
+ type: str
+ sample: available
+ subnet_id:
+ description: The ID of the subnet in which the NAT gateway is located
+ returned: always
+ type: str
+ sample: subnet-098c447465d4344f9
+ vpc_id:
+ description: The ID of the VPC in which the NAT gateway is located
+ returned: always
+ type: str
+ sample: vpc-02f37f48438ab7d4c
+ tags:
+ description: Tags applied to the NAT gateway
+ returned: always
+ type: dict
+ sample:
+ Tag1: tag1
+ Tag_2: tag_2
+'''
+
+
+try:
+ import botocore
+except ImportError:
+ pass # Handled by AnsibleAWSModule
+
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import camel_dict_to_snake_dict
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import ansible_dict_to_boto3_filter_list
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import boto3_tag_list_to_ansible_dict
+from ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_code
+from ansible_collections.amazon.aws.plugins.module_utils.core import normalize_boto3_result
+
+
+@AWSRetry.jittered_backoff(retries=10)
+def _describe_nat_gateways(client, module, **params):
+ try:
+ paginator = client.get_paginator('describe_nat_gateways')
+ return paginator.paginate(**params).build_full_result()['NatGateways']
+ except is_boto3_error_code('InvalidNatGatewayID.NotFound'):
+ module.exit_json(msg="NAT gateway not found.")
+    except is_boto3_error_code('NatGatewayMalformed') as e:  # pylint: disable=duplicate-except
+        module.fail_json_aws(e, msg="NAT gateway id is malformed.")
+
+
+def get_nat_gateways(client, module):
+ params = dict()
+ nat_gateways = list()
+
+ params['Filter'] = ansible_dict_to_boto3_filter_list(module.params.get('filters'))
+ params['NatGatewayIds'] = module.params.get('nat_gateway_ids')
+
+ try:
+ result = normalize_boto3_result(_describe_nat_gateways(client, module, **params))
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, 'Unable to describe NAT gateways.')
+
+ for gateway in result:
+ # Turn the boto3 result into ansible_friendly_snaked_names
+ converted_gateway = camel_dict_to_snake_dict(gateway)
+ if 'tags' in converted_gateway:
+ # Turn the boto3 result into ansible friendly tag dictionary
+ converted_gateway['tags'] = boto3_tag_list_to_ansible_dict(converted_gateway['tags'])
+ nat_gateways.append(converted_gateway)
+
+ return nat_gateways
+
+
+def main():
+ argument_spec = dict(
+ filters=dict(default={}, type='dict'),
+ nat_gateway_ids=dict(default=[], type='list', elements='str'),
+ )
+
+ module = AnsibleAWSModule(argument_spec=argument_spec,
+ supports_check_mode=True,)
+ if module._name == 'ec2_vpc_nat_gateway_facts':
+ module.deprecate("The 'ec2_vpc_nat_gateway_facts' module has been renamed to 'ec2_vpc_nat_gateway_info'",
+ date='2021-12-01', collection_name='amazon.aws')
+
+ try:
+ connection = module.client('ec2', retry_decorator=AWSRetry.jittered_backoff())
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg='Failed to connect to AWS')
+
+ results = get_nat_gateways(connection, module)
+
+ module.exit_json(result=results)
+
+
+if __name__ == '__main__':
+ main()
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_nat_gateway_info.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_nat_gateway_info.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_nat_gateway_info.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_nat_gateway_info.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,218 @@
+#!/usr/bin/python
+# Copyright: Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import absolute_import, division, print_function
+__metaclass__ = type
+
+
+DOCUMENTATION = r'''
+module: ec2_vpc_nat_gateway_info
+short_description: Retrieves AWS VPC Managed NAT Gateway details using AWS methods
+version_added: 1.0.0
+description:
+  - Gets various details related to AWS VPC Managed NAT Gateways.
+ - This module was called C(ec2_vpc_nat_gateway_facts) before Ansible 2.9. The usage did not change.
+options:
+ nat_gateway_ids:
+ description:
+      - List of specific NAT gateway IDs to fetch details for.
+ type: list
+ elements: str
+ filters:
+ description:
+ - A dict of filters to apply. Each dict item consists of a filter key and a filter value.
+ See U(https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeNatGateways.html)
+ for possible filters.
+ type: dict
+author: Karen Cheng (@Etherdaemon)
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+'''
+
+EXAMPLES = r'''
+# Simple example of listing all nat gateways
+- name: List all managed nat gateways in ap-southeast-2
+ amazon.aws.ec2_vpc_nat_gateway_info:
+ region: ap-southeast-2
+ register: all_ngws
+
+- name: Debugging the result
+ ansible.builtin.debug:
+ msg: "{{ all_ngws.result }}"
+
+- name: Get details on specific nat gateways
+ amazon.aws.ec2_vpc_nat_gateway_info:
+ nat_gateway_ids:
+ - nat-1234567891234567
+ - nat-7654321987654321
+ region: ap-southeast-2
+ register: specific_ngws
+
+- name: Get all nat gateways with specific filters
+ amazon.aws.ec2_vpc_nat_gateway_info:
+ region: ap-southeast-2
+ filters:
+ state: ['pending']
+ register: pending_ngws
+
+- name: Get nat gateways with specific filter
+ amazon.aws.ec2_vpc_nat_gateway_info:
+ region: ap-southeast-2
+ filters:
+ subnet-id: subnet-12345678
+ state: ['available']
+ register: existing_nat_gateways
+'''
+
+RETURN = r'''
+changed:
+  description: True if listing the NAT gateways succeeds.
+ type: bool
+ returned: always
+ sample: false
+result:
+ description:
+ - The result of the describe, converted to ansible snake case style.
+ - See also U(http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.Client.describe_nat_gateways)
+  returned: success
+ type: list
+ contains:
+ create_time:
+ description: The date and time the NAT gateway was created
+ returned: always
+ type: str
+ sample: "2021-03-11T22:43:25+00:00"
+ delete_time:
+ description: The date and time the NAT gateway was deleted
+ returned: when the NAT gateway has been deleted
+ type: str
+ sample: "2021-03-11T22:43:25+00:00"
+ nat_gateway_addresses:
+ description: List containing a dictionary with the IP addresses and network interface associated with the NAT gateway
+ returned: always
+      type: list
+ contains:
+ allocation_id:
+ description: The allocation ID of the Elastic IP address that's associated with the NAT gateway
+ returned: always
+ type: str
+ sample: eipalloc-0853e66a40803da76
+ network_interface_id:
+ description: The ID of the network interface associated with the NAT gateway
+ returned: always
+ type: str
+ sample: eni-0a37acdbe306c661c
+ private_ip:
+ description: The private IP address associated with the Elastic IP address
+ returned: always
+ type: str
+ sample: 10.0.238.227
+ public_ip:
+ description: The Elastic IP address associated with the NAT gateway
+ returned: always
+ type: str
+ sample: 34.204.123.52
+ nat_gateway_id:
+ description: The ID of the NAT gateway
+ returned: always
+ type: str
+ sample: nat-0c242a2397acf6173
+ state:
+ description: state of the NAT gateway
+ returned: always
+ type: str
+ sample: available
+ subnet_id:
+ description: The ID of the subnet in which the NAT gateway is located
+ returned: always
+ type: str
+ sample: subnet-098c447465d4344f9
+ vpc_id:
+ description: The ID of the VPC in which the NAT gateway is located
+ returned: always
+ type: str
+ sample: vpc-02f37f48438ab7d4c
+ tags:
+ description: Tags applied to the NAT gateway
+ returned: always
+ type: dict
+ sample:
+ Tag1: tag1
+ Tag_2: tag_2
+'''
+
+
+try:
+ import botocore
+except ImportError:
+ pass # Handled by AnsibleAWSModule
+
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import camel_dict_to_snake_dict
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import ansible_dict_to_boto3_filter_list
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import boto3_tag_list_to_ansible_dict
+from ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_code
+from ansible_collections.amazon.aws.plugins.module_utils.core import normalize_boto3_result
+
+
+@AWSRetry.jittered_backoff(retries=10)
+def _describe_nat_gateways(client, module, **params):
+ try:
+ paginator = client.get_paginator('describe_nat_gateways')
+ return paginator.paginate(**params).build_full_result()['NatGateways']
+ except is_boto3_error_code('InvalidNatGatewayID.NotFound'):
+ module.exit_json(msg="NAT gateway not found.")
+    except is_boto3_error_code('NatGatewayMalformed') as e:  # pylint: disable=duplicate-except
+        module.fail_json_aws(e, msg="NAT gateway id is malformed.")
+
+
+def get_nat_gateways(client, module):
+ params = dict()
+ nat_gateways = list()
+
+ params['Filter'] = ansible_dict_to_boto3_filter_list(module.params.get('filters'))
+ params['NatGatewayIds'] = module.params.get('nat_gateway_ids')
+
+ try:
+ result = normalize_boto3_result(_describe_nat_gateways(client, module, **params))
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, 'Unable to describe NAT gateways.')
+
+ for gateway in result:
+ # Turn the boto3 result into ansible_friendly_snaked_names
+ converted_gateway = camel_dict_to_snake_dict(gateway)
+ if 'tags' in converted_gateway:
+ # Turn the boto3 result into ansible friendly tag dictionary
+ converted_gateway['tags'] = boto3_tag_list_to_ansible_dict(converted_gateway['tags'])
+ nat_gateways.append(converted_gateway)
+
+ return nat_gateways
+
+
+def main():
+ argument_spec = dict(
+ filters=dict(default={}, type='dict'),
+ nat_gateway_ids=dict(default=[], type='list', elements='str'),
+ )
+
+ module = AnsibleAWSModule(argument_spec=argument_spec,
+ supports_check_mode=True,)
+ if module._name == 'ec2_vpc_nat_gateway_facts':
+ module.deprecate("The 'ec2_vpc_nat_gateway_facts' module has been renamed to 'ec2_vpc_nat_gateway_info'",
+ date='2021-12-01', collection_name='amazon.aws')
+
+ try:
+ connection = module.client('ec2', retry_decorator=AWSRetry.jittered_backoff())
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg='Failed to connect to AWS')
+
+ results = get_nat_gateways(connection, module)
+
+ module.exit_json(result=results)
+
+
+if __name__ == '__main__':
+ main()
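`get_nat_gateways` post-processes each boto3 result with `camel_dict_to_snake_dict` and `boto3_tag_list_to_ansible_dict`. A rough self-contained sketch of what those two conversions do (the helpers below are simplified stand-ins for illustration, not the module_utils implementations):

```python
import re


def camel_to_snake(name):
    # Insert an underscore at each lower/upper boundary, then lower-case:
    # "NatGatewayId" -> "nat_gateway_id" (simplified sketch).
    return re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "_", name).lower()


def boto3_tags_to_dict(tag_list):
    # Flatten boto3's [{'Key': ..., 'Value': ...}] tag list into a plain dict.
    return {t["Key"]: t["Value"] for t in tag_list}


gateway = {
    "NatGatewayId": "nat-0c242a2397acf6173",
    "SubnetId": "subnet-098c447465d4344f9",
    "Tags": [{"Key": "Name", "Value": "prod-ngw"}],
}
converted = {camel_to_snake(k): v for k, v in gateway.items()}
converted["tags"] = boto3_tags_to_dict(converted.pop("tags"))
print(converted)
```

This mirrors why the module's `result` uses snake_case keys and a plain `tags` dict even though the EC2 API returns CamelCase keys and a tag list.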
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_nat_gateway.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_nat_gateway.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_nat_gateway.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_nat_gateway.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,958 @@
+#!/usr/bin/python
+# Copyright: Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import absolute_import, division, print_function
+__metaclass__ = type
+
+
+DOCUMENTATION = r'''
+---
+module: ec2_vpc_nat_gateway
+version_added: 1.0.0
+short_description: Manage AWS VPC NAT Gateways.
+description:
+ - Ensure the state of AWS VPC NAT Gateways based on their id, allocation and subnet ids.
+options:
+ state:
+ description:
+ - Ensure NAT Gateway is present or absent.
+ default: "present"
+ choices: ["present", "absent"]
+ type: str
+ nat_gateway_id:
+ description:
+      - The id AWS dynamically allocates to the NAT Gateway on creation.
+        This is required when I(state=absent).
+ type: str
+ subnet_id:
+ description:
+      - The id of the subnet to create the NAT Gateway in. This is required
+        when I(state=present).
+ type: str
+ allocation_id:
+ description:
+      - The id of the elastic IP allocation. If neither this nor I(eip_address)
+        is passed, an EIP is generated for this NAT Gateway.
+ type: str
+ eip_address:
+ description:
+ - The elastic IP address of the EIP you want attached to this NAT Gateway.
+ If this is not passed and the allocation_id is not passed,
+ an EIP is generated for this NAT Gateway.
+ type: str
+ if_exist_do_not_create:
+ description:
+      - If a NAT Gateway already exists in the I(subnet_id), do not create a new one.
+ required: false
+ default: false
+ type: bool
+ tags:
+ description:
+ - A dict of tags to apply to the NAT gateway.
+ - To remove all tags set I(tags={}) and I(purge_tags=true).
+ aliases: [ 'resource_tags' ]
+ type: dict
+ version_added: 1.4.0
+ purge_tags:
+ description:
+ - Remove tags not listed in I(tags).
+ type: bool
+ default: true
+ version_added: 1.4.0
+ release_eip:
+ description:
+ - Deallocate the EIP from the VPC.
+ - Option is only valid with the absent state.
+      - You should use this with the I(wait) option, since you cannot release an address while a delete operation is in progress.
+ default: false
+ type: bool
+ wait:
+ description:
+ - Wait for operation to complete before returning.
+ default: false
+ type: bool
+ wait_timeout:
+ description:
+ - How many seconds to wait for an operation to complete before timing out.
+ default: 320
+ type: int
+ client_token:
+ description:
+      - Optional unique token to be used during create to ensure idempotency.
+        When specifying this option, ensure you specify the I(eip_address) parameter
+        as well, otherwise any subsequent runs will fail.
+ type: str
+author:
+ - Allen Sanabria (@linuxdynasty)
+ - Jon Hadfield (@jonhadfield)
+ - Karen Cheng (@Etherdaemon)
+ - Alina Buzachis (@alinabuzachis)
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+'''
+
+EXAMPLES = r'''
+# Note: These examples do not set authentication details, see the AWS Guide for details.
+
+- name: Create new nat gateway with client token.
+ amazon.aws.ec2_vpc_nat_gateway:
+ state: present
+ subnet_id: subnet-12345678
+ eip_address: 52.1.1.1
+ region: ap-southeast-2
+ client_token: abcd-12345678
+ register: new_nat_gateway
+
+- name: Create new nat gateway using an allocation-id.
+ amazon.aws.ec2_vpc_nat_gateway:
+ state: present
+ subnet_id: subnet-12345678
+ allocation_id: eipalloc-12345678
+ region: ap-southeast-2
+ register: new_nat_gateway
+
+- name: Create new nat gateway, using an EIP address and wait for available status.
+ amazon.aws.ec2_vpc_nat_gateway:
+ state: present
+ subnet_id: subnet-12345678
+ eip_address: 52.1.1.1
+ wait: true
+ region: ap-southeast-2
+ register: new_nat_gateway
+
+- name: Create new nat gateway and allocate new EIP.
+ amazon.aws.ec2_vpc_nat_gateway:
+ state: present
+ subnet_id: subnet-12345678
+ wait: true
+ region: ap-southeast-2
+ register: new_nat_gateway
+
+- name: Create new nat gateway and allocate new EIP if a nat gateway does not yet exist in the subnet.
+ amazon.aws.ec2_vpc_nat_gateway:
+ state: present
+ subnet_id: subnet-12345678
+ wait: true
+ region: ap-southeast-2
+ if_exist_do_not_create: true
+ register: new_nat_gateway
+
+- name: Delete nat gateway using discovered nat gateways from facts module.
+ amazon.aws.ec2_vpc_nat_gateway:
+ state: absent
+ region: ap-southeast-2
+ wait: true
+ nat_gateway_id: "{{ item.NatGatewayId }}"
+ release_eip: true
+ register: delete_nat_gateway_result
+ loop: "{{ gateways_to_remove.result }}"
+
+- name: Delete nat gateway and wait for deleted status.
+ amazon.aws.ec2_vpc_nat_gateway:
+ state: absent
+ nat_gateway_id: nat-12345678
+ wait: true
+ wait_timeout: 500
+ region: ap-southeast-2
+
+- name: Delete nat gateway and release EIP.
+ amazon.aws.ec2_vpc_nat_gateway:
+ state: absent
+ nat_gateway_id: nat-12345678
+ release_eip: true
+    wait: true
+ wait_timeout: 300
+ region: ap-southeast-2
+
+- name: Create new nat gateway using allocation-id and tags.
+ amazon.aws.ec2_vpc_nat_gateway:
+ state: present
+ subnet_id: subnet-12345678
+ allocation_id: eipalloc-12345678
+ region: ap-southeast-2
+ tags:
+ Tag1: tag1
+ Tag2: tag2
+ register: new_nat_gateway
+
+- name: Update tags without purge
+ amazon.aws.ec2_vpc_nat_gateway:
+ subnet_id: subnet-12345678
+ allocation_id: eipalloc-12345678
+ region: ap-southeast-2
+    purge_tags: false
+    tags:
+      Tag3: tag3
+    wait: true
+ register: update_tags_nat_gateway
+'''
+
+RETURN = r'''
+create_time:
+  description: The date and time the NAT gateway was created, in ISO 8601 format (UTC).
+  returned: In all cases.
+  type: str
+  sample: "2016-03-05T05:19:20.282000+00:00"
+nat_gateway_id:
+ description: id of the VPC NAT Gateway
+ returned: In all cases.
+ type: str
+ sample: "nat-0d1e3a878585988f8"
+subnet_id:
+ description: id of the Subnet
+ returned: In all cases.
+ type: str
+ sample: "subnet-12345"
+state:
+ description: The current state of the NAT Gateway.
+ returned: In all cases.
+ type: str
+ sample: "available"
+tags:
+  description: The tags associated with the VPC NAT Gateway.
+ type: dict
+ returned: When tags are present.
+ sample:
+ tags:
+ "Ansible": "Test"
+vpc_id:
+ description: id of the VPC.
+ returned: In all cases.
+ type: str
+ sample: "vpc-12345"
+nat_gateway_addresses:
+ description: List of dictionaries containing the public_ip, network_interface_id, private_ip, and allocation_id.
+ returned: In all cases.
+  type: list
+ sample: [
+ {
+ 'public_ip': '52.52.52.52',
+ 'network_interface_id': 'eni-12345',
+ 'private_ip': '10.0.0.100',
+ 'allocation_id': 'eipalloc-12345'
+ }
+ ]
+'''
+
+import datetime
+
+try:
+ import botocore
+except ImportError:
+ pass # Handled by AnsibleAWSModule
+
+from ..module_utils.core import AnsibleAWSModule
+from ..module_utils.core import is_boto3_error_code
+from ..module_utils.waiters import get_waiter
+from ..module_utils.ec2 import AWSRetry
+from ..module_utils.ec2 import camel_dict_to_snake_dict
+from ..module_utils.ec2 import describe_ec2_tags
+from ..module_utils.ec2 import ensure_ec2_tags
+from ..module_utils.tagging import boto3_tag_specifications
+
+
+@AWSRetry.jittered_backoff(retries=10)
+def _describe_nat_gateways(client, **params):
+ try:
+ paginator = client.get_paginator('describe_nat_gateways')
+ return paginator.paginate(**params).build_full_result()['NatGateways']
+ except is_boto3_error_code('InvalidNatGatewayID.NotFound'):
+ return None
+
+
+def wait_for_status(client, module, waiter_name, nat_gateway_id):
+ wait_timeout = module.params.get('wait_timeout')
+ try:
+ waiter = get_waiter(client, waiter_name)
+ attempts = 1 + int(wait_timeout / waiter.config.delay)
+ waiter.wait(
+ NatGatewayIds=[nat_gateway_id],
+ WaiterConfig={'MaxAttempts': attempts}
+ )
+ except botocore.exceptions.WaiterError as e:
+ module.fail_json_aws(e, msg="NAT gateway failed to reach expected state.")
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg="Unable to wait for NAT gateway state to update.")
+
+
+def get_nat_gateways(client, module, subnet_id=None, nat_gateway_id=None, states=None):
+ """Retrieve a list of NAT Gateways
+ Args:
+ client (botocore.client.EC2): Boto3 client
+ module: AnsibleAWSModule class instance
+
+ Kwargs:
+ subnet_id (str): The subnet_id the nat resides in.
+ nat_gateway_id (str): The Amazon NAT id.
+ states (list): States available (pending, failed, available, deleting, and deleted)
+ default=None
+
+ Basic Usage:
+ >>> client = boto3.client('ec2')
+ >>> module = AnsibleAWSModule(...)
+ >>> subnet_id = 'subnet-12345678'
+ >>> get_nat_gateways(client, module, subnet_id)
+ [
+ {
+ "create_time": "2016-03-05T00:33:21.209000+00:00",
+ "delete_time": "2016-03-05T00:36:37.329000+00:00",
+ "nat_gateway_addresses": [
+ {
+ "public_ip": "55.55.55.55",
+ "network_interface_id": "eni-1234567",
+ "private_ip": "10.0.0.102",
+ "allocation_id": "eipalloc-1234567"
+ }
+ ],
+ "nat_gateway_id": "nat-123456789",
+ "state": "deleted",
+ "subnet_id": "subnet-123456789",
+ "tags": {},
+ "vpc_id": "vpc-12345678"
+ }
+ ]
+
+ Returns:
+ list
+ """
+
+ params = dict()
+ existing_gateways = list()
+
+ if not states:
+ states = ['available', 'pending']
+ if nat_gateway_id:
+ params['NatGatewayIds'] = [nat_gateway_id]
+ else:
+ params['Filter'] = [
+ {
+ 'Name': 'subnet-id',
+ 'Values': [subnet_id]
+ },
+ {
+ 'Name': 'state',
+ 'Values': states
+ }
+ ]
+
+ try:
+ gateways = _describe_nat_gateways(client, **params)
+ if gateways:
+ for gw in gateways:
+ existing_gateways.append(camel_dict_to_snake_dict(gw))
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e)
+
+ return existing_gateways
+
+
+def gateway_in_subnet_exists(client, module, subnet_id, allocation_id=None):
+ """Retrieve all NAT Gateways for a subnet.
+ Args:
+ client (botocore.client.EC2): Boto3 client
+ module: AnsibleAWSModule class instance
+ subnet_id (str): The subnet_id the nat resides in.
+
+ Kwargs:
+ allocation_id (str): The EIP Amazon identifier.
+ default = None
+
+ Basic Usage:
+ >>> client = boto3.client('ec2')
+ >>> module = AnsibleAWSModule(...)
+ >>> subnet_id = 'subnet-1234567'
+ >>> allocation_id = 'eipalloc-1234567'
+ >>> gateway_in_subnet_exists(client, module, subnet_id, allocation_id)
+ (
+ [
+ {
+ "create_time": "2016-03-05T00:33:21.209000+00:00",
+ "delete_time": "2016-03-05T00:36:37.329000+00:00",
+ "nat_gateway_addresses": [
+ {
+ "public_ip": "55.55.55.55",
+ "network_interface_id": "eni-1234567",
+ "private_ip": "10.0.0.102",
+ "allocation_id": "eipalloc-1234567"
+ }
+ ],
+ "nat_gateway_id": "nat-123456789",
+ "state": "deleted",
+ "subnet_id": "subnet-123456789",
+ "tags": {},
+ "vpc_id": "vpc-1234567"
+ }
+ ],
+ False
+ )
+
+ Returns:
+ Tuple (list, bool)
+ """
+
+ allocation_id_exists = False
+ gateways = []
+ states = ['available', 'pending']
+
+ gws_retrieved = (get_nat_gateways(client, module, subnet_id, states=states))
+
+ if gws_retrieved:
+ for gw in gws_retrieved:
+ for address in gw['nat_gateway_addresses']:
+ if allocation_id:
+ if address.get('allocation_id') == allocation_id:
+ allocation_id_exists = True
+ gateways.append(gw)
+ else:
+ gateways.append(gw)
+
+ return gateways, allocation_id_exists
+
+
+def get_eip_allocation_id_by_address(client, module, eip_address):
+    """Get the allocation id of an EIP by its public IP address
+ Args:
+ client (botocore.client.EC2): Boto3 client
+ module: AnsibleAWSModule class instance
+ eip_address (str): The Elastic IP Address of the EIP.
+
+ Basic Usage:
+ >>> client = boto3.client('ec2')
+ >>> module = AnsibleAWSModule(...)
+ >>> eip_address = '52.87.29.36'
+ >>> get_eip_allocation_id_by_address(client, module, eip_address)
+ (
+ 'eipalloc-36014da3', ''
+ )
+
+ Returns:
+ Tuple (str, str)
+ """
+
+ params = {
+ 'PublicIps': [eip_address],
+ }
+ allocation_id = None
+ msg = ''
+
+ try:
+ allocations = client.describe_addresses(aws_retry=True, **params)['Addresses']
+
+ if len(allocations) == 1:
+ allocation = allocations[0]
+ else:
+ allocation = None
+
+ if allocation:
+ if allocation.get('Domain') != 'vpc':
+ msg = (
+ "EIP {0} is a non-VPC EIP, please allocate a VPC scoped EIP"
+ .format(eip_address)
+ )
+ else:
+ allocation_id = allocation.get('AllocationId')
+
+ except is_boto3_error_code('InvalidAddress.Malformed') as e:
+ module.fail_json(msg='EIP address {0} is invalid.'.format(eip_address))
+ except is_boto3_error_code('InvalidAddress.NotFound') as e: # pylint: disable=duplicate-except
+ msg = (
+ "EIP {0} does not exist".format(eip_address)
+ )
+ allocation_id = None
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e: # pylint: disable=duplicate-except
+ module.fail_json_aws(e)
+
+ return allocation_id, msg
+
+
+def allocate_eip_address(client, module):
+    """Allocate an EIP address from your EIP pool
+ Args:
+ client (botocore.client.EC2): Boto3 client
+ module: AnsibleAWSModule class instance
+
+ Basic Usage:
+ >>> client = boto3.client('ec2')
+ >>> module = AnsibleAWSModule(...)
+ >>> allocate_eip_address(client, module)
+ (
+ True, '', ''
+ )
+
+ Returns:
+ Tuple (bool, str, str)
+ """
+
+ new_eip = None
+ msg = ''
+ params = {
+ 'Domain': 'vpc',
+ }
+
+ if module.check_mode:
+ ip_allocated = True
+ new_eip = None
+ return ip_allocated, msg, new_eip
+
+ try:
+ new_eip = client.allocate_address(aws_retry=True, **params)['AllocationId']
+ ip_allocated = True
+ msg = 'eipalloc id {0} created'.format(new_eip)
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e)
+
+ return ip_allocated, msg, new_eip
+
+
+def release_address(client, module, allocation_id):
+ """Release an EIP from your EIP Pool
+ Args:
+ client (botocore.client.EC2): Boto3 client
+ module: AnsibleAWSModule class instance
+ allocation_id (str): The eip Amazon identifier.
+
+ Basic Usage:
+ >>> client = boto3.client('ec2')
+ >>> module = AnsibleAWSModule(...)
+ >>> allocation_id = "eipalloc-123456"
+ >>> release_address(client, module, allocation_id)
+ (
+ True, ''
+ )
+
+ Returns:
+ Tuple (bool, str)
+ """
+
+ msg = ''
+
+ if module.check_mode:
+ return True, ''
+
+ ip_released = False
+
+ try:
+ client.describe_addresses(aws_retry=True, AllocationIds=[allocation_id])
+ except is_boto3_error_code('InvalidAllocationID.NotFound') as e:
+ # IP address likely already released
+ # Happens with gateway in 'deleted' state that
+ # still lists associations
+ return True, e
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e: # pylint: disable=duplicate-except
+ module.fail_json_aws(e)
+
+ try:
+ client.release_address(aws_retry=True, AllocationId=allocation_id)
+ ip_released = True
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e)
+
+ return ip_released, msg
+
+
+def create(client, module, subnet_id, allocation_id, tags, client_token=None,
+ wait=False):
+ """Create an Amazon NAT Gateway.
+ Args:
+ client (botocore.client.EC2): Boto3 client
+ module: AnsibleAWSModule class instance
+ subnet_id (str): The subnet_id the nat resides in
+ allocation_id (str): The eip Amazon identifier
+        tags (dict): Tags to associate to the NAT gateway
+
+ Kwargs:
+        wait (bool): Wait for the nat gateway to be in the available state before returning.
+ default = False
+ client_token (str):
+ default = None
+
+ Basic Usage:
+ >>> client = boto3.client('ec2')
+ >>> module = AnsibleAWSModule(...)
+ >>> subnet_id = 'subnet-1234567'
+ >>> allocation_id = 'eipalloc-1234567'
+ >>> create(client, module, subnet_id, allocation_id, wait=True)
+ [
+ true,
+ {
+ "create_time": "2016-03-05T00:33:21.209000+00:00",
+ "delete_time": "2016-03-05T00:36:37.329000+00:00",
+ "nat_gateway_addresses": [
+ {
+ "public_ip": "55.55.55.55",
+ "network_interface_id": "eni-1234567",
+ "private_ip": "10.0.0.102",
+ "allocation_id": "eipalloc-1234567"
+ }
+ ],
+ "nat_gateway_id": "nat-123456789",
+ "state": "deleted",
+ "subnet_id": "subnet-1234567",
+ "tags": {},
+ "vpc_id": "vpc-1234567"
+ },
+ ""
+ ]
+
+ Returns:
+ Tuple (bool, str, list)
+ """
+
+ params = {
+ 'SubnetId': subnet_id,
+ 'AllocationId': allocation_id
+ }
+ request_time = datetime.datetime.utcnow()
+ changed = False
+ token_provided = False
+ result = {}
+ msg = ''
+
+ if client_token:
+ token_provided = True
+ params['ClientToken'] = client_token
+
+ if tags:
+ params["TagSpecifications"] = boto3_tag_specifications(tags, ['natgateway'])
+
+ if module.check_mode:
+ changed = True
+ return changed, result, msg
+
+ try:
+ result = camel_dict_to_snake_dict(
+ client.create_nat_gateway(aws_retry=True, **params)["NatGateway"]
+ )
+ changed = True
+
+ create_time = result['create_time'].replace(tzinfo=None)
+
+ if token_provided and (request_time > create_time):
+ changed = False
+
+ elif wait and result.get('state') != 'available':
+ wait_for_status(client, module, 'nat_gateway_available', result['nat_gateway_id'])
+
+ # Get new result
+ result = camel_dict_to_snake_dict(
+ _describe_nat_gateways(client, NatGatewayIds=[result['nat_gateway_id']])[0]
+ )
+
+ except is_boto3_error_code('IdempotentParameterMismatch') as e:
+ msg = (
+            'NAT Gateway does not support update and token has already been provided: ' + str(e)
+ )
+ changed = False
+ result = None
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e: # pylint: disable=duplicate-except
+ module.fail_json_aws(e)
+
+ result['tags'] = describe_ec2_tags(client, module, result['nat_gateway_id'],
+ resource_type='natgateway')
+
+ return changed, result, msg
+
+
+def pre_create(client, module, subnet_id, tags, purge_tags, allocation_id=None, eip_address=None,
+ if_exist_do_not_create=False, wait=False, client_token=None):
+ """Create an Amazon NAT Gateway.
+ Args:
+ client (botocore.client.EC2): Boto3 client
+ module: AnsibleAWSModule class instance
+ subnet_id (str): The subnet_id the nat resides in
+ tags (dict): Tags to associate to the NAT gateway
+ purge_tags (bool): If true, remove tags not listed in I(tags)
+
+ Kwargs:
+ allocation_id (str): The EIP Amazon identifier.
+ default = None
+ eip_address (str): The Elastic IP Address of the EIP.
+ default = None
+        if_exist_do_not_create (bool): if a nat gateway already exists in this
+            subnet, then do not create another one.
+ default = False
+        wait (bool): Wait for the nat gateway to be in the available state before returning.
+ default = False
+ client_token (str):
+ default = None
+
+ Basic Usage:
+ >>> client = boto3.client('ec2')
+ >>> module = AnsibleAWSModule(...)
+ >>> subnet_id = 'subnet-w4t12897'
+ >>> allocation_id = 'eipalloc-36014da3'
+ >>> pre_create(client, module, subnet_id, allocation_id, if_exist_do_not_create=True, wait=True)
+ [
+ true,
+ "",
+ {
+ "create_time": "2016-03-05T00:33:21.209000+00:00",
+ "delete_time": "2016-03-05T00:36:37.329000+00:00",
+ "nat_gateway_addresses": [
+ {
+ "public_ip": "52.87.29.36",
+ "network_interface_id": "eni-5579742d",
+ "private_ip": "10.0.0.102",
+ "allocation_id": "eipalloc-36014da3"
+ }
+ ],
+ "nat_gateway_id": "nat-03835afb6e31df79b",
+ "state": "deleted",
+ "subnet_id": "subnet-w4t12897",
+ "tags": {},
+ "vpc_id": "vpc-w68571b5"
+ }
+ ]
+
+ Returns:
+ Tuple (bool, str, list)
+ """
+
+ changed = False
+ msg = ''
+ results = {}
+
+ if not allocation_id and not eip_address:
+ existing_gateways, allocation_id_exists = (
+ gateway_in_subnet_exists(client, module, subnet_id)
+ )
+
+ if len(existing_gateways) > 0 and if_exist_do_not_create:
+ results = existing_gateways[0]
+ changed |= ensure_ec2_tags(client, module, results['nat_gateway_id'],
+ resource_type='natgateway', tags=tags,
+ purge_tags=purge_tags)
+
+ results['tags'] = describe_ec2_tags(client, module, results['nat_gateway_id'],
+ resource_type='natgateway')
+
+ if changed:
+ return changed, msg, results
+
+ changed = False
+ msg = (
+ 'NAT Gateway {0} already exists in subnet_id {1}'
+ .format(
+ existing_gateways[0]['nat_gateway_id'], subnet_id
+ )
+ )
+ return changed, msg, results
+ else:
+ changed, msg, allocation_id = (
+ allocate_eip_address(client, module)
+ )
+
+ if not changed:
+ return changed, msg, dict()
+
+ elif eip_address or allocation_id:
+ if eip_address and not allocation_id:
+ allocation_id, msg = (
+ get_eip_allocation_id_by_address(
+ client, module, eip_address
+ )
+ )
+ if not allocation_id:
+ changed = False
+ return changed, msg, dict()
+
+ existing_gateways, allocation_id_exists = (
+ gateway_in_subnet_exists(
+ client, module, subnet_id, allocation_id
+ )
+ )
+
+ if len(existing_gateways) > 0 and (allocation_id_exists or if_exist_do_not_create):
+ results = existing_gateways[0]
+ changed |= ensure_ec2_tags(client, module, results['nat_gateway_id'],
+ resource_type='natgateway', tags=tags,
+ purge_tags=purge_tags)
+
+ results['tags'] = describe_ec2_tags(client, module, results['nat_gateway_id'],
+ resource_type='natgateway')
+
+ if changed:
+ return changed, msg, results
+
+ changed = False
+ msg = (
+ 'NAT Gateway {0} already exists in subnet_id {1}'
+ .format(
+ existing_gateways[0]['nat_gateway_id'], subnet_id
+ )
+ )
+ return changed, msg, results
+
+ changed, results, msg = create(
+ client, module, subnet_id, allocation_id, tags, client_token, wait
+ )
+
+ return changed, msg, results
+
+
+def remove(client, module, nat_gateway_id, wait=False, release_eip=False):
+ """Delete an Amazon NAT Gateway.
+ Args:
+ client (botocore.client.EC2): Boto3 client
+ module: AnsibleAWSModule class instance
+ nat_gateway_id (str): The Amazon nat id
+
+ Kwargs:
+ wait (bool): Wait for the nat to be in the deleted state before returning.
+ release_eip (bool): Once the nat has been deleted, you can deallocate the eip from the vpc.
+
+ Basic Usage:
+ >>> client = boto3.client('ec2')
+ >>> module = AnsibleAWSModule(...)
+ >>> nat_gw_id = 'nat-03835afb6e31df79b'
+ >>> remove(client, module, nat_gw_id, wait=True, release_eip=True)
+ [
+ true,
+ "",
+ {
+ "create_time": "2016-03-05T00:33:21.209000+00:00",
+ "delete_time": "2016-03-05T00:36:37.329000+00:00",
+ "nat_gateway_addresses": [
+ {
+ "public_ip": "52.87.29.36",
+ "network_interface_id": "eni-5579742d",
+ "private_ip": "10.0.0.102",
+ "allocation_id": "eipalloc-36014da3"
+ }
+ ],
+ "nat_gateway_id": "nat-03835afb6e31df79b",
+ "state": "deleted",
+ "subnet_id": "subnet-w4t12897",
+ "tags": {},
+ "vpc_id": "vpc-w68571b5"
+ }
+ ]
+
+ Returns:
+ Tuple (bool, str, list)
+ """
+
+ params = {
+ 'NatGatewayId': nat_gateway_id
+ }
+ changed = False
+ results = {}
+ states = ['pending', 'available']
+ msg = ''
+
+ if module.check_mode:
+ changed = True
+ return changed, msg, results
+
+ try:
+ gw_list = (
+ get_nat_gateways(
+ client, module, nat_gateway_id=nat_gateway_id,
+ states=states
+ )
+ )
+
+ if len(gw_list) == 1:
+ results = gw_list[0]
+ client.delete_nat_gateway(aws_retry=True, **params)
+ allocation_id = (
+ results['nat_gateway_addresses'][0]['allocation_id']
+ )
+ changed = True
+ msg = (
+ 'NAT gateway {0} is in a deleting state. Delete was successful'
+ .format(nat_gateway_id)
+ )
+
+ if wait and results.get('state') != 'deleted':
+ wait_for_status(client, module, 'nat_gateway_deleted', nat_gateway_id)
+
+ # Get new results
+ results = camel_dict_to_snake_dict(
+ _describe_nat_gateways(client, NatGatewayIds=[nat_gateway_id])[0]
+ )
+ results['tags'] = describe_ec2_tags(client, module, nat_gateway_id,
+ resource_type='natgateway')
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e)
+
+ if release_eip:
+ eip_released, msg = (
+ release_address(client, module, allocation_id))
+ if not eip_released:
+ module.fail_json(
+ msg="Failed to release EIP {0}: {1}".format(allocation_id, msg)
+ )
+
+ return changed, msg, results
+
+
+def main():
+ argument_spec = dict(
+ subnet_id=dict(type='str'),
+ eip_address=dict(type='str'),
+ allocation_id=dict(type='str'),
+ if_exist_do_not_create=dict(type='bool', default=False),
+ state=dict(default='present', choices=['present', 'absent']),
+ wait=dict(type='bool', default=False),
+ wait_timeout=dict(type='int', default=320, required=False),
+ release_eip=dict(type='bool', default=False),
+ nat_gateway_id=dict(type='str'),
+ client_token=dict(type='str', no_log=False),
+ tags=dict(required=False, type='dict', aliases=['resource_tags']),
+ purge_tags=dict(default=True, type='bool'),
+ )
+
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ supports_check_mode=True,
+ mutually_exclusive=[
+ ['allocation_id', 'eip_address']
+ ],
+ required_if=[['state', 'absent', ['nat_gateway_id']],
+ ['state', 'present', ['subnet_id']]],
+ )
+
+ state = module.params.get('state').lower()
+ subnet_id = module.params.get('subnet_id')
+ allocation_id = module.params.get('allocation_id')
+ eip_address = module.params.get('eip_address')
+ nat_gateway_id = module.params.get('nat_gateway_id')
+ wait = module.params.get('wait')
+ release_eip = module.params.get('release_eip')
+ client_token = module.params.get('client_token')
+ if_exist_do_not_create = module.params.get('if_exist_do_not_create')
+ tags = module.params.get('tags')
+ purge_tags = module.params.get('purge_tags')
+
+ try:
+ client = module.client('ec2', retry_decorator=AWSRetry.jittered_backoff())
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg='Failed to connect to AWS.')
+
+ changed = False
+ msg = ''
+
+ if state == 'present':
+ changed, msg, results = (
+ pre_create(
+ client, module, subnet_id, tags, purge_tags, allocation_id, eip_address,
+ if_exist_do_not_create, wait, client_token
+ )
+ )
+ else:
+ changed, msg, results = (
+ remove(
+ client, module, nat_gateway_id, wait, release_eip
+ )
+ )
+
+ module.exit_json(msg=msg, changed=changed, **results)
+
+
+if __name__ == '__main__':
+ main()
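The `remove()` and `main()` flows above return results converted from boto3's CamelCase keys to snake_case via `camel_dict_to_snake_dict`. A rough, self-contained sketch of that conversion follows; it is an assumption about the helper's behavior, not the actual `ansible.module_utils.common.dict_transformations` code.

```python
import re

# Hypothetical stand-in for Ansible's camel_dict_to_snake_dict helper;
# the real implementation lives in module_utils and handles nesting too.
def camel_to_snake(name):
    # Break before an uppercase letter that starts a new word, then lowercase
    s1 = re.sub(r'(.)([A-Z][a-z]+)', r'\1_\2', name)
    return re.sub(r'([a-z0-9])([A-Z])', r'\1_\2', s1).lower()

def camel_dict_to_snake(d):
    # Shallow conversion: only top-level keys are renamed in this sketch
    return {camel_to_snake(k): v for k, v in d.items()}

print(camel_dict_to_snake({'NatGatewayId': 'nat-0abc', 'VpcId': 'vpc-123'}))
# {'nat_gateway_id': 'nat-0abc', 'vpc_id': 'vpc-123'}
```

This is why the module's return values (e.g. `nat_gateway_addresses`, `subnet_id`) look different from the raw `DescribeNatGateways` response.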
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_net_facts.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_net_facts.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_net_facts.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_net_facts.py 2021-11-12 18:13:53.000000000 +0000
@@ -15,9 +15,6 @@
- Gather information about ec2 VPCs in AWS
- This module was called C(ec2_vpc_net_facts) before Ansible 2.9. The usage did not change.
author: "Rob White (@wimnat)"
-requirements:
- - boto3
- - botocore
options:
vpc_ids:
description:
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_net_info.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_net_info.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_net_info.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_net_info.py 2021-11-12 18:13:53.000000000 +0000
@@ -15,9 +15,6 @@
- Gather information about ec2 VPCs in AWS
- This module was called C(ec2_vpc_net_facts) before Ansible 2.9. The usage did not change.
author: "Rob White (@wimnat)"
-requirements:
- - boto3
- - botocore
options:
vpc_ids:
description:
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_net.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_net.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_net.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_net.py 2021-11-12 18:13:53.000000000 +0000
@@ -78,9 +78,6 @@
duplicate VPCs created.
type: bool
default: false
-requirements:
- - boto3
- - botocore
extends_documentation_fragment:
- amazon.aws.aws
- amazon.aws.ec2
@@ -459,11 +456,7 @@
except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
module.fail_json_aws(e, "Unable to associate CIDR {0}.".format(ipv6_cidr))
if ipv6_cidr:
- if 'Ipv6CidrBlockAssociationSet' in vpc_obj.keys():
- module.warn("Only one IPv6 CIDR is permitted per VPC, {0} already has CIDR {1}".format(
- vpc_id,
- vpc_obj['Ipv6CidrBlockAssociationSet'][0]['Ipv6CidrBlock']))
- else:
+ if 'Ipv6CidrBlockAssociationSet' not in vpc_obj.keys():
try:
connection.associate_vpc_cidr_block(AmazonProvidedIpv6CidrBlock=ipv6_cidr, VpcId=vpc_id, aws_retry=True)
changed = True
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_route_table_facts.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_route_table_facts.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_route_table_facts.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_route_table_facts.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,283 @@
+#!/usr/bin/python
+# Copyright: Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import absolute_import, division, print_function
+__metaclass__ = type
+
+
+DOCUMENTATION = r'''
+---
+module: ec2_vpc_route_table_info
+version_added: 1.0.0
+short_description: Gather information about ec2 VPC route tables in AWS
+description:
+ - Gather information about ec2 VPC route tables in AWS
+ - This module was called C(ec2_vpc_route_table_facts) before Ansible 2.9. The usage did not change.
+author:
+- "Rob White (@wimnat)"
+- "Mark Chappell (@tremble)"
+options:
+ filters:
+ description:
+ - A dict of filters to apply. Each dict item consists of a filter key and a filter value.
+ See U(https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRouteTables.html) for possible filters.
+ type: dict
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+
+'''
+
+EXAMPLES = r'''
+# Note: These examples do not set authentication details, see the AWS Guide for details.
+
+- name: Gather information about all VPC route tables
+ amazon.aws.ec2_vpc_route_table_info:
+
+- name: Gather information about a particular VPC route table using route table ID
+ amazon.aws.ec2_vpc_route_table_info:
+ filters:
+ route-table-id: rtb-00112233
+
+- name: Gather information about any VPC route table with a tag key Name and value Example
+ amazon.aws.ec2_vpc_route_table_info:
+ filters:
+ "tag:Name": Example
+
+- name: Gather information about any VPC route table within VPC with ID vpc-abcdef00
+ amazon.aws.ec2_vpc_route_table_info:
+ filters:
+ vpc-id: vpc-abcdef00
+'''
+
+RETURN = r'''
+route_tables:
+ description:
+ - A list of dictionaries describing route tables
+ - See also U(https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2.html#EC2.Client.describe_route_tables)
+ returned: always
+ type: complex
+ contains:
+ associations:
+ description: List of subnets associated with the route table
+ returned: always
+ type: complex
+ contains:
+ main:
+ description: Whether this is the main route table
+ returned: always
+ type: bool
+ sample: false
+ id:
+ description: ID of association between route table and subnet
+ returned: always
+ type: str
+ sample: rtbassoc-ab47cfc3
+ route_table_association_id:
+ description: ID of association between route table and subnet
+ returned: always
+ type: str
+ sample: rtbassoc-ab47cfc3
+ route_table_id:
+ description: ID of the route table
+ returned: always
+ type: str
+ sample: rtb-bf779ed7
+ subnet_id:
+ description: ID of the subnet
+ returned: always
+ type: str
+ sample: subnet-82055af9
+ association_state:
+ description: The state of the association
+ returned: always
+ type: complex
+ contains:
+ state:
+ description: The state of the association
+ returned: always
+ type: str
+ sample: associated
+ state_message:
+ description: Additional information about the state of the association
+ returned: when available
+ type: str
+ sample: 'Creating association'
+ id:
+ description: ID of the route table (same as route_table_id for backwards compatibility)
+ returned: always
+ type: str
+ sample: rtb-bf779ed7
+ owner_id:
+ description: ID of the account which owns the route table
+ returned: always
+ type: str
+ sample: '012345678912'
+ propagating_vgws:
+ description: List of Virtual Private Gateways propagating routes
+ returned: always
+ type: list
+ sample: []
+ route_table_id:
+ description: ID of the route table
+ returned: always
+ type: str
+ sample: rtb-bf779ed7
+ routes:
+ description: List of routes in the route table
+ returned: always
+ type: complex
+ contains:
+ destination_cidr_block:
+ description: CIDR block of destination
+ returned: always
+ type: str
+ sample: 10.228.228.0/22
+ gateway_id:
+ description: ID of the gateway
+ returned: when gateway is local or internet gateway
+ type: str
+ sample: local
+ instance_id:
+ description:
+ - ID of a NAT instance.
+ - Empty unless the route is via an EC2 instance
+ returned: always
+ type: str
+ sample: i-abcd123456789
+ instance_owner_id:
+ description:
+ - AWS account owning the NAT instance
+ - Empty unless the route is via an EC2 instance
+ returned: always
+ type: str
+ sample: 123456789012
+ network_interface_id:
+ description:
+ - The ID of the network interface
+ - Empty unless the route is via an EC2 instance
+ returned: always
+ type: str
+ sample: eni-0123456789abcdef0
+ nat_gateway_id:
+ description: ID of the NAT gateway
+ returned: when the route is via a NAT gateway
+ type: str
+ sample: nat-0123456789abcdef0
+ origin:
+ description: mechanism through which the route is in the table
+ returned: always
+ type: str
+ sample: CreateRouteTable
+ state:
+ description: state of the route
+ returned: always
+ type: str
+ sample: active
+ tags:
+ description: Tags applied to the route table
+ returned: always
+ type: dict
+ sample:
+ Name: Public route table
+ Public: 'true'
+ vpc_id:
+ description: ID for the VPC in which the route lives
+ returned: always
+ type: str
+ sample: vpc-6e2d2407
+'''
+
+try:
+ import botocore
+except ImportError:
+ pass # Handled by AnsibleAWSModule
+
+from ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_code
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import ansible_dict_to_boto3_filter_list
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import boto3_tag_list_to_ansible_dict
+
+
+@AWSRetry.jittered_backoff()
+def describe_route_tables_with_backoff(connection, **params):
+ try:
+ paginator = connection.get_paginator('describe_route_tables')
+ return paginator.paginate(**params).build_full_result()
+ except is_boto3_error_code('InvalidRouteTableID.NotFound'):
+ return None
+
+
+def normalize_route(route):
+ # Historically these were all there, but set to null when empty
+ for legacy_key in ['DestinationCidrBlock', 'GatewayId', 'InstanceId',
+ 'Origin', 'State', 'NetworkInterfaceId']:
+ if legacy_key not in route:
+ route[legacy_key] = None
+ route['InterfaceId'] = route['NetworkInterfaceId']
+ return route
+
+
+def normalize_association(assoc):
+ # Name change between boto v2 and boto v3, return both
+ assoc['Id'] = assoc['RouteTableAssociationId']
+ return assoc
+
+
+def normalize_route_table(table):
+ table['tags'] = boto3_tag_list_to_ansible_dict(table['Tags'])
+ table['Associations'] = [normalize_association(assoc) for assoc in table['Associations']]
+ table['Routes'] = [normalize_route(route) for route in table['Routes']]
+ table['Id'] = table['RouteTableId']
+ del table['Tags']
+ return camel_dict_to_snake_dict(table, ignore_list=['tags'])
+
+
+def normalize_results(results):
+ """
+ We used to be a boto v2 module; make sure that the old return values are
+ maintained and that the shape of the return value is what people expect
+ """
+
+ routes = [normalize_route_table(route) for route in results['RouteTables']]
+ del results['RouteTables']
+ results = camel_dict_to_snake_dict(results)
+ results['route_tables'] = routes
+ return results
+
+
+def list_ec2_vpc_route_tables(connection, module):
+
+ filters = ansible_dict_to_boto3_filter_list(module.params.get("filters"))
+
+ try:
+ results = describe_route_tables_with_backoff(connection, Filters=filters)
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Failed to get route tables")
+
+ results = normalize_results(results)
+ module.exit_json(changed=False, **results)
+
+
+def main():
+ argument_spec = dict(
+ filters=dict(default=None, type='dict'),
+ )
+
+ module = AnsibleAWSModule(argument_spec=argument_spec,
+ supports_check_mode=True)
+ if module._name == 'ec2_vpc_route_table_facts':
+ module.deprecate("The 'ec2_vpc_route_table_facts' module has been renamed to 'ec2_vpc_route_table_info'",
+ date='2021-12-01', collection_name='amazon.aws')
+
+ connection = module.client('ec2', retry_decorator=AWSRetry.jittered_backoff(retries=10))
+
+ list_ec2_vpc_route_tables(connection, module)
+
+
+if __name__ == '__main__':
+ main()
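`list_ec2_vpc_route_tables()` above passes the user's `filters` dict through `ansible_dict_to_boto3_filter_list` before calling `describe_route_tables`. A minimal sketch of that flattening, assuming the helper's usual `Name`/`Values` output shape (an illustration, not the collection's code):

```python
# Illustrative stand-in for ansible_dict_to_boto3_filter_list: each dict key
# becomes a {'Name': ..., 'Values': [...]} entry in the boto3 Filters list.
def dict_to_boto3_filter_list(filters_dict):
    filter_list = []
    for key, value in filters_dict.items():
        if not isinstance(value, list):
            value = [value]  # boto3 always expects a list of values
        filter_list.append({'Name': key, 'Values': [str(v) for v in value]})
    return filter_list

print(dict_to_boto3_filter_list({'vpc-id': 'vpc-abcdef00'}))
# [{'Name': 'vpc-id', 'Values': ['vpc-abcdef00']}]
```

So a playbook filter such as `"tag:Name": Example` reaches the EC2 API as `{'Name': 'tag:Name', 'Values': ['Example']}`.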
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_route_table_info.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_route_table_info.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_route_table_info.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_route_table_info.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,283 @@
+#!/usr/bin/python
+# Copyright: Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import absolute_import, division, print_function
+__metaclass__ = type
+
+
+DOCUMENTATION = r'''
+---
+module: ec2_vpc_route_table_info
+version_added: 1.0.0
+short_description: Gather information about ec2 VPC route tables in AWS
+description:
+ - Gather information about ec2 VPC route tables in AWS
+ - This module was called C(ec2_vpc_route_table_facts) before Ansible 2.9. The usage did not change.
+author:
+- "Rob White (@wimnat)"
+- "Mark Chappell (@tremble)"
+options:
+ filters:
+ description:
+ - A dict of filters to apply. Each dict item consists of a filter key and a filter value.
+ See U(https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeRouteTables.html) for possible filters.
+ type: dict
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+
+'''
+
+EXAMPLES = r'''
+# Note: These examples do not set authentication details, see the AWS Guide for details.
+
+- name: Gather information about all VPC route tables
+ amazon.aws.ec2_vpc_route_table_info:
+
+- name: Gather information about a particular VPC route table using route table ID
+ amazon.aws.ec2_vpc_route_table_info:
+ filters:
+ route-table-id: rtb-00112233
+
+- name: Gather information about any VPC route table with a tag key Name and value Example
+ amazon.aws.ec2_vpc_route_table_info:
+ filters:
+ "tag:Name": Example
+
+- name: Gather information about any VPC route table within VPC with ID vpc-abcdef00
+ amazon.aws.ec2_vpc_route_table_info:
+ filters:
+ vpc-id: vpc-abcdef00
+'''
+
+RETURN = r'''
+route_tables:
+ description:
+ - A list of dictionaries describing route tables
+ - See also U(https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2.html#EC2.Client.describe_route_tables)
+ returned: always
+ type: complex
+ contains:
+ associations:
+ description: List of subnets associated with the route table
+ returned: always
+ type: complex
+ contains:
+ main:
+ description: Whether this is the main route table
+ returned: always
+ type: bool
+ sample: false
+ id:
+ description: ID of association between route table and subnet
+ returned: always
+ type: str
+ sample: rtbassoc-ab47cfc3
+ route_table_association_id:
+ description: ID of association between route table and subnet
+ returned: always
+ type: str
+ sample: rtbassoc-ab47cfc3
+ route_table_id:
+ description: ID of the route table
+ returned: always
+ type: str
+ sample: rtb-bf779ed7
+ subnet_id:
+ description: ID of the subnet
+ returned: always
+ type: str
+ sample: subnet-82055af9
+ association_state:
+ description: The state of the association
+ returned: always
+ type: complex
+ contains:
+ state:
+ description: The state of the association
+ returned: always
+ type: str
+ sample: associated
+ state_message:
+ description: Additional information about the state of the association
+ returned: when available
+ type: str
+ sample: 'Creating association'
+ id:
+ description: ID of the route table (same as route_table_id for backwards compatibility)
+ returned: always
+ type: str
+ sample: rtb-bf779ed7
+ owner_id:
+ description: ID of the account which owns the route table
+ returned: always
+ type: str
+ sample: '012345678912'
+ propagating_vgws:
+ description: List of Virtual Private Gateways propagating routes
+ returned: always
+ type: list
+ sample: []
+ route_table_id:
+ description: ID of the route table
+ returned: always
+ type: str
+ sample: rtb-bf779ed7
+ routes:
+ description: List of routes in the route table
+ returned: always
+ type: complex
+ contains:
+ destination_cidr_block:
+ description: CIDR block of destination
+ returned: always
+ type: str
+ sample: 10.228.228.0/22
+ gateway_id:
+ description: ID of the gateway
+ returned: when gateway is local or internet gateway
+ type: str
+ sample: local
+ instance_id:
+ description:
+ - ID of a NAT instance.
+ - Empty unless the route is via an EC2 instance
+ returned: always
+ type: str
+ sample: i-abcd123456789
+ instance_owner_id:
+ description:
+ - AWS account owning the NAT instance
+ - Empty unless the route is via an EC2 instance
+ returned: always
+ type: str
+ sample: 123456789012
+ network_interface_id:
+ description:
+ - The ID of the network interface
+ - Empty unless the route is via an EC2 instance
+ returned: always
+ type: str
+ sample: eni-0123456789abcdef0
+ nat_gateway_id:
+ description: ID of the NAT gateway
+ returned: when the route is via a NAT gateway
+ type: str
+ sample: nat-0123456789abcdef0
+ origin:
+ description: mechanism through which the route is in the table
+ returned: always
+ type: str
+ sample: CreateRouteTable
+ state:
+ description: state of the route
+ returned: always
+ type: str
+ sample: active
+ tags:
+ description: Tags applied to the route table
+ returned: always
+ type: dict
+ sample:
+ Name: Public route table
+ Public: 'true'
+ vpc_id:
+ description: ID for the VPC in which the route lives
+ returned: always
+ type: str
+ sample: vpc-6e2d2407
+'''
+
+try:
+ import botocore
+except ImportError:
+ pass # Handled by AnsibleAWSModule
+
+from ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_code
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import ansible_dict_to_boto3_filter_list
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import boto3_tag_list_to_ansible_dict
+
+
+@AWSRetry.jittered_backoff()
+def describe_route_tables_with_backoff(connection, **params):
+ try:
+ paginator = connection.get_paginator('describe_route_tables')
+ return paginator.paginate(**params).build_full_result()
+ except is_boto3_error_code('InvalidRouteTableID.NotFound'):
+ return None
+
+
+def normalize_route(route):
+ # Historically these were all there, but set to null when empty
+ for legacy_key in ['DestinationCidrBlock', 'GatewayId', 'InstanceId',
+ 'Origin', 'State', 'NetworkInterfaceId']:
+ if legacy_key not in route:
+ route[legacy_key] = None
+ route['InterfaceId'] = route['NetworkInterfaceId']
+ return route
+
+
+def normalize_association(assoc):
+ # Name change between boto v2 and boto v3, return both
+ assoc['Id'] = assoc['RouteTableAssociationId']
+ return assoc
+
+
+def normalize_route_table(table):
+ table['tags'] = boto3_tag_list_to_ansible_dict(table['Tags'])
+ table['Associations'] = [normalize_association(assoc) for assoc in table['Associations']]
+ table['Routes'] = [normalize_route(route) for route in table['Routes']]
+ table['Id'] = table['RouteTableId']
+ del table['Tags']
+ return camel_dict_to_snake_dict(table, ignore_list=['tags'])
+
+
+def normalize_results(results):
+ """
+ We used to be a boto v2 module; make sure that the old return values are
+ maintained and that the shape of the return value is what people expect
+ """
+
+ routes = [normalize_route_table(route) for route in results['RouteTables']]
+ del results['RouteTables']
+ results = camel_dict_to_snake_dict(results)
+ results['route_tables'] = routes
+ return results
+
+
+def list_ec2_vpc_route_tables(connection, module):
+
+ filters = ansible_dict_to_boto3_filter_list(module.params.get("filters"))
+
+ try:
+ results = describe_route_tables_with_backoff(connection, Filters=filters)
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Failed to get route tables")
+
+ results = normalize_results(results)
+ module.exit_json(changed=False, **results)
+
+
+def main():
+ argument_spec = dict(
+ filters=dict(default=None, type='dict'),
+ )
+
+ module = AnsibleAWSModule(argument_spec=argument_spec,
+ supports_check_mode=True)
+ if module._name == 'ec2_vpc_route_table_facts':
+ module.deprecate("The 'ec2_vpc_route_table_facts' module has been renamed to 'ec2_vpc_route_table_info'",
+ date='2021-12-01', collection_name='amazon.aws')
+
+ connection = module.client('ec2', retry_decorator=AWSRetry.jittered_backoff(retries=10))
+
+ list_ec2_vpc_route_tables(connection, module)
+
+
+if __name__ == '__main__':
+ main()
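`normalize_route_table()` above converts boto3's tag list into a plain dict via `boto3_tag_list_to_ansible_dict`. A hedged sketch of that conversion (assumed behavior, not the module_utils implementation):

```python
# Assumed behavior of boto3_tag_list_to_ansible_dict: boto3 returns tags as
# [{'Key': k, 'Value': v}, ...]; the module exposes them as a plain {k: v} dict.
def tag_list_to_dict(tag_list):
    return {tag['Key']: tag['Value'] for tag in tag_list or []}

print(tag_list_to_dict([{'Key': 'Name', 'Value': 'Public route table'},
                        {'Key': 'Public', 'Value': 'true'}]))
# {'Name': 'Public route table', 'Public': 'true'}
```

This matches the `tags` sample shown in the RETURN block, where the tag list collapses to `Name: Public route table` / `Public: 'true'`.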
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_route_table.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_route_table.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_route_table.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_route_table.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,722 @@
+#!/usr/bin/python
+#
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import (absolute_import, division, print_function)
+__metaclass__ = type
+
+
+DOCUMENTATION = r'''
+---
+module: ec2_vpc_route_table
+version_added: 1.0.0
+short_description: Manage route tables for AWS virtual private clouds
+description:
+ - Manage route tables for AWS virtual private clouds
+author:
+- Robert Estelle (@erydo)
+- Rob White (@wimnat)
+- Will Thames (@willthames)
+options:
+ lookup:
+ description: Look up route table by either tags or by route table ID. Non-unique tag lookup will fail.
+ If no tags are specified then no lookup for an existing route table is performed and a new
+ route table will be created. To change tags of a route table you must look up by id.
+ default: tag
+ choices: [ 'tag', 'id' ]
+ type: str
+ propagating_vgw_ids:
+ description: Enable route propagation from virtual gateways specified by ID.
+ type: list
+ elements: str
+ purge_routes:
+ description: Purge existing routes that are not found in routes.
+ type: bool
+ default: 'yes'
+ purge_subnets:
+ description: Purge existing subnets that are not found in subnets. Ignored unless the subnets option is supplied.
+ default: 'true'
+ type: bool
+ purge_tags:
+ description: Purge existing tags that are not found in route table.
+ type: bool
+ default: 'no'
+ route_table_id:
+ description:
+ - The ID of the route table to update or delete.
+ - Required when I(lookup=id).
+ type: str
+ routes:
+ description: List of routes in the route table.
+ Routes are specified as dicts containing the keys 'dest' and one of 'gateway_id',
+ 'instance_id', 'network_interface_id', or 'vpc_peering_connection_id'.
+ If 'gateway_id' is specified, you can refer to the VPC's IGW by using the value 'igw'.
+ Routes are required for present states.
+ type: list
+ elements: dict
+ state:
+ description: Create or destroy the VPC route table.
+ default: present
+ choices: [ 'present', 'absent' ]
+ type: str
+ subnets:
+ description: An array of subnets to add to this route table. Subnets may be specified
+ by either subnet ID, Name tag, or by a CIDR such as '10.0.0.0/24'.
+ type: list
+ elements: str
+ tags:
+ description: >
+ A dictionary of resource tags of the form: C({ tag1: value1, tag2: value2 }). Tags are
+ used to uniquely identify route tables within a VPC when the route_table_id is not supplied.
+ aliases: [ "resource_tags" ]
+ type: dict
+ vpc_id:
+ description:
+ - VPC ID of the VPC in which to create the route table.
+ - Required when I(state=present) or I(lookup=tag).
+ type: str
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+
+'''
+
+EXAMPLES = r'''
+# Note: These examples do not set authentication details, see the AWS Guide for details.
+
+# Basic creation example:
+- name: Set up public subnet route table
+ amazon.aws.ec2_vpc_route_table:
+ vpc_id: vpc-1245678
+ region: us-west-1
+ tags:
+ Name: Public
+ subnets:
+ - "{{ jumpbox_subnet.subnet.id }}"
+ - "{{ frontend_subnet.subnet.id }}"
+ - "{{ vpn_subnet.subnet_id }}"
+ routes:
+ - dest: 0.0.0.0/0
+ gateway_id: "{{ igw.gateway_id }}"
+ register: public_route_table
+
+- name: Set up NAT-protected route table
+ amazon.aws.ec2_vpc_route_table:
+ vpc_id: vpc-1245678
+ region: us-west-1
+ tags:
+ Name: Internal
+ subnets:
+ - "{{ application_subnet.subnet.id }}"
+ - 'Database Subnet'
+ - '10.0.0.0/8'
+ routes:
+ - dest: 0.0.0.0/0
+ instance_id: "{{ nat.instance_id }}"
+ register: nat_route_table
+
+- name: delete route table
+ amazon.aws.ec2_vpc_route_table:
+ vpc_id: vpc-1245678
+ region: us-west-1
+ route_table_id: "{{ route_table.id }}"
+ lookup: id
+ state: absent
+'''
+
+RETURN = r'''
+route_table:
+ description: Route Table result
+ returned: always
+ type: complex
+ contains:
+ associations:
+ description: List of subnets associated with the route table
+ returned: always
+ type: complex
+ contains:
+ main:
+ description: Whether this is the main route table
+ returned: always
+ type: bool
+ sample: false
+ route_table_association_id:
+ description: ID of association between route table and subnet
+ returned: always
+ type: str
+ sample: rtbassoc-ab47cfc3
+ route_table_id:
+ description: ID of the route table
+ returned: always
+ type: str
+ sample: rtb-bf779ed7
+ subnet_id:
+ description: ID of the subnet
+ returned: always
+ type: str
+ sample: subnet-82055af9
+ id:
+ description: ID of the route table (same as route_table_id for backwards compatibility)
+ returned: always
+ type: str
+ sample: rtb-bf779ed7
+ propagating_vgws:
+ description: List of Virtual Private Gateways propagating routes
+ returned: always
+ type: list
+ sample: []
+ route_table_id:
+ description: ID of the route table
+ returned: always
+ type: str
+ sample: rtb-bf779ed7
+ routes:
+ description: List of routes in the route table
+ returned: always
+ type: complex
+ contains:
+ destination_cidr_block:
+ description: CIDR block of destination
+ returned: always
+ type: str
+ sample: 10.228.228.0/22
+ gateway_id:
+ description: ID of the gateway
+ returned: when gateway is local or internet gateway
+ type: str
+ sample: local
+ instance_id:
+ description: ID of a NAT instance
+ returned: when the route is via an EC2 instance
+ type: str
+ sample: i-abcd123456789
+ instance_owner_id:
+ description: AWS account owning the NAT instance
+ returned: when the route is via an EC2 instance
+ type: str
+ sample: 123456789012
+ nat_gateway_id:
+ description: ID of the NAT gateway
+ returned: when the route is via a NAT gateway
+ type: str
+ sample: nat-0123456789abcdef0
+ origin:
+ description: mechanism through which the route is in the table
+ returned: always
+ type: str
+ sample: CreateRouteTable
+ state:
+ description: state of the route
+ returned: always
+ type: str
+ sample: active
+ tags:
+ description: Tags applied to the route table
+ returned: always
+ type: dict
+ sample:
+ Name: Public route table
+ Public: 'true'
+ vpc_id:
+ description: ID for the VPC in which the route lives
+ returned: always
+ type: str
+ sample: vpc-6e2d2407
+'''
+
+import re
+from time import sleep
+
+try:
+ import botocore
+except ImportError:
+ pass # caught by AnsibleAWSModule
+
+from ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict
+from ansible.module_utils.common.dict_transformations import snake_dict_to_camel_dict
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_code
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import ansible_dict_to_boto3_filter_list
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import describe_ec2_tags
+from ansible_collections.amazon.aws.plugins.module_utils.ec2 import ensure_ec2_tags
+from ansible_collections.amazon.aws.plugins.module_utils.waiters import get_waiter
+
+
+@AWSRetry.jittered_backoff()
+def describe_subnets_with_backoff(connection, **params):
+ paginator = connection.get_paginator('describe_subnets')
+ return paginator.paginate(**params).build_full_result()['Subnets']
+
+
+@AWSRetry.jittered_backoff()
+def describe_igws_with_backoff(connection, **params):
+ paginator = connection.get_paginator('describe_internet_gateways')
+ return paginator.paginate(**params).build_full_result()['InternetGateways']
+
+
+@AWSRetry.jittered_backoff()
+def describe_route_tables_with_backoff(connection, **params):
+ try:
+ paginator = connection.get_paginator('describe_route_tables')
+ return paginator.paginate(**params).build_full_result()['RouteTables']
+ except is_boto3_error_code('InvalidRouteTableID.NotFound'):
+ return None
+
+
+def find_subnets(connection, module, vpc_id, identified_subnets):
+ """
+ Finds a list of subnets, each identified by a raw ID, a unique
+ 'Name' tag, or a CIDR such as 10.0.0.0/8.
+ """
+ CIDR_RE = re.compile(r'^(\d{1,3}\.){3}\d{1,3}/\d{1,2}$')
+ SUBNET_RE = re.compile(r'^subnet-[a-z0-9]+$')
+
+ subnet_ids = []
+ subnet_names = []
+ subnet_cidrs = []
+ for subnet in (identified_subnets or []):
+ if re.match(SUBNET_RE, subnet):
+ subnet_ids.append(subnet)
+ elif re.match(CIDR_RE, subnet):
+ subnet_cidrs.append(subnet)
+ else:
+ subnet_names.append(subnet)
+
+ subnets_by_id = []
+ if subnet_ids:
+ filters = ansible_dict_to_boto3_filter_list({'vpc-id': vpc_id})
+ try:
+ subnets_by_id = describe_subnets_with_backoff(connection, SubnetIds=subnet_ids, Filters=filters)
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg="Couldn't find subnet with id %s" % subnet_ids)
+
+ subnets_by_cidr = []
+ if subnet_cidrs:
+ filters = ansible_dict_to_boto3_filter_list({'vpc-id': vpc_id, 'cidr': subnet_cidrs})
+ try:
+ subnets_by_cidr = describe_subnets_with_backoff(connection, Filters=filters)
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg="Couldn't find subnet with cidr %s" % subnet_cidrs)
+
+ subnets_by_name = []
+ if subnet_names:
+ filters = ansible_dict_to_boto3_filter_list({'vpc-id': vpc_id, 'tag:Name': subnet_names})
+ try:
+ subnets_by_name = describe_subnets_with_backoff(connection, Filters=filters)
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg="Couldn't find subnet with names %s" % subnet_names)
+
+ for name in subnet_names:
+ matching_count = len([1 for s in subnets_by_name for t in s.get('Tags', []) if t['Key'] == 'Name' and t['Value'] == name])
+ if matching_count == 0:
+ module.fail_json(msg='Subnet named "{0}" does not exist'.format(name))
+ elif matching_count > 1:
+ module.fail_json(msg='Multiple subnets named "{0}"'.format(name))
+
+ return subnets_by_id + subnets_by_cidr + subnets_by_name
+
+
+def find_igw(connection, module, vpc_id):
+ """
+ Finds the Internet gateway for the given VPC ID.
+ """
+ filters = ansible_dict_to_boto3_filter_list({'attachment.vpc-id': vpc_id})
+ try:
+ igw = describe_igws_with_backoff(connection, Filters=filters)
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg='Error describing Internet Gateways for VPC {0}'.format(vpc_id))
+ if len(igw) == 1:
+ return igw[0]['InternetGatewayId']
+ elif len(igw) == 0:
+ module.fail_json(msg='No IGWs found for VPC {0}'.format(vpc_id))
+ else:
+ module.fail_json(msg='Multiple IGWs found for VPC {0}'.format(vpc_id))
+
+
+def tags_match(match_tags, candidate_tags):
+ return all((k in candidate_tags and candidate_tags[k] == v
+ for k, v in match_tags.items()))
+
+
+def get_route_table_by_id(connection, module, route_table_id):
+
+ route_table = None
+ try:
+ route_tables = describe_route_tables_with_backoff(connection, RouteTableIds=[route_table_id])
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg="Couldn't get route table")
+ if route_tables:
+ route_table = route_tables[0]
+
+ return route_table
+
+
+def get_route_table_by_tags(connection, module, vpc_id, tags):
+ count = 0
+ route_table = None
+ filters = ansible_dict_to_boto3_filter_list({'vpc-id': vpc_id})
+ try:
+ route_tables = describe_route_tables_with_backoff(connection, Filters=filters)
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg="Couldn't get route table")
+ for table in route_tables:
+ this_tags = describe_ec2_tags(connection, module, table['RouteTableId'])
+ if tags_match(tags, this_tags):
+ route_table = table
+ count += 1
+
+ if count > 1:
+ module.fail_json(msg="Tags provided do not identify a unique route table")
+ else:
+ return route_table
+
+
+def route_spec_matches_route(route_spec, route):
+ if route_spec.get('GatewayId') and 'nat-' in route_spec['GatewayId']:
+ route_spec['NatGatewayId'] = route_spec.pop('GatewayId')
+ if route_spec.get('GatewayId') and 'vpce-' in route_spec['GatewayId']:
+ if route_spec.get('DestinationCidrBlock', '').startswith('pl-'):
+ route_spec['DestinationPrefixListId'] = route_spec.pop('DestinationCidrBlock')
+
+ return set(route_spec.items()).issubset(route.items())
+
+
+def route_spec_matches_route_cidr(route_spec, route):
+ return route_spec['DestinationCidrBlock'] == route.get('DestinationCidrBlock')
+
+
+def rename_key(d, old_key, new_key):
+ d[new_key] = d.pop(old_key)
+
+
+def index_of_matching_route(route_spec, routes_to_match):
+ for i, route in enumerate(routes_to_match):
+ if route_spec_matches_route(route_spec, route):
+ return "exact", i
+ elif 'Origin' in route_spec and route_spec['Origin'] != 'EnableVgwRoutePropagation':
+ if route_spec_matches_route_cidr(route_spec, route):
+ return "replace", i
+
+
+def ensure_routes(connection=None, module=None, route_table=None, route_specs=None,
+ propagating_vgw_ids=None, check_mode=None, purge_routes=None):
+ routes_to_match = list(route_table['Routes'])
+ route_specs_to_create = []
+ route_specs_to_recreate = []
+ for route_spec in route_specs:
+ match = index_of_matching_route(route_spec, routes_to_match)
+ if match is None:
+ if route_spec.get('DestinationCidrBlock'):
+ route_specs_to_create.append(route_spec)
+ else:
+ module.warn("Skipping creating route {0} because it has no destination cidr block. "
+ "To add VPC endpoints to route tables use the ec2_vpc_endpoint module.".format(route_spec))
+ else:
+ if match[0] == "replace":
+ if route_spec.get('DestinationCidrBlock'):
+ route_specs_to_recreate.append(route_spec)
+ else:
+ module.warn("Skipping recreating route {0} because it has no destination cidr block.".format(route_spec))
+ del routes_to_match[match[1]]
+
+ routes_to_delete = []
+ if purge_routes:
+ for r in routes_to_match:
+ if not r.get('DestinationCidrBlock'):
+ module.warn("Skipping purging route {0} because it has no destination cidr block. "
+ "To remove VPC endpoints from route tables use the ec2_vpc_endpoint module.".format(r))
+ continue
+ if r['Origin'] == 'CreateRoute':
+ routes_to_delete.append(r)
+
+ changed = bool(routes_to_delete or route_specs_to_create or route_specs_to_recreate)
+ if changed and not check_mode:
+ for route in routes_to_delete:
+ try:
+ connection.delete_route(
+ aws_retry=True,
+ RouteTableId=route_table['RouteTableId'],
+ DestinationCidrBlock=route['DestinationCidrBlock'])
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg="Couldn't delete route")
+
+ for route_spec in route_specs_to_recreate:
+ try:
+ connection.replace_route(aws_retry=True, RouteTableId=route_table['RouteTableId'], **route_spec)
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg="Couldn't recreate route")
+
+ for route_spec in route_specs_to_create:
+ try:
+ connection.create_route(aws_retry=True, RouteTableId=route_table['RouteTableId'], **route_spec)
+ except is_boto3_error_code('RouteAlreadyExists'):
+ changed = False
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e: # pylint: disable=duplicate-except
+ module.fail_json_aws(e, msg="Couldn't create route")
+
+ return {'changed': bool(changed)}
+
+
+def ensure_subnet_association(connection=None, module=None, vpc_id=None, route_table_id=None, subnet_id=None,
+ check_mode=None):
+ filters = ansible_dict_to_boto3_filter_list({'association.subnet-id': subnet_id, 'vpc-id': vpc_id})
+ try:
+ route_tables = describe_route_tables_with_backoff(connection, Filters=filters)
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg="Couldn't get route tables")
+ for route_table in route_tables:
+ if route_table['RouteTableId'] is None:
+ continue
+ for a in route_table['Associations']:
+ if a['Main']:
+ continue
+ if a['SubnetId'] == subnet_id:
+ if route_table['RouteTableId'] == route_table_id:
+ return {'changed': False, 'association_id': a['RouteTableAssociationId']}
+ else:
+ if check_mode:
+ return {'changed': True}
+ try:
+ connection.disassociate_route_table(
+ aws_retry=True, AssociationId=a['RouteTableAssociationId'])
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg="Couldn't disassociate subnet from route table")
+
+ try:
+ association_id = connection.associate_route_table(aws_retry=True,
+ RouteTableId=route_table_id,
+ SubnetId=subnet_id)
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg="Couldn't associate subnet with route table")
+ return {'changed': True, 'association_id': association_id}
+
+
+def ensure_subnet_associations(connection=None, module=None, route_table=None, subnets=None,
+ check_mode=None, purge_subnets=None):
+ current_association_ids = [a['RouteTableAssociationId'] for a in route_table['Associations'] if not a['Main']]
+ new_association_ids = []
+ changed = False
+ for subnet in subnets:
+ result = ensure_subnet_association(
+ connection=connection, module=module, vpc_id=route_table['VpcId'],
+ route_table_id=route_table['RouteTableId'], subnet_id=subnet['SubnetId'],
+ check_mode=check_mode)
+ changed = changed or result['changed']
+ if changed and check_mode:
+ return {'changed': True}
+ new_association_ids.append(result['association_id'])
+
+ if purge_subnets:
+ to_delete = [a_id for a_id in current_association_ids
+ if a_id not in new_association_ids]
+
+ for a_id in to_delete:
+ changed = True
+ if not check_mode:
+ try:
+ connection.disassociate_route_table(aws_retry=True, AssociationId=a_id)
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg="Couldn't disassociate subnet from route table")
+
+ return {'changed': changed}
+
+
+def ensure_propagation(connection=None, module=None, route_table=None, propagating_vgw_ids=None,
+ check_mode=None):
+ changed = False
+ gateways = [gateway['GatewayId'] for gateway in route_table['PropagatingVgws']]
+ to_add = set(propagating_vgw_ids) - set(gateways)
+ if to_add:
+ changed = True
+ if not check_mode:
+ for vgw_id in to_add:
+ try:
+ connection.enable_vgw_route_propagation(
+ aws_retry=True,
+ RouteTableId=route_table['RouteTableId'],
+ GatewayId=vgw_id)
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg="Couldn't enable route propagation")
+
+ return {'changed': changed}
+
+
+def ensure_route_table_absent(connection, module):
+
+ lookup = module.params.get('lookup')
+ route_table_id = module.params.get('route_table_id')
+ tags = module.params.get('tags')
+ vpc_id = module.params.get('vpc_id')
+ purge_subnets = module.params.get('purge_subnets')
+
+ if lookup == 'tag':
+ if tags is not None:
+ route_table = get_route_table_by_tags(connection, module, vpc_id, tags)
+ else:
+ route_table = None
+ elif lookup == 'id':
+ route_table = get_route_table_by_id(connection, module, route_table_id)
+
+ if route_table is None:
+ return {'changed': False}
+
+ # disassociate subnets before deleting route table
+ if not module.check_mode:
+ ensure_subnet_associations(connection=connection, module=module, route_table=route_table,
+ subnets=[], check_mode=False, purge_subnets=purge_subnets)
+ try:
+ connection.delete_route_table(aws_retry=True, RouteTableId=route_table['RouteTableId'])
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg="Error deleting route table")
+
+ return {'changed': True}
+
+
+def get_route_table_info(connection, module, route_table):
+ result = get_route_table_by_id(connection, module, route_table['RouteTableId'])
+ try:
+ result['Tags'] = describe_ec2_tags(connection, module, route_table['RouteTableId'])
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg="Couldn't get tags for route table")
+ result = camel_dict_to_snake_dict(result, ignore_list=['Tags'])
+ # backwards compatibility
+ result['id'] = result['route_table_id']
+ return result
+
+
+def create_route_spec(connection, module, vpc_id):
+ routes = module.params.get('routes')
+
+ for route_spec in routes:
+ rename_key(route_spec, 'dest', 'destination_cidr_block')
+
+ if route_spec.get('gateway_id') and route_spec['gateway_id'].lower() == 'igw':
+ igw = find_igw(connection, module, vpc_id)
+ route_spec['gateway_id'] = igw
+ if route_spec.get('gateway_id') and route_spec['gateway_id'].startswith('nat-'):
+ rename_key(route_spec, 'gateway_id', 'nat_gateway_id')
+
+ return snake_dict_to_camel_dict(routes, capitalize_first=True)
+
+
+def ensure_route_table_present(connection, module):
+
+ lookup = module.params.get('lookup')
+ propagating_vgw_ids = module.params.get('propagating_vgw_ids')
+ purge_routes = module.params.get('purge_routes')
+ purge_subnets = module.params.get('purge_subnets')
+ purge_tags = module.params.get('purge_tags')
+ route_table_id = module.params.get('route_table_id')
+ subnets = module.params.get('subnets')
+ tags = module.params.get('tags')
+ vpc_id = module.params.get('vpc_id')
+ routes = create_route_spec(connection, module, vpc_id)
+
+ changed = False
+ tags_valid = False
+
+ if lookup == 'tag':
+ if tags is not None:
+ try:
+ route_table = get_route_table_by_tags(connection, module, vpc_id, tags)
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg="Error finding route table with lookup 'tag'")
+ else:
+ route_table = None
+ elif lookup == 'id':
+ try:
+ route_table = get_route_table_by_id(connection, module, route_table_id)
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg="Error finding route table with lookup 'id'")
+
+ # If no route table returned then create new route table
+ if route_table is None:
+ changed = True
+ if not module.check_mode:
+ try:
+ route_table = connection.create_route_table(aws_retry=True, VpcId=vpc_id)['RouteTable']
+ # try to wait for route table to be present before moving on
+ get_waiter(
+ connection, 'route_table_exists'
+ ).wait(
+ RouteTableIds=[route_table['RouteTableId']],
+ )
+ except botocore.exceptions.WaiterError as e:
+ module.fail_json_aws(e, msg='Timeout waiting for route table creation')
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ module.fail_json_aws(e, msg="Error creating route table")
+ else:
+ route_table = {"id": "rtb-xxxxxxxx", "route_table_id": "rtb-xxxxxxxx", "vpc_id": vpc_id}
+ module.exit_json(changed=changed, route_table=route_table)
+
+ if routes is not None:
+ result = ensure_routes(connection=connection, module=module, route_table=route_table,
+ route_specs=routes, propagating_vgw_ids=propagating_vgw_ids,
+ check_mode=module.check_mode, purge_routes=purge_routes)
+ changed = changed or result['changed']
+
+ if propagating_vgw_ids is not None:
+ result = ensure_propagation(connection=connection, module=module, route_table=route_table,
+ propagating_vgw_ids=propagating_vgw_ids, check_mode=module.check_mode)
+ changed = changed or result['changed']
+
+ if not tags_valid and tags is not None:
+ changed |= ensure_ec2_tags(connection, module, route_table['RouteTableId'],
+ tags=tags, purge_tags=purge_tags,
+ retry_codes=['InvalidRouteTableID.NotFound'])
+ route_table['Tags'] = describe_ec2_tags(connection, module, route_table['RouteTableId'])
+
+ if subnets is not None:
+ associated_subnets = find_subnets(connection, module, vpc_id, subnets)
+
+ result = ensure_subnet_associations(connection=connection, module=module, route_table=route_table,
+ subnets=associated_subnets, check_mode=module.check_mode,
+ purge_subnets=purge_subnets)
+ changed = changed or result['changed']
+
+ if changed:
+ # pause to allow route table routes/subnets/associations to be updated before exiting with final state
+ sleep(5)
+ module.exit_json(changed=changed, route_table=get_route_table_info(connection, module, route_table))
+
+
+def main():
+ argument_spec = dict(
+ lookup=dict(default='tag', choices=['tag', 'id']),
+ propagating_vgw_ids=dict(type='list', elements='str'),
+ purge_routes=dict(default=True, type='bool'),
+ purge_subnets=dict(default=True, type='bool'),
+ purge_tags=dict(default=False, type='bool'),
+ route_table_id=dict(),
+ routes=dict(default=[], type='list', elements='dict'),
+ state=dict(default='present', choices=['present', 'absent']),
+ subnets=dict(type='list', elements='str'),
+ tags=dict(type='dict', aliases=['resource_tags']),
+ vpc_id=dict()
+ )
+
+ module = AnsibleAWSModule(argument_spec=argument_spec,
+ required_if=[['lookup', 'id', ['route_table_id']],
+ ['lookup', 'tag', ['vpc_id']],
+ ['state', 'present', ['vpc_id']]],
+ supports_check_mode=True)
+
+ # The test for RouteTable existence uses its own decorator, so we can safely
+ # retry on InvalidRouteTableID.NotFound
+ retry_decorator = AWSRetry.jittered_backoff(retries=10, catch_extra_error_codes=['InvalidRouteTableID.NotFound'])
+ connection = module.client('ec2', retry_decorator=retry_decorator)
+
+ state = module.params.get('state')
+
+ if state == 'present':
+ result = ensure_route_table_present(connection, module)
+ elif state == 'absent':
+ result = ensure_route_table_absent(connection, module)
+
+ module.exit_json(**result)
+
+
+if __name__ == '__main__':
+ main()
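The route-table helpers above (`tags_match`, `route_spec_matches_route`) both reduce to a dict-subset test: a desired spec matches an existing resource when every key/value pair in the spec also appears in the resource's description. A minimal standalone sketch of that logic, using hypothetical route and tag data (not part of the module):

```python
# Standalone sketch of the subset-matching used by the route table module;
# the route and tag dictionaries below are hypothetical example data.

def tags_match(match_tags, candidate_tags):
    # Every requested tag must appear in the candidate with the same value.
    return all(k in candidate_tags and candidate_tags[k] == v
               for k, v in match_tags.items())

def route_spec_matches_route(route_spec, route):
    # A desired route spec matches an existing route when all of the spec's
    # key/value pairs appear unchanged in the route's description.
    return set(route_spec.items()).issubset(route.items())

route = {
    'DestinationCidrBlock': '10.0.0.0/16',
    'GatewayId': 'igw-0abc0abc',
    'State': 'active',
    'Origin': 'CreateRoute',
}

# A spec that only pins the destination and gateway still matches.
print(route_spec_matches_route(
    {'DestinationCidrBlock': '10.0.0.0/16', 'GatewayId': 'igw-0abc0abc'},
    route))                                                    # True
print(tags_match({'env': 'prod'}, {'env': 'prod', 'team': 'net'}))  # True
print(tags_match({'env': 'prod'}, {'team': 'net'}))                 # False
```

Because the comparison is a subset test, extra keys that AWS returns on the route (such as C(State) and C(Origin)) do not prevent a match; only the keys explicitly requested in the spec are compared.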
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_subnet_facts.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_subnet_facts.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_subnet_facts.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_subnet_facts.py 2021-11-12 18:13:53.000000000 +0000
@@ -15,9 +15,6 @@
- Gather information about ec2 VPC subnets in AWS
- This module was called C(ec2_vpc_subnet_facts) before Ansible 2.9. The usage did not change.
author: "Rob White (@wimnat)"
-requirements:
- - boto3
- - botocore
options:
subnet_ids:
description:
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_subnet_info.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_subnet_info.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_subnet_info.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_subnet_info.py 2021-11-12 18:13:53.000000000 +0000
@@ -15,9 +15,6 @@
- Gather information about ec2 VPC subnets in AWS
- This module was called C(ec2_vpc_subnet_facts) before Ansible 2.9. The usage did not change.
author: "Rob White (@wimnat)"
-requirements:
- - boto3
- - botocore
options:
subnet_ids:
description:
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_subnet.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_subnet.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_subnet.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/ec2_vpc_subnet.py 2021-11-12 18:13:53.000000000 +0000
@@ -16,7 +16,6 @@
author:
- Robert Estelle (@erydo)
- Brad Davidson (@brandond)
-requirements: [ boto3 ]
options:
az:
description:
@@ -216,10 +215,7 @@
from ..module_utils.core import AnsibleAWSModule
from ..module_utils.ec2 import AWSRetry
from ..module_utils.ec2 import ansible_dict_to_boto3_filter_list
-from ..module_utils.ec2 import ansible_dict_to_boto3_tag_list
from ..module_utils.ec2 import boto3_tag_list_to_ansible_dict
-from ..module_utils.ec2 import compare_aws_tags
-from ..module_utils.ec2 import describe_ec2_tags
from ..module_utils.ec2 import ensure_ec2_tags
from ..module_utils.waiters import get_waiter
@@ -254,9 +250,8 @@
def waiter_params(module, params, start_time):
- if not module.botocore_at_least("1.7.0"):
- remaining_wait_timeout = int(module.params['wait_timeout'] + start_time - time.time())
- params['WaiterConfig'] = {'Delay': 5, 'MaxAttempts': remaining_wait_timeout // 5}
+ remaining_wait_timeout = int(module.params['wait_timeout'] + start_time - time.time())
+ params['WaiterConfig'] = {'Delay': 5, 'MaxAttempts': remaining_wait_timeout // 5}
return params
@@ -543,9 +538,6 @@
if module.params.get('assign_instances_ipv6') and not module.params.get('ipv6_cidr'):
module.fail_json(msg="assign_instances_ipv6 is True but ipv6_cidr is None or an empty string")
- if not module.botocore_at_least("1.7.0"):
- module.warn("botocore >= 1.7.0 is required to use wait_timeout for custom wait times")
-
retry_decorator = AWSRetry.jittered_backoff(retries=10)
connection = module.client('ec2', retry_decorator=retry_decorator)
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/elb_classic_lb.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/elb_classic_lb.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/elb_classic_lb.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/elb_classic_lb.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,2160 @@
+#!/usr/bin/python
+# Copyright: Ansible Project
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import absolute_import, division, print_function
+__metaclass__ = type
+
+
+DOCUMENTATION = '''
+---
+module: elb_classic_lb
+version_added: 1.0.0
+description:
+ - Creates, updates or destroys an Amazon Elastic Load Balancer (ELB).
+ - This module was renamed from C(amazon.aws.ec2_elb_lb) to M(amazon.aws.elb_classic_lb) in version
+ 2.1.0 of the amazon.aws collection.
+short_description: Creates, updates or destroys an Amazon ELB
+author:
+ - "Jim Dalton (@jsdalton)"
+ - "Mark Chappell (@tremble)"
+options:
+ state:
+ description:
+ - Create or destroy the ELB.
+ type: str
+ choices: [ absent, present ]
+ required: true
+ name:
+ description:
+ - The name of the ELB.
+ - The name of an ELB must be no more than 32 characters and unique per-region per-account.
+ type: str
+ required: true
+ listeners:
+ description:
+ - List of ports/protocols for this ELB to listen on (see examples).
+ - Required when I(state=present) and the ELB doesn't exist.
+ type: list
+ elements: dict
+ suboptions:
+ load_balancer_port:
+ description:
+ - The port on which the load balancer will listen.
+ type: int
+ required: True
+ instance_port:
+ description:
+ - The port on which the instance is listening.
+ type: int
+ required: True
+ ssl_certificate_id:
+ description:
+ - The Amazon Resource Name (ARN) of the SSL certificate.
+ type: str
+ protocol:
+ description:
+ - The transport protocol to use for routing.
+ - Valid values are C(HTTP), C(HTTPS), C(TCP), or C(SSL).
+ type: str
+ required: True
+ instance_protocol:
+ description:
+ - The protocol to use for routing traffic to instances.
+ - Valid values are C(HTTP), C(HTTPS), C(TCP), or C(SSL),
+ type: str
+ proxy_protocol:
+ description:
+ - Enable proxy protocol for the listener.
+ - Beware, ELB controls for the proxy protocol are based on the
+ I(instance_port). If you have multiple listeners talking to
+ the same I(instance_port), this will affect all of them.
+ type: bool
+ purge_listeners:
+ description:
+ - Purge existing listeners on ELB that are not found in listeners.
+ type: bool
+ default: true
+ instance_ids:
+ description:
+ - List of instance ids to attach to this ELB.
+ type: list
+ elements: str
+ purge_instance_ids:
+ description:
+ - Purge existing instance ids on ELB that are not found in I(instance_ids).
+ type: bool
+ default: false
+ zones:
+ description:
+ - List of availability zones to enable on this ELB.
+ - Mutually exclusive with I(subnets).
+ type: list
+ elements: str
+ purge_zones:
+ description:
+ - Purge existing availability zones on ELB that are not found in I(zones).
+ type: bool
+ default: false
+ security_group_ids:
+ description:
+ - A list of security groups to apply to the ELB.
+ type: list
+ elements: str
+ security_group_names:
+ description:
+ - A list of security group names to apply to the ELB.
+ type: list
+ elements: str
+ health_check:
+ description:
+ - A dictionary of health check configuration settings (see examples).
+ type: dict
+ suboptions:
+ ping_protocol:
+ description:
+ - The protocol which the ELB health check will use when performing a
+ health check.
+ - Valid values are C(HTTP), C(HTTPS), C(TCP) and C(SSL).
+ required: true
+ type: str
+ ping_path:
+ description:
+ - The URI path which the ELB health check will query when performing a
+ health check.
+ - Required when I(ping_protocol=HTTP) or I(ping_protocol=HTTPS).
+ required: false
+ type: str
+ ping_port:
+ description:
+ - The TCP port to which the ELB will connect when performing a
+ health check.
+ required: true
+ type: int
+ interval:
+ description:
+ - The approximate interval, in seconds, between health checks of an individual instance.
+ required: true
+ type: int
+ timeout:
+ description:
+ - The amount of time, in seconds, after which no response means a failed health check.
+ aliases: ['response_timeout']
+ required: true
+ type: int
+ unhealthy_threshold:
+ description:
+ - The number of consecutive health check failures required before moving
+ the instance to the Unhealthy state.
+ required: true
+ type: int
+ healthy_threshold:
+ description:
+ - The number of consecutive health checks successes required before moving
+ the instance to the Healthy state.
+ required: true
+ type: int
+ access_logs:
+ description:
+ - A dictionary of access logs configuration settings (see examples).
+ type: dict
+ suboptions:
+ enabled:
+ description:
+ - When set to C(True) will configure delivery of access logs to an S3
+ bucket.
+ - When set to C(False) will disable delivery of access logs.
+ required: false
+ type: bool
+ default: true
+ s3_location:
+ description:
+ - The S3 bucket to deliver access logs to.
+ - See U(https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html)
+ for more information about the necessary S3 bucket policies.
+ - Required when I(enabled=True).
+ required: false
+ type: str
+ s3_prefix:
+ description:
+ - Where in the S3 bucket to deliver the logs.
+ - If the prefix is not provided or set to C(""), the log is placed at the root level of the bucket.
+ required: false
+ type: str
+ default: ""
+ interval:
+ description:
+ - The interval for publishing the access logs to S3.
+ required: false
+ type: int
+ default: 60
+ choices: [ 5, 60 ]
+ subnets:
+ description:
+ - A list of VPC subnets to use when creating the ELB.
+ - Mutually exclusive with I(zones).
+ type: list
+ elements: str
+ purge_subnets:
+ description:
+ - Purge existing subnets on the ELB that are not found in I(subnets).
+ - Because it is not permitted to add multiple subnets from the same
+ availability zone, subnets to be purged will be removed before new
+ subnets are added. This may cause a brief outage if you try to replace
+ all subnets at once.
+ type: bool
+ default: false
+ scheme:
+ description:
+ - The scheme to use when creating the ELB.
+ - For a private VPC-visible ELB use C(internal).
+ - If you choose to update your scheme with a different value the ELB will be destroyed and
+ a new ELB created.
+ - Defaults to I(scheme=internet-facing).
+ type: str
+ choices: ["internal", "internet-facing"]
+ connection_draining_timeout:
+ description:
+ - Wait a specified timeout allowing connections to drain before terminating an instance.
+ - Set to C(0) to disable connection draining.
+ type: int
+ idle_timeout:
+ description:
+ - ELB connections from clients and to servers are timed out after this amount of time.
+ type: int
+ cross_az_load_balancing:
+ description:
+ - Distribute load across all configured Availability Zones.
+ - Defaults to C(false).
+ type: bool
+ stickiness:
+ description:
+ - A dictionary of stickiness policy settings.
+ - Policy will be applied to all listeners (see examples).
+ type: dict
+ suboptions:
+ type:
+ description:
+ - The type of stickiness policy to apply.
+ - Required if I(enabled=true).
+ - Ignored if I(enabled=false).
+ required: false
+ type: 'str'
+ choices: ['application','loadbalancer']
+ enabled:
+ description:
+ - When I(enabled=false) session stickiness will be disabled for all listeners.
+ required: false
+ type: bool
+ default: true
+ cookie:
+ description:
+ - The name of the application cookie used for stickiness.
+ - Required if I(enabled=true) and I(type=application).
+ - Ignored if I(enabled=false).
+ required: false
+ type: str
+ expiration:
+ description:
+ - The time period, in seconds, after which the cookie should be considered stale.
+ - If this parameter is not specified, the stickiness session lasts for the duration of the browser session.
+ - Ignored if I(enabled=false).
+ required: false
+ type: int
+ wait:
+ description:
+ - When creating, deleting, or adding instances to an ELB, if I(wait=true)
+ Ansible will wait for both the load balancer and related network interfaces
+ to finish creating/deleting.
+ - Support for waiting when adding instances was added in release 2.1.0.
+ type: bool
+ default: false
+ wait_timeout:
+ description:
+ - Used in conjunction with I(wait). Number of seconds to wait for the ELB to be terminated.
+ - A maximum of 600 seconds (10 minutes) is allowed.
+ type: int
+ default: 180
+ tags:
+ description:
+ - A dictionary of tags to apply to the ELB.
+ - To delete all tags supply an empty dict (C({})) and set
+ I(purge_tags=true).
+ type: dict
+ purge_tags:
+ description:
+ - Whether to remove existing tags that aren't passed in the I(tags) parameter.
+ type: bool
+ default: true
+ version_added: 2.1.0
+
+notes:
+- The ec2_elb fact currently set by this module has been deprecated and will no
+ longer be set after release 4.0.0 of the collection.
+
+extends_documentation_fragment:
+- amazon.aws.aws
+- amazon.aws.ec2
+'''
+
+EXAMPLES = """
+# Note: None of these examples set aws_access_key, aws_secret_key, or region.
+# It is assumed that their matching environment variables are set.
+
+# Basic provisioning example (non-VPC)
+
+- amazon.aws.elb_classic_lb:
+ name: "test-please-delete"
+ state: present
+ zones:
+ - us-east-1a
+ - us-east-1d
+ listeners:
+ - protocol: http # options are http, https, ssl, tcp
+ load_balancer_port: 80
+ instance_port: 80
+ proxy_protocol: True
+ - protocol: https
+ load_balancer_port: 443
+ instance_protocol: http # optional, defaults to value of protocol setting
+ instance_port: 80
+ # ssl certificate required for https or ssl
+ ssl_certificate_id: "arn:aws:iam::123456789012:server-certificate/company/servercerts/ProdServerCert"
+
+# Internal ELB example
+
+- amazon.aws.elb_classic_lb:
+ name: "test-vpc"
+ scheme: internal
+ state: present
+ instance_ids:
+ - i-abcd1234
+ purge_instance_ids: true
+ subnets:
+ - subnet-abcd1234
+ - subnet-1a2b3c4d
+ listeners:
+ - protocol: http # options are http, https, ssl, tcp
+ load_balancer_port: 80
+ instance_port: 80
+
+# Configure a health check and the access logs
+- amazon.aws.elb_classic_lb:
+ name: "test-please-delete"
+ state: present
+ zones:
+ - us-east-1d
+ listeners:
+ - protocol: http
+ load_balancer_port: 80
+ instance_port: 80
+ health_check:
+ ping_protocol: http # options are http, https, ssl, tcp
+ ping_port: 80
+ ping_path: "/index.html" # not required for tcp or ssl
+ response_timeout: 5 # seconds
+ interval: 30 # seconds
+ unhealthy_threshold: 2
+ healthy_threshold: 10
+ access_logs:
+ interval: 5 # minutes (defaults to 60)
+ s3_location: "my-bucket" # This value is required if access_logs is set
+ s3_prefix: "logs"
+
+# Ensure ELB is gone
+- amazon.aws.elb_classic_lb:
+ name: "test-please-delete"
+ state: absent
+
+# Ensure ELB is gone and wait for check (for default timeout)
+- amazon.aws.elb_classic_lb:
+ name: "test-please-delete"
+ state: absent
+ wait: yes
+
+# Ensure ELB is gone and wait for check with timeout value
+- amazon.aws.elb_classic_lb:
+ name: "test-please-delete"
+ state: absent
+ wait: yes
+ wait_timeout: 600
+
+# Normally, this module will purge any listeners that exist on the ELB
+# but aren't specified in the listeners parameter. If purge_listeners is
+# false it leaves them alone
+- amazon.aws.elb_classic_lb:
+ name: "test-please-delete"
+ state: present
+ zones:
+ - us-east-1a
+ - us-east-1d
+ listeners:
+ - protocol: http
+ load_balancer_port: 80
+ instance_port: 80
+ purge_listeners: no
+
+# Normally, this module will leave availability zones that are enabled
+# on the ELB alone. If purge_zones is true, then any extraneous zones
+# will be removed
+- amazon.aws.elb_classic_lb:
+ name: "test-please-delete"
+ state: present
+ zones:
+ - us-east-1a
+ - us-east-1d
+ listeners:
+ - protocol: http
+ load_balancer_port: 80
+ instance_port: 80
+ purge_zones: yes
+
+# Creates an ELB and assigns a list of subnets to it.
+- amazon.aws.elb_classic_lb:
+ state: present
+ name: 'New ELB'
+ security_group_ids: 'sg-123456,sg-67890'
+ subnets: 'subnet-123456,subnet-67890'
+ purge_subnets: yes
+ listeners:
+ - protocol: http
+ load_balancer_port: 80
+ instance_port: 80
+
+# Create an ELB with connection draining, increased idle timeout and cross availability
+# zone load balancing
+- amazon.aws.elb_classic_lb:
+ name: "New ELB"
+ state: present
+ connection_draining_timeout: 60
+ idle_timeout: 300
+ cross_az_load_balancing: "yes"
+ zones:
+ - us-east-1a
+ - us-east-1d
+ listeners:
+ - protocol: http
+ load_balancer_port: 80
+ instance_port: 80
+
+# Create an ELB with load balancer stickiness enabled
+- amazon.aws.elb_classic_lb:
+ name: "New ELB"
+ state: present
+ zones:
+ - us-east-1a
+ - us-east-1d
+ listeners:
+ - protocol: http
+ load_balancer_port: 80
+ instance_port: 80
+ stickiness:
+ type: loadbalancer
+ enabled: yes
+ expiration: 300
+
+# Create an ELB with application stickiness enabled
+- amazon.aws.elb_classic_lb:
+ name: "New ELB"
+ state: present
+ zones:
+ - us-east-1a
+ - us-east-1d
+ listeners:
+ - protocol: http
+ load_balancer_port: 80
+ instance_port: 80
+ stickiness:
+ type: application
+ enabled: yes
+ cookie: SESSIONID
+
+# Create an ELB and add tags
+- amazon.aws.elb_classic_lb:
+ name: "New ELB"
+ state: present
+ zones:
+ - us-east-1a
+ - us-east-1d
+ listeners:
+ - protocol: http
+ load_balancer_port: 80
+ instance_port: 80
+ tags:
+ Name: "New ELB"
+ stack: "production"
+ client: "Bob"
+
+# Delete all tags from an ELB
+- amazon.aws.elb_classic_lb:
+ name: "New ELB"
+ state: present
+ zones:
+ - us-east-1a
+ - us-east-1d
+ listeners:
+ - protocol: http
+ load_balancer_port: 80
+ instance_port: 80
+ tags: {}
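+
+# Create an ELB that ships access logs to S3. The access_logs keys shown
+# (enabled, s3_location, s3_prefix, interval) match what the module
+# consumes; the bucket name is illustrative.
+- amazon.aws.elb_classic_lb:
+ name: "New ELB"
+ state: present
+ zones:
+ - us-east-1a
+ - us-east-1d
+ listeners:
+ - protocol: http
+ load_balancer_port: 80
+ instance_port: 80
+ access_logs:
+ enabled: yes
+ s3_location: "my-elb-logs-bucket"
+ s3_prefix: "logs"
+ interval: 60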
+"""
+
+RETURN = '''
+elb:
+ description: Load Balancer attributes
+ returned: always
+ type: dict
+ contains:
+ app_cookie_policy:
+ description: The name of the policy used to control whether the ELB uses an application cookie stickiness policy.
+ type: str
+ sample: ec2-elb-lb-AppCookieStickinessPolicyType
+ returned: when state is not 'absent'
+ backends:
+ description: A list of the backend policies applied to the ELB, in "instance-port:policy-name" format.
+ type: list
+ elements: str
+ sample: ['8181:ProxyProtocol-policy']
+ returned: when state is not 'absent'
+ connection_draining_timeout:
+ description: The maximum time, in seconds, to keep the existing connections open before deregistering the instances.
+ type: int
+ sample: 25
+ returned: when state is not 'absent'
+ cross_az_load_balancing:
+ description: Either C('yes') if cross-AZ load balancing is enabled, or C('no') if cross-AZ load balancing is disabled.
+ type: str
+ sample: 'yes'
+ returned: when state is not 'absent'
+ dns_name:
+ description: The DNS name of the ELB.
+ type: str
+ sample: internal-ansible-test-935c585850ac-1516306744.us-east-1.elb.amazonaws.com
+ returned: when state is not 'absent'
+ health_check:
+ description: A dictionary describing the health check used for the ELB.
+ type: dict
+ returned: when state is not 'absent'
+ contains:
+ healthy_threshold:
+ description: The number of consecutive successful health checks before marking an instance as healthy.
+ type: int
+ sample: 2
+ interval:
+ description: The time, in seconds, between each health check.
+ type: int
+ sample: 10
+ target:
+ description: The protocol and port and, for HTTP(S) health checks, the path tested by the health check.
+ type: str
+ sample: TCP:22
+ timeout:
+ description: The time, in seconds, after which an in-progress health check is considered failed due to a timeout.
+ type: int
+ sample: 5
+ unhealthy_threshold:
+ description: The number of consecutive failed health checks before marking an instance as unhealthy.
+ type: int
+ sample: 2
+ hosted_zone_id:
+ description: The ID of the Amazon Route 53 hosted zone for the load balancer.
+ type: str
+ sample: Z35SXDOTRQ7X7K
+ returned: when state is not 'absent'
+ hosted_zone_name:
+ description: The DNS name of the load balancer when using a custom hostname.
+ type: str
+ sample: 'ansible-module.example'
+ returned: when state is not 'absent'
+ idle_timeout:
+ description: The length of time before an idle connection is dropped by the ELB.
+ type: int
+ sample: 50
+ returned: when state is not 'absent'
+ in_service_count:
+ description: The number of instances attached to the ELB in an in-service state.
+ type: int
+ sample: 1
+ returned: when state is not 'absent'
+ instance_health:
+ description: A list of dictionaries describing the health of each instance attached to the ELB.
+ type: list
+ elements: dict
+ returned: when state is not 'absent'
+ contains:
+ description:
+ description: A human readable description of why the instance is not in service.
+ type: str
+ sample: N/A
+ returned: when state is not 'absent'
+ instance_id:
+ description: The ID of the instance.
+ type: str
+ sample: i-03dcc8953a03d6435
+ returned: when state is not 'absent'
+ reason_code:
+ description: A code describing why the instance is not in service.
+ type: str
+ sample: N/A
+ returned: when state is not 'absent'
+ state:
+ description: The current service state of the instance.
+ type: str
+ sample: InService
+ returned: when state is not 'absent'
+ instances:
+ description: A list of the IDs of instances attached to the ELB.
+ type: list
+ elements: str
+ sample: ['i-03dcc8953a03d6435']
+ returned: when state is not 'absent'
+ lb_cookie_policy:
+ description: The name of the policy used to control whether the ELB uses a cookie stickiness policy.
+ type: str
+ sample: ec2-elb-lb-LBCookieStickinessPolicyType
+ returned: when state is not 'absent'
+ listeners:
+ description:
+ - A list of lists describing the listeners attached to the ELB.
+ - The nested list contains the listener port, the instance port, the listener protocol, the instance protocol,
+ and where appropriate the ID of the SSL certificate for the port.
+ type: list
+ elements: list
+ sample: [[22, 22, 'TCP', 'TCP'], [80, 8181, 'HTTP', 'HTTP']]
+ returned: when state is not 'absent'
+ name:
+ description: The name of the ELB. This name is unique per-region, per-account.
+ type: str
+ sample: ansible-test-935c585850ac
+ returned: when state is not 'absent'
+ out_of_service_count:
+ description: The number of instances attached to the ELB in an out-of-service state.
+ type: int
+ sample: 0
+ returned: when state is not 'absent'
+ proxy_policy:
+ description: The name of the policy used to control whether the ELB operates using the Proxy protocol.
+ type: str
+ sample: ProxyProtocol-policy
+ returned: when the proxy protocol policy exists.
+ region:
+ description: The AWS region in which the ELB is running.
+ type: str
+ sample: us-east-1
+ returned: always
+ scheme:
+ description: Whether the ELB is an C('internal') or a C('internet-facing') load balancer.
+ type: str
+ sample: internal
+ returned: when state is not 'absent'
+ security_group_ids:
+ description: A list of the IDs of the Security Groups attached to the ELB.
+ type: list
+ elements: str
+ sample: ['sg-0c12ebd82f2fb97dc', 'sg-01ec7378d0c7342e6']
+ returned: when state is not 'absent'
+ status:
+ description: A minimal description of the current state of the ELB. Valid values are C('exists'), C('gone'), C('deleted'), C('created').
+ type: str
+ sample: exists
+ returned: always
+ subnets:
+ description: A list of the subnet IDs attached to the ELB.
+ type: list
+ elements: str
+ sample: ['subnet-00d9d0f70c7e5f63c', 'subnet-03fa5253586b2d2d5']
+ returned: when state is not 'absent'
+ tags:
+ description: A dictionary describing the tags attached to the ELB.
+ type: dict
+ sample: {'Name': 'ansible-test-935c585850ac', 'ExampleTag': 'Example Value'}
+ returned: when state is not 'absent'
+ unknown_instance_state_count:
+ description: The number of instances attached to the ELB in an unknown state.
+ type: int
+ sample: 0
+ returned: when state is not 'absent'
+ zones:
+ description: A list of the availability zones in which the ELB is running.
+ type: list
+ elements: str
+ sample: ['us-east-1b', 'us-east-1a']
+ returned: when state is not 'absent'
+'''
+
+try:
+ import botocore
+except ImportError:
+ pass # Taken care of by AnsibleAWSModule
+
+from ..module_utils.core import AnsibleAWSModule
+from ..module_utils.core import is_boto3_error_code
+from ..module_utils.core import scrub_none_parameters
+from ..module_utils.ec2 import AWSRetry
+from ..module_utils.ec2 import ansible_dict_to_boto3_filter_list
+from ..module_utils.ec2 import ansible_dict_to_boto3_tag_list
+from ..module_utils.ec2 import boto3_tag_list_to_ansible_dict
+from ..module_utils.ec2 import camel_dict_to_snake_dict
+from ..module_utils.ec2 import compare_aws_tags
+from ..module_utils.ec2 import snake_dict_to_camel_dict
+
+from ..module_utils.ec2 import get_ec2_security_group_ids_from_names
+from ..module_utils.waiters import get_waiter
+
+
+class ElbManager(object):
+ """Handles ELB creation and destruction"""
+
+ def __init__(self, module):
+
+ self.module = module
+
+ self.name = module.params['name']
+ self.listeners = module.params['listeners']
+ self.purge_listeners = module.params['purge_listeners']
+ self.instance_ids = module.params['instance_ids']
+ self.purge_instance_ids = module.params['purge_instance_ids']
+ self.zones = module.params['zones']
+ self.purge_zones = module.params['purge_zones']
+ self.health_check = module.params['health_check']
+ self.access_logs = module.params['access_logs']
+ self.subnets = module.params['subnets']
+ self.purge_subnets = module.params['purge_subnets']
+ self.scheme = module.params['scheme']
+ self.connection_draining_timeout = module.params['connection_draining_timeout']
+ self.idle_timeout = module.params['idle_timeout']
+ self.cross_az_load_balancing = module.params['cross_az_load_balancing']
+ self.stickiness = module.params['stickiness']
+ self.wait = module.params['wait']
+ self.wait_timeout = module.params['wait_timeout']
+ self.tags = module.params['tags']
+ self.purge_tags = module.params['purge_tags']
+
+ self.changed = False
+ self.status = 'gone'
+
+ retry_decorator = AWSRetry.jittered_backoff()
+ self.client = self.module.client('elb', retry_decorator=retry_decorator)
+ self.ec2_client = self.module.client('ec2', retry_decorator=retry_decorator)
+
+ security_group_names = module.params['security_group_names']
+ self.security_group_ids = module.params['security_group_ids']
+
+ self._update_descriptions()
+
+ if security_group_names:
+ # Use the subnets attached to the VPC to find which VPC we're in and
+ # limit the search
+ if self.elb.get('Subnets'):
+ subnets = set(self.elb.get('Subnets') + list(self.subnets or []))
+ else:
+ subnets = set(self.subnets)
+ if subnets:
+ vpc_id = self._get_vpc_from_subnets(subnets)
+ else:
+ vpc_id = None
+ try:
+ self.security_group_ids = self._get_ec2_security_group_ids_from_names(
+ sec_group_list=security_group_names, vpc_id=vpc_id)
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Failed to convert security group names to IDs, try using security group IDs rather than names")
+
+ def _update_descriptions(self):
+ try:
+ self.elb = self._get_elb()
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ self.module.fail_json_aws(e, msg='Unable to describe load balancer')
+ try:
+ self.elb_attributes = self._get_elb_attributes()
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ self.module.fail_json_aws(e, msg='Unable to describe load balancer attributes')
+ try:
+ self.elb_policies = self._get_elb_policies()
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ self.module.fail_json_aws(e, msg='Unable to describe load balancer policies')
+ try:
+ self.elb_health = self._get_elb_instance_health()
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg='Unable to describe load balancer instance health')
+
+ # We have a number of complex parameters which can't be validated by
+ # AnsibleModule or are only required if the ELB doesn't exist.
+ def validate_params(self, state=None):
+ problem_found = False
+ # Validate that protocol is one of the permitted values
+ problem_found |= self._validate_listeners(self.listeners)
+ problem_found |= self._validate_health_check(self.health_check)
+ problem_found |= self._validate_stickiness(self.stickiness)
+ if state == 'present':
+ # When creating a new ELB
+ problem_found |= self._validate_creation_requirements()
+ problem_found |= self._validate_access_logs(self.access_logs)
+
+ # Pass check_mode down through to the module
+ @property
+ def check_mode(self):
+ return self.module.check_mode
+
+ def _get_elb_policies(self):
+ try:
+ attributes = self.client.describe_load_balancer_policies(LoadBalancerName=self.name)
+ except is_boto3_error_code(['LoadBalancerNotFound', 'LoadBalancerAttributeNotFoundException']):
+ return {}
+ except is_boto3_error_code('AccessDenied'): # pylint: disable=duplicate-except
+ # Be forgiving if we can't see the attributes
+ # Note: This will break idempotency if someone has 'set' but not 'describe' permissions
+ self.module.warn('Access Denied trying to describe load balancer policies')
+ return {}
+ return attributes['PolicyDescriptions']
+
+ def _get_elb_instance_health(self):
+ try:
+ instance_health = self.client.describe_instance_health(LoadBalancerName=self.name)
+ except is_boto3_error_code(['LoadBalancerNotFound', 'LoadBalancerAttributeNotFoundException']):
+ return []
+ except is_boto3_error_code('AccessDenied'): # pylint: disable=duplicate-except
+ # Be forgiving if we can't see the attributes
+ # Note: This will break idempotency if someone has 'set' but not 'describe' permissions
+ self.module.warn('Access Denied trying to describe instance health')
+ return []
+ return instance_health['InstanceStates']
+
+ def _get_elb_attributes(self):
+ try:
+ attributes = self.client.describe_load_balancer_attributes(LoadBalancerName=self.name)
+ except is_boto3_error_code(['LoadBalancerNotFound', 'LoadBalancerAttributeNotFoundException']):
+ return {}
+ except is_boto3_error_code('AccessDenied'): # pylint: disable=duplicate-except
+ # Be forgiving if we can't see the attributes
+ # Note: This will break idempotency if someone has 'set' but not 'describe' permissions
+ self.module.warn('Access Denied trying to describe load balancer attributes')
+ return {}
+ return attributes['LoadBalancerAttributes']
+
+ def _get_elb(self):
+ try:
+ elbs = self._describe_loadbalancer(self.name)
+ except is_boto3_error_code('LoadBalancerNotFound'):
+ return None
+
+ # Shouldn't happen, but Amazon could change the rules on us...
+ if len(elbs) > 1:
+ self.module.fail_json(msg='Found multiple ELBs with name {0}'.format(self.name))
+
+ self.status = 'exists' if self.status == 'gone' else self.status
+
+ return elbs[0]
+
+ def _delete_elb(self):
+ # True if succeeds, exception raised if not
+ try:
+ if not self.check_mode:
+ self.client.delete_load_balancer(aws_retry=True, LoadBalancerName=self.name)
+ self.changed = True
+ self.status = 'deleted'
+ except is_boto3_error_code('LoadBalancerNotFound'):
+ return False
+ return True
+
+ def _create_elb(self):
+ listeners = list(self._format_listener(l) for l in self.listeners)
+ if not self.scheme:
+ self.scheme = 'internet-facing'
+ params = dict(
+ LoadBalancerName=self.name,
+ AvailabilityZones=self.zones,
+ SecurityGroups=self.security_group_ids,
+ Subnets=self.subnets,
+ Listeners=listeners,
+ Scheme=self.scheme)
+ params = scrub_none_parameters(params)
+ if self.tags:
+ params['Tags'] = ansible_dict_to_boto3_tag_list(self.tags)
+
+ if not self.check_mode:
+ self.client.create_load_balancer(aws_retry=True, **params)
+ # create_load_balancer only returns the DNS name
+ self.elb = self._get_elb()
+ self.changed = True
+ self.status = 'created'
+ return True
+
+ def _format_listener(self, listener, inject_protocol=False):
+ """Formats listener into the format needed by the
+ ELB API"""
+
+ listener = scrub_none_parameters(listener)
+
+ for protocol in ['protocol', 'instance_protocol']:
+ if protocol in listener:
+ listener[protocol] = listener[protocol].upper()
+
+ if inject_protocol and 'instance_protocol' not in listener:
+ listener['instance_protocol'] = listener['protocol']
+
+ # Remove proxy_protocol, it has to be handled as a policy
+ listener.pop('proxy_protocol', None)
+
+ ssl_id = listener.pop('ssl_certificate_id', None)
+
+ formatted_listener = snake_dict_to_camel_dict(listener, True)
+ if ssl_id:
+ formatted_listener['SSLCertificateId'] = ssl_id
+
+ return formatted_listener
+
+ def _format_healthcheck_target(self):
+ """Compose target string from healthcheck parameters"""
+ protocol = self.health_check['ping_protocol'].upper()
+ path = ""
+
+ if protocol in ['HTTP', 'HTTPS'] and 'ping_path' in self.health_check:
+ path = self.health_check['ping_path']
+
+ return "%s:%s%s" % (protocol, self.health_check['ping_port'], path)
+
+ def _format_healthcheck(self):
+ return dict(
+ Target=self._format_healthcheck_target(),
+ Timeout=self.health_check['timeout'],
+ Interval=self.health_check['interval'],
+ UnhealthyThreshold=self.health_check['unhealthy_threshold'],
+ HealthyThreshold=self.health_check['healthy_threshold'],
+ )
+
+ def ensure_ok(self):
+ """Create the ELB"""
+ if not self.elb:
+ try:
+ self._create_elb()
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg="Failed to create load balancer")
+ try:
+ self.elb_attributes = self._get_elb_attributes()
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ self.module.fail_json_aws(e, msg='Unable to describe load balancer attributes')
+ self._wait_created()
+
+ # Some attributes are configured on creation, others need to be updated
+ # after creation. Skip updates for those set on creation
+ else:
+ if self._check_scheme():
+ # XXX We should probably set 'None' parameters based on the
+ # current state prior to deletion
+
+ # the only way to change the scheme is by recreating the resource
+ self.ensure_gone()
+ # We need to wait for it to be gone-gone
+ self._wait_gone(True)
+ try:
+ self._create_elb()
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg="Failed to recreate load balancer")
+ try:
+ self.elb_attributes = self._get_elb_attributes()
+ except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
+ self.module.fail_json_aws(e, msg='Unable to describe load balancer attributes')
+ else:
+ self._set_subnets()
+ self._set_zones()
+ self._set_security_groups()
+ self._set_elb_listeners()
+ self._set_tags()
+
+ self._set_health_check()
+ self._set_elb_attributes()
+ self._set_backend_policies()
+ self._set_stickiness_policies()
+ self._set_instance_ids()
+
+# if self._check_attribute_support('access_log'):
+# self._set_access_log()
+
+ def ensure_gone(self):
+ """Destroy the ELB"""
+ if self.elb:
+ try:
+ self._delete_elb()
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg="Failed to delete load balancer")
+ self._wait_gone()
+
+ def _wait_gone(self, wait=None):
+ if not wait and not self.wait:
+ return
+ try:
+ elb_removed = self._wait_for_elb_removed()
+ # Unfortunately, even though the ELB itself is removed quickly,
+ # the interfaces take longer, so security groups that depend on
+ # them cannot be deleted until the interface is reported as removed.
+ elb_interface_removed = self._wait_for_elb_interface_removed()
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg="Failed while waiting for load balancer deletion")
+
+ def _wait_created(self, wait=False):
+ if not wait and not self.wait:
+ return
+ try:
+ self._wait_for_elb_created()
+ # Can take longer than creation
+ self._wait_for_elb_interface_created()
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg="Failed while waiting for load balancer creation")
+
+ def get_load_balancer(self):
+ self._update_descriptions()
+ elb = dict(self.elb or {})
+ if not elb:
+ return {}
+
+ elb['LoadBalancerAttributes'] = self.elb_attributes
+ elb['LoadBalancerPolicies'] = self.elb_policies
+ load_balancer = camel_dict_to_snake_dict(elb)
+ try:
+ load_balancer['tags'] = self._get_tags()
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg="Failed to get load balancer tags")
+
+ return load_balancer
+
+ def get_info(self):
+ self._update_descriptions()
+
+ if not self.elb:
+ return dict(
+ name=self.name,
+ status=self.status,
+ region=self.module.region
+ )
+ check_elb = dict(self.elb)
+ check_elb_attrs = dict(self.elb_attributes or {})
+ check_policies = check_elb.get('Policies', {})
+ try:
+ lb_cookie_policy = check_policies['LBCookieStickinessPolicies'][0]['PolicyName']
+ except (KeyError, IndexError):
+ lb_cookie_policy = None
+ try:
+ app_cookie_policy = check_policies['AppCookieStickinessPolicies'][0]['PolicyName']
+ except (KeyError, IndexError):
+ app_cookie_policy = None
+
+ health_check = camel_dict_to_snake_dict(check_elb.get('HealthCheck', {}))
+
+ backend_policies = list()
+ for port, policies in self._get_backend_policies().items():
+ for policy in policies:
+ backend_policies.append("{0}:{1}".format(port, policy))
+
+ info = dict(
+ name=check_elb.get('LoadBalancerName'),
+ dns_name=check_elb.get('DNSName'),
+ zones=check_elb.get('AvailabilityZones'),
+ security_group_ids=check_elb.get('SecurityGroups'),
+ status=self.status,
+ subnets=check_elb.get('Subnets'),
+ scheme=check_elb.get('Scheme'),
+ hosted_zone_name=check_elb.get('CanonicalHostedZoneName'),
+ hosted_zone_id=check_elb.get('CanonicalHostedZoneNameID'),
+ lb_cookie_policy=lb_cookie_policy,
+ app_cookie_policy=app_cookie_policy,
+ proxy_policy=self._get_proxy_protocol_policy(),
+ backends=backend_policies,
+ instances=self._get_instance_ids(),
+ out_of_service_count=0,
+ in_service_count=0,
+ unknown_instance_state_count=0,
+ region=self.module.region,
+ health_check=health_check,
+ )
+
+ instance_health = camel_dict_to_snake_dict(dict(InstanceHealth=self.elb_health))
+ info.update(instance_health)
+
+ # instance state counts: InService or OutOfService
+ if info['instance_health']:
+ for instance_state in info['instance_health']:
+ if instance_state['state'] == "InService":
+ info['in_service_count'] += 1
+ elif instance_state['state'] == "OutOfService":
+ info['out_of_service_count'] += 1
+ else:
+ info['unknown_instance_state_count'] += 1
+
+ listeners = check_elb.get('ListenerDescriptions', [])
+ if listeners:
+ info['listeners'] = list(
+ self._api_listener_as_tuple(l['Listener']) for l in listeners
+ )
+ else:
+ info['listeners'] = []
+
+ try:
+ info['connection_draining_timeout'] = check_elb_attrs['ConnectionDraining']['Timeout']
+ except KeyError:
+ pass
+ try:
+ info['idle_timeout'] = check_elb_attrs['ConnectionSettings']['IdleTimeout']
+ except KeyError:
+ pass
+ try:
+ is_enabled = check_elb_attrs['CrossZoneLoadBalancing']['Enabled']
+ info['cross_az_load_balancing'] = 'yes' if is_enabled else 'no'
+ except KeyError:
+ pass
+
+ # # return stickiness info?
+
+ try:
+ info['tags'] = self._get_tags()
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg="Failed to get load balancer tags")
+
+ return info
+
+ @property
+ def _waiter_config(self):
+ delay = min(10, self.wait_timeout)
+ max_attempts = (self.wait_timeout // delay)
+ return {'Delay': delay, 'MaxAttempts': max_attempts}
+
+ def _wait_for_elb_created(self):
+ if self.check_mode:
+ return True
+
+ waiter = get_waiter(self.client, 'load_balancer_created')
+
+ try:
+ waiter.wait(
+ WaiterConfig=self._waiter_config,
+ LoadBalancerNames=[self.name],
+ )
+ except botocore.exceptions.WaiterError as e:
+ self.module.fail_json_aws(e, 'Timeout waiting for ELB creation')
+
+ return True
+
+ def _wait_for_elb_interface_created(self):
+ if self.check_mode:
+ return True
+ waiter = get_waiter(self.ec2_client, 'network_interface_available')
+
+ filters = ansible_dict_to_boto3_filter_list(
+ {'requester-id': 'amazon-elb',
+ 'description': 'ELB {0}'.format(self.name)}
+ )
+
+ try:
+ waiter.wait(
+ WaiterConfig=self._waiter_config,
+ Filters=filters,
+ )
+ except botocore.exceptions.WaiterError as e:
+ self.module.fail_json_aws(e, 'Timeout waiting for ELB Interface creation')
+
+ return True
+
+ def _wait_for_elb_removed(self):
+ if self.check_mode:
+ return True
+
+ waiter = get_waiter(self.client, 'load_balancer_deleted')
+
+ try:
+ waiter.wait(
+ WaiterConfig=self._waiter_config,
+ LoadBalancerNames=[self.name],
+ )
+ except botocore.exceptions.WaiterError as e:
+ self.module.fail_json_aws(e, 'Timeout waiting for ELB removal')
+
+ return True
+
+ def _wait_for_elb_interface_removed(self):
+ if self.check_mode:
+ return True
+
+ waiter = get_waiter(self.ec2_client, 'network_interface_deleted')
+
+ filters = ansible_dict_to_boto3_filter_list(
+ {'requester-id': 'amazon-elb',
+ 'description': 'ELB {0}'.format(self.name)}
+ )
+
+ try:
+ waiter.wait(
+ WaiterConfig=self._waiter_config,
+ Filters=filters,
+ )
+ except botocore.exceptions.WaiterError as e:
+ self.module.fail_json_aws(e, 'Timeout waiting for ELB Interface removal')
+
+ return True
+
+ def _wait_for_instance_state(self, waiter_name, instances):
+ if not instances:
+ return False
+
+ if self.check_mode:
+ return True
+
+ waiter = get_waiter(self.client, waiter_name)
+
+ instance_list = list(dict(InstanceId=instance) for instance in instances)
+
+ try:
+ waiter.wait(
+ WaiterConfig=self._waiter_config,
+ LoadBalancerName=self.name,
+ Instances=instance_list,
+ )
+ except botocore.exceptions.WaiterError as e:
+ self.module.fail_json_aws(e, 'Timeout waiting for ELB Instance State')
+
+ return True
+
+ def _create_elb_listeners(self, listeners):
+ """Takes a list of listener definitions and creates them"""
+ if not listeners:
+ return False
+ self.changed = True
+ if self.check_mode:
+ return True
+
+ self.client.create_load_balancer_listeners(
+ aws_retry=True,
+ LoadBalancerName=self.name,
+ Listeners=listeners,
+ )
+ return True
+
+ def _delete_elb_listeners(self, ports):
+ """Takes a list of listener ports and deletes them from the ELB"""
+ if not ports:
+ return False
+ self.changed = True
+ if self.check_mode:
+ return True
+
+ self.client.delete_load_balancer_listeners(
+ aws_retry=True,
+ LoadBalancerName=self.name,
+ LoadBalancerPorts=ports,
+ )
+ return True
+
+ def _set_elb_listeners(self):
+ """
+ Creates listeners specified by self.listeners; overwrites existing
+ listeners on these ports; removes extraneous listeners
+ """
+
+ if not self.listeners:
+ return False
+
+ # We can't use sets here: dicts aren't hashable, so convert to the boto3
+ # format and use a generator to filter
+ new_listeners = list(self._format_listener(l, True) for l in self.listeners)
+ existing_listeners = list(l['Listener'] for l in self.elb['ListenerDescriptions'])
+ listeners_to_remove = list(l for l in existing_listeners if l not in new_listeners)
+ listeners_to_add = list(l for l in new_listeners if l not in existing_listeners)
+
+ changed = False
+
+ if self.purge_listeners:
+ ports_to_remove = list(l['LoadBalancerPort'] for l in listeners_to_remove)
+ else:
+ old_ports = set(l['LoadBalancerPort'] for l in listeners_to_remove)
+ new_ports = set(l['LoadBalancerPort'] for l in listeners_to_add)
+ # If we're not purging, then we need to remove Listeners
+ # where the full definition doesn't match, but the port does
+ ports_to_remove = list(old_ports & new_ports)
+
+ # Update is a delete then add, so do the deletion first
+ try:
+ changed |= self._delete_elb_listeners(ports_to_remove)
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg="Failed to remove listeners from load balancer")
+ try:
+ changed |= self._create_elb_listeners(listeners_to_add)
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg="Failed to add listeners to load balancer")
+
+ return changed
+
+ def _api_listener_as_tuple(self, listener):
+ """Adds ssl_certificate_id to ELB API tuple if present"""
+ base_tuple = [
+ listener.get('LoadBalancerPort'),
+ listener.get('InstancePort'),
+ listener.get('Protocol'),
+ listener.get('InstanceProtocol'),
+ ]
+ if listener.get('SSLCertificateId', False):
+ base_tuple.append(listener.get('SSLCertificateId'))
+ return tuple(base_tuple)
+
+ def _attach_subnets(self, subnets):
+ if not subnets:
+ return False
+ self.changed = True
+ if self.check_mode:
+ return True
+ self.client.attach_load_balancer_to_subnets(
+ aws_retry=True,
+ LoadBalancerName=self.name,
+ Subnets=subnets)
+ return True
+
+ def _detach_subnets(self, subnets):
+ if not subnets:
+ return False
+ self.changed = True
+ if self.check_mode:
+ return True
+ self.client.detach_load_balancer_from_subnets(
+ aws_retry=True,
+ LoadBalancerName=self.name,
+ Subnets=subnets)
+ return True
+
+ def _set_subnets(self):
+ """Determine which subnets need to be attached or detached on the ELB"""
+ # Subnets parameter not set, nothing to change
+ if self.subnets is None:
+ return False
+
+ changed = False
+
+ if self.purge_subnets:
+ subnets_to_detach = list(set(self.elb['Subnets']) - set(self.subnets))
+ else:
+ subnets_to_detach = list()
+ subnets_to_attach = list(set(self.subnets) - set(self.elb['Subnets']))
+
+ # You can't add multiple subnets from the same AZ. Remove first, then
+ # add.
+ try:
+ changed |= self._detach_subnets(subnets_to_detach)
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg="Failed to detach subnets from load balancer")
+ try:
+ changed |= self._attach_subnets(subnets_to_attach)
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg="Failed to attach subnets to load balancer")
+
+ return changed
+
+ def _check_scheme(self):
+ """Determine if the current scheme is different than the scheme of the ELB"""
+ if self.scheme:
+ if self.elb['Scheme'] != self.scheme:
+ return True
+ return False
+
+ def _enable_zones(self, zones):
+ if not zones:
+ return False
+ self.changed = True
+ if self.check_mode:
+ return True
+
+ try:
+ self.client.enable_availability_zones_for_load_balancer(
+ aws_retry=True,
+ LoadBalancerName=self.name,
+ AvailabilityZones=zones,
+ )
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg='Failed to enable zones for load balancer')
+ return True
+
+ def _disable_zones(self, zones):
+ if not zones:
+ return False
+ self.changed = True
+ if self.check_mode:
+ return True
+
+ try:
+ self.client.disable_availability_zones_for_load_balancer(
+ aws_retry=True,
+ LoadBalancerName=self.name,
+ AvailabilityZones=zones,
+ )
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg='Failed to disable zones for load balancer')
+ return True
+
+ def _set_zones(self):
+ """Determine which zones need to be enabled or disabled on the ELB"""
+ # zones parameter not set, nothing to change
+ if self.zones is None:
+ return False
+
+ changed = False
+
+ if self.purge_zones:
+ zones_to_disable = list(set(self.elb['AvailabilityZones']) - set(self.zones))
+ else:
+ zones_to_disable = list()
+ zones_to_enable = list(set(self.zones) - set(self.elb['AvailabilityZones']))
+
+ # Add before we remove to reduce the chance of an outage if someone
+ # replaces all zones at once
+ try:
+ changed |= self._enable_zones(zones_to_enable)
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg="Failed to enable zone on load balancer")
+ try:
+ changed |= self._disable_zones(zones_to_disable)
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg="Failed to disable zone on load balancer")
+
+ return changed
+
+ def _set_security_groups(self):
+ if not self.security_group_ids:
+ return False
+ # Security Group Names should already be converted to IDs by this point.
+ if set(self.elb['SecurityGroups']) == set(self.security_group_ids):
+ return False
+
+ self.changed = True
+
+ if self.check_mode:
+ return True
+
+ try:
+ self.client.apply_security_groups_to_load_balancer(
+ aws_retry=True,
+ LoadBalancerName=self.name,
+ SecurityGroups=self.security_group_ids,
+ )
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg="Failed to apply security groups to load balancer")
+ return True
+
+ def _set_health_check(self):
+ """Set health check values on ELB as needed"""
+ if not self.health_check:
+ return False
+
+ health_check_config = self._format_healthcheck()
+
+ if health_check_config == self.elb['HealthCheck']:
+ return False
+
+ self.changed = True
+ if self.check_mode:
+ return True
+ try:
+ self.client.configure_health_check(
+ aws_retry=True,
+ LoadBalancerName=self.name,
+ HealthCheck=health_check_config,
+ )
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg="Failed to apply healthcheck to load balancer")
+
+ return True
+
+ def _set_elb_attributes(self):
+ attributes = {}
+ if self.cross_az_load_balancing is not None:
+ attr = dict(Enabled=self.cross_az_load_balancing)
+ if not self.elb_attributes.get('CrossZoneLoadBalancing', None) == attr:
+ attributes['CrossZoneLoadBalancing'] = attr
+
+ if self.idle_timeout is not None:
+ attr = dict(IdleTimeout=self.idle_timeout)
+ if not self.elb_attributes.get('ConnectionSettings', None) == attr:
+ attributes['ConnectionSettings'] = attr
+
+ if self.connection_draining_timeout is not None:
+ curr_attr = dict(self.elb_attributes.get('ConnectionDraining', {}))
+ if self.connection_draining_timeout == 0:
+ attr = dict(Enabled=False)
+ curr_attr.pop('Timeout', None)
+ else:
+ attr = dict(Enabled=True, Timeout=self.connection_draining_timeout)
+ if not curr_attr == attr:
+ attributes['ConnectionDraining'] = attr
+
+ if self.access_logs is not None:
+ curr_attr = dict(self.elb_attributes.get('AccessLog', {}))
+ # For disabling we only need to compare and pass 'Enabled'
+ if not self.access_logs.get('enabled'):
+ curr_attr = dict(Enabled=curr_attr.get('Enabled', False))
+ attr = dict(Enabled=self.access_logs.get('enabled'))
+ else:
+ attr = dict(
+ Enabled=True,
+ S3BucketName=self.access_logs['s3_location'],
+ S3BucketPrefix=self.access_logs.get('s3_prefix', ''),
+ EmitInterval=self.access_logs.get('interval', 60),
+ )
+ if not curr_attr == attr:
+ attributes['AccessLog'] = attr
+
+ if not attributes:
+ return False
+
+ self.changed = True
+ if self.check_mode:
+ return True
+
+ try:
+ self.client.modify_load_balancer_attributes(
+ aws_retry=True,
+ LoadBalancerName=self.name,
+ LoadBalancerAttributes=attributes
+ )
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg="Failed to apply load balancer attributes")
+
+ return True
+
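`_set_elb_attributes` only sends attributes that differ from the current state. A simplified sketch of that delta computation (hypothetical helper, covering just two of the attributes):

```python
def elb_attribute_delta(current, cross_az=None, idle_timeout=None):
    """Return only the LoadBalancerAttributes entries that differ from current."""
    delta = {}
    if cross_az is not None:
        want = {'Enabled': cross_az}
        if current.get('CrossZoneLoadBalancing') != want:
            delta['CrossZoneLoadBalancing'] = want
    if idle_timeout is not None:
        want = {'IdleTimeout': idle_timeout}
        if current.get('ConnectionSettings') != want:
            delta['ConnectionSettings'] = want
    return delta
```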
+ def _proxy_policy_name(self):
+ return 'ProxyProtocol-policy'
+
+ def _policy_name(self, policy_type):
+ return 'ec2-elb-lb-{0}'.format(policy_type)
+
+ def _get_listener_policies(self):
+ """Get a list of listener policies mapped to the LoadBalancerPort"""
+ if not self.elb:
+ return {}
+ listener_descriptions = self.elb.get('ListenerDescriptions', [])
+ policies = {l['LoadBalancerPort']: l['PolicyNames'] for l in listener_descriptions}
+ return policies
+
+ def _set_listener_policies(self, port, policies):
+ self.changed = True
+ if self.check_mode:
+ return True
+
+ try:
+ self.client.set_load_balancer_policies_of_listener(
+ aws_retry=True,
+ LoadBalancerName=self.name,
+ LoadBalancerPort=port,
+ PolicyNames=list(policies),
+ )
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg="Failed to set load balancer listener policies",
+ port=port, policies=policies)
+
+ return True
+
+ def _get_stickiness_policies(self):
+ """Get a list of AppCookieStickinessPolicyType and LBCookieStickinessPolicyType policies"""
+ return list(p['PolicyName'] for p in self.elb_policies if p['PolicyTypeName'] in ['AppCookieStickinessPolicyType', 'LBCookieStickinessPolicyType'])
+
+ def _get_app_stickness_policy_map(self):
+ """Get a mapping of App Cookie Stickiness policy names to their definitions"""
+ policies = self.elb.get('Policies', {}).get('AppCookieStickinessPolicies', [])
+ return {p['PolicyName']: p for p in policies}
+
+ def _get_lb_stickness_policy_map(self):
+ """Get a mapping of LB Cookie Stickiness policy names to their definitions"""
+ policies = self.elb.get('Policies', {}).get('LBCookieStickinessPolicies', [])
+ return {p['PolicyName']: p for p in policies}
+
+ def _purge_stickiness_policies(self):
+ """Removes all stickiness policies from all Load Balancers"""
+ # Used when purging stickiness policies or updating a policy (you can't
+ # update a policy while it's connected to a Listener)
+ stickiness_policies = set(self._get_stickiness_policies())
+ listeners = self.elb['ListenerDescriptions']
+ changed = False
+ for listener in listeners:
+ port = listener['Listener']['LoadBalancerPort']
+ policies = set(listener['PolicyNames'])
+ new_policies = set(policies - stickiness_policies)
+ if policies != new_policies:
+ changed |= self._set_listener_policies(port, new_policies)
+
+ return changed
+
+ def _set_stickiness_policies(self):
+ if self.stickiness is None:
+ return False
+
+ # Make sure that the list of policies and listeners is up to date, we're
+ # going to make changes to all listeners
+ self._update_descriptions()
+
+ if not self.stickiness['enabled']:
+ return self._purge_stickiness_policies()
+
+ if self.stickiness['type'] == 'loadbalancer':
+ policy_name = self._policy_name('LBCookieStickinessPolicyType')
+ expiration = self.stickiness.get('expiration')
+ if not expiration:
+ expiration = 0
+ policy_description = dict(
+ PolicyName=policy_name,
+ CookieExpirationPeriod=expiration,
+ )
+ existing_policies = self._get_lb_stickness_policy_map()
+ add_method = self.client.create_lb_cookie_stickiness_policy
+ elif self.stickiness['type'] == 'application':
+ policy_name = self._policy_name('AppCookieStickinessPolicyType')
+ policy_description = dict(
+ PolicyName=policy_name,
+ CookieName=self.stickiness.get('cookie', 0)
+ )
+ existing_policies = self._get_app_stickness_policy_map()
+ add_method = self.client.create_app_cookie_stickiness_policy
+ else:
+ # We shouldn't get here...
+ self.module.fail_json(
+ msg='Unknown stickiness policy {0}'.format(
+ self.stickiness['type']
+ )
+ )
+
+ changed = False
+ # To update a policy we need to delete then re-add, and we can only
+ # delete if the policy isn't attached to a listener
+ if policy_name in existing_policies:
+ if existing_policies[policy_name] != policy_description:
+ changed |= self._purge_stickiness_policies()
+
+ if changed:
+ self._update_descriptions()
+
+ changed |= self._set_stickiness_policy(
+ method=add_method,
+ description=policy_description,
+ existing_policies=existing_policies,
+ )
+
+ listeners = self.elb['ListenerDescriptions']
+ for listener in listeners:
+ changed |= self._set_lb_stickiness_policy(
+ listener=listener,
+ policy=policy_name
+ )
+ return changed
+
+ def _delete_loadbalancer_policy(self, policy_name):
+ self.changed = True
+ if self.check_mode:
+ return True
+
+ try:
+ self.client.delete_load_balancer_policy(
+ LoadBalancerName=self.name,
+ PolicyName=policy_name,
+ )
+ except is_boto3_error_code('InvalidConfigurationRequest'):
+ # Already deleted
+ return False
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e: # pylint: disable=duplicate-except
+ self.module.fail_json_aws(e, msg="Failed to delete load balancer policy {0}".format(policy_name))
+ return True
+
+ def _set_stickiness_policy(self, method, description, existing_policies=None):
+ changed = False
+ if existing_policies:
+ policy_name = description['PolicyName']
+ if policy_name in existing_policies:
+ if existing_policies[policy_name] == description:
+ return False
+ if existing_policies[policy_name] != description:
+ changed |= self._delete_loadbalancer_policy(policy_name)
+
+ self.changed = True
+ changed = True
+
+ if self.check_mode:
+ return changed
+
+ # A CookieExpirationPeriod of 0 is needed for the comparisons above, but
+ # must not be passed to the AWS API call.
+ if not description.get('CookieExpirationPeriod', None):
+ description.pop('CookieExpirationPeriod', None)
+
+ try:
+ method(
+ aws_retry=True,
+ LoadBalancerName=self.name,
+ **description
+ )
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg="Failed to create load balancer stickiness policy",
+ description=description)
+ return changed
+
+ def _set_lb_stickiness_policy(self, listener, policy):
+ port = listener['Listener']['LoadBalancerPort']
+ stickiness_policies = set(self._get_stickiness_policies())
+ changed = False
+
+ policies = set(listener['PolicyNames'])
+ new_policies = list(policies - stickiness_policies)
+ new_policies.append(policy)
+
+ if policies != set(new_policies):
+ changed |= self._set_listener_policies(port, new_policies)
+
+ return changed
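Attaching a stickiness policy, as `_set_lb_stickiness_policy` does above, means stripping any existing stickiness policies from the listener's policy list and appending the new one. A standalone sketch (hypothetical helper):

```python
def replace_stickiness_policy(listener_policies, stickiness_policies, new_policy):
    """Keep non-stickiness policies, then attach the new stickiness policy."""
    known_stickiness = set(stickiness_policies)
    kept = [p for p in listener_policies if p not in known_stickiness]
    kept.append(new_policy)
    return kept
```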
+
+ def _get_backend_policies(self):
+ """Get a list of backend policies mapped to the InstancePort"""
+ if not self.elb:
+ return {}
+ server_descriptions = self.elb.get('BackendServerDescriptions', [])
+ policies = {b['InstancePort']: b['PolicyNames'] for b in server_descriptions}
+ return policies
+
+ def _get_proxy_protocol_policy(self):
+ """Returns the name of the ProxyProtocol policy if created"""
+ all_proxy_policies = self._get_proxy_policies()
+ if not all_proxy_policies:
+ return None
+ if len(all_proxy_policies) == 1:
+ return all_proxy_policies[0]
+ return all_proxy_policies
+
+ def _get_proxy_policies(self):
+ """Get a list of ProxyProtocolPolicyType policies"""
+ return list(p['PolicyName'] for p in self.elb_policies if p['PolicyTypeName'] == 'ProxyProtocolPolicyType')
+
+ def _get_policy_map(self):
+ """Get a mapping of Policy names to their definitions"""
+ return {p['PolicyName']: p for p in self.elb_policies}
+
+ def _set_backend_policies(self):
+ """Sets policies for all backends"""
+ # Currently only supports setting ProxyProtocol policies
+ if not self.listeners:
+ return False
+
+ ensure_proxy_protocol = False
+ backend_policies = self._get_backend_policies()
+ proxy_policies = set(self._get_proxy_policies())
+
+ proxy_ports = dict()
+ for listener in self.listeners:
+ proxy_protocol = listener.get('proxy_protocol', None)
+ # Only look at the listeners for which proxy_protocol is defined
+ if proxy_protocol is None:
+ continue
+ instance_port = listener.get('instance_port')
+ if proxy_ports.get(instance_port, None) is not None:
+ if proxy_ports[instance_port] != proxy_protocol:
+ self.module.fail_json(
+ msg='proxy_protocol set to conflicting values for listeners'
+ ' on port {0}'.format(instance_port))
+ proxy_ports[instance_port] = proxy_protocol
+
+ if not proxy_ports:
+ return False
+
+ changed = False
+
+ # If anyone's set proxy_protocol to true, make sure we have our policy
+ # in place.
+ proxy_policy_name = self._proxy_policy_name()
+ if any(proxy_ports.values()):
+ changed |= self._set_proxy_protocol_policy(proxy_policy_name)
+
+ for port in proxy_ports:
+ current_policies = set(backend_policies.get(port, []))
+ new_policies = list(current_policies - proxy_policies)
+ if proxy_ports[port]:
+ new_policies.append(proxy_policy_name)
+
+ changed |= self._set_backend_policy(port, new_policies)
+
+ return changed
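The per-port `proxy_protocol` collection and conflict check in `_set_backend_policies` can be sketched in isolation (hypothetical helper; it raises instead of calling `fail_json`):

```python
def collect_proxy_ports(listeners):
    """Map instance_port -> proxy_protocol, rejecting conflicting settings."""
    proxy_ports = {}
    for listener in listeners:
        proxy_protocol = listener.get('proxy_protocol')
        if proxy_protocol is None:
            continue  # listener neither opts in nor out
        port = listener['instance_port']
        if port in proxy_ports and proxy_ports[port] != proxy_protocol:
            raise ValueError('conflicting proxy_protocol for port {0}'.format(port))
        proxy_ports[port] = proxy_protocol
    return proxy_ports
```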
+
+ def _set_backend_policy(self, port, policies):
+ backend_policies = self._get_backend_policies()
+ current_policies = set(backend_policies.get(port, []))
+
+ if current_policies == set(policies):
+ return False
+
+ self.changed = True
+
+ if self.check_mode:
+ return True
+
+ try:
+ self.client.set_load_balancer_policies_for_backend_server(
+ aws_retry=True,
+ LoadBalancerName=self.name,
+ InstancePort=port,
+ PolicyNames=policies,
+ )
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg="Failed to set load balancer backend policies",
+ port=port, policies=policies)
+
+ return True
+
+ def _set_proxy_protocol_policy(self, policy_name):
+ """Install a proxy protocol policy if needed"""
+ policy_map = self._get_policy_map()
+
+ policy_attributes = [dict(AttributeName='ProxyProtocol', AttributeValue='true')]
+
+ proxy_policy = dict(
+ PolicyName=policy_name,
+ PolicyTypeName='ProxyProtocolPolicyType',
+ PolicyAttributeDescriptions=policy_attributes,
+ )
+
+ existing_policy = policy_map.get(policy_name)
+ if proxy_policy == existing_policy:
+ return False
+
+ if existing_policy is not None:
+ self.module.fail_json(
+ msg="Unable to configure ProxyProtocol policy. "
+ "Policy with name {0} already exists and doesn't match.".format(policy_name),
+ policy=proxy_policy, existing_policy=existing_policy,
+ )
+
+ proxy_policy['PolicyAttributes'] = proxy_policy.pop('PolicyAttributeDescriptions')
+ proxy_policy['LoadBalancerName'] = self.name
+ self.changed = True
+
+ if self.check_mode:
+ return True
+
+ try:
+ self.client.create_load_balancer_policy(
+ aws_retry=True,
+ **proxy_policy
+ )
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg="Failed to create load balancer policy", policy=proxy_policy)
+
+ return True
+
+ def _get_instance_ids(self):
+ """Get the current list of instance ids installed in the elb"""
+ elb = self.elb or {}
+ return list(i['InstanceId'] for i in elb.get('Instances', []))
+
+ def _change_instances(self, method, instances):
+ if not instances:
+ return False
+
+ self.changed = True
+ if self.check_mode:
+ return True
+
+ instance_id_list = list({'InstanceId': i} for i in instances)
+ try:
+ method(
+ aws_retry=True,
+ LoadBalancerName=self.name,
+ Instances=instance_id_list,
+ )
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg="Failed to change instance registration",
+ instances=instance_id_list, name=self.name)
+ return True
+
+ def _set_instance_ids(self):
+ """Register or deregister instances from an lb instance"""
+ new_instances = self.instance_ids or []
+ existing_instances = self._get_instance_ids()
+
+ instances_to_add = set(new_instances) - set(existing_instances)
+ if self.purge_instance_ids:
+ instances_to_remove = set(existing_instances) - set(new_instances)
+ else:
+ instances_to_remove = []
+
+ changed = False
+
+ changed |= self._change_instances(self.client.register_instances_with_load_balancer,
+ instances_to_add)
+ if self.wait:
+ self._wait_for_instance_state('instance_in_service', list(instances_to_add))
+ changed |= self._change_instances(self.client.deregister_instances_from_load_balancer,
+ instances_to_remove)
+ if self.wait:
+ self._wait_for_instance_state('instance_deregistered', list(instances_to_remove))
+
+ return changed
+
+ def _get_tags(self):
+ tags = self.client.describe_tags(aws_retry=True,
+ LoadBalancerNames=[self.name])
+ if not tags:
+ return {}
+ try:
+ tags = tags['TagDescriptions'][0]['Tags']
+ except (KeyError, TypeError):
+ return {}
+ return boto3_tag_list_to_ansible_dict(tags)
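`_get_tags` relies on converting between the boto3 tag-list shape and a plain dict. A minimal sketch of both directions (hypothetical helpers mirroring `boto3_tag_list_to_ansible_dict` and `ansible_dict_to_boto3_tag_list`):

```python
def tag_list_to_dict(tag_list):
    """[{'Key': k, 'Value': v}, ...] -> {k: v, ...}"""
    return {t['Key']: t['Value'] for t in tag_list}


def dict_to_tag_list(tags):
    """{k: v, ...} -> [{'Key': k, 'Value': v}, ...]"""
    return [{'Key': k, 'Value': v} for k, v in tags.items()]
```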
+
+ def _add_tags(self, tags_to_set):
+ if not tags_to_set:
+ return False
+ self.changed = True
+ if self.check_mode:
+ return True
+ tags_to_add = ansible_dict_to_boto3_tag_list(tags_to_set)
+ self.client.add_tags(LoadBalancerNames=[self.name], Tags=tags_to_add)
+ return True
+
+ def _remove_tags(self, tags_to_unset):
+ if not tags_to_unset:
+ return False
+ self.changed = True
+ if self.check_mode:
+ return True
+ tags_to_remove = [dict(Key=tagkey) for tagkey in tags_to_unset]
+ self.client.remove_tags(LoadBalancerNames=[self.name], Tags=tags_to_remove)
+ return True
+
+ def _set_tags(self):
+ """Add/Delete tags"""
+ if self.tags is None:
+ return False
+
+ try:
+ current_tags = self._get_tags()
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg="Failed to get load balancer tags")
+
+ tags_to_set, tags_to_unset = compare_aws_tags(current_tags, self.tags,
+ self.purge_tags)
+
+ changed = False
+ try:
+ changed |= self._remove_tags(tags_to_unset)
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg="Failed to remove load balancer tags")
+ try:
+ changed |= self._add_tags(tags_to_set)
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ self.module.fail_json_aws(e, msg="Failed to add load balancer tags")
+
+ return changed
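The tag diff performed by `compare_aws_tags` in `_set_tags` can be approximated as follows (hypothetical helper, a simplification of the real module_util):

```python
def diff_tags(current, desired, purge_tags):
    """Return (tags_to_set, keys_to_unset) for reconciling tags."""
    to_set = {k: v for k, v in desired.items() if current.get(k) != v}
    # Existing keys absent from desired are only removed when purging.
    to_unset = [k for k in current if k not in desired] if purge_tags else []
    return to_set, to_unset
```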
+
+ def _validate_stickiness(self, stickiness):
+ problem_found = False
+ if not stickiness:
+ return problem_found
+ if not stickiness['enabled']:
+ return problem_found
+ if stickiness['type'] == 'application':
+ if not stickiness.get('cookie'):
+ problem_found = True
+ self.module.fail_json(
+ msg='cookie must be specified when stickiness type is "application"',
+ stickiness=stickiness,
+ )
+ if stickiness.get('expiration'):
+ self.warn(
+ msg='expiration is ignored when stickiness type is "application"',)
+ if stickiness['type'] == 'loadbalancer':
+ if stickiness.get('cookie'):
+ self.warn(
+ msg='cookie is ignored when stickiness type is "loadbalancer"',)
+ return problem_found
+
+ def _validate_access_logs(self, access_logs):
+ problem_found = False
+ if not access_logs:
+ return problem_found
+ if not access_logs['enabled']:
+ return problem_found
+ if not access_logs.get('s3_location', None):
+ problem_found = True
+ self.module.fail_json(
+ msg='s3_location must be provided when access_logs.enabled is true')
+ return problem_found
+
+ def _validate_creation_requirements(self):
+ if self.elb:
+ return False
+ problem_found = False
+ if not self.subnets and not self.zones:
+ problem_found = True
+ self.module.fail_json(
+ msg='One of subnets or zones must be provided when creating an ELB')
+ if not self.listeners:
+ problem_found = True
+ self.module.fail_json(
+ msg='listeners must be provided when creating an ELB')
+ return problem_found
+
+ def _validate_listeners(self, listeners):
+ if not listeners:
+ return False
+ return any(self._validate_listener(listener) for listener in listeners)
+
+ def _validate_listener(self, listener):
+ problem_found = False
+ if not listener:
+ return problem_found
+ for protocol in ['instance_protocol', 'protocol']:
+ value = listener.get(protocol, None)
+ problem = self._validate_protocol(value)
+ problem_found |= problem
+ if problem:
+ self.module.fail_json(
+ msg='Invalid protocol ({0}) in listener'.format(value),
+ listener=listener)
+ return problem_found
+
+ def _validate_health_check(self, health_check):
+ if not health_check:
+ return False
+ protocol = health_check['ping_protocol']
+ if self._validate_protocol(protocol):
+ self.module.fail_json(
+ msg='Invalid protocol ({0}) defined in health check'.format(protocol),
+ health_check=health_check,)
+ if protocol.upper() in ['HTTP', 'HTTPS']:
+ if not health_check['ping_path']:
+ self.module.fail_json(
+ msg='For HTTP and HTTPS health checks a ping_path must be provided',
+ health_check=health_check,)
+ return False
+
+ def _validate_protocol(self, protocol):
+ if not protocol:
+ return False
+ return protocol.upper() not in ['HTTP', 'HTTPS', 'TCP', 'SSL']
+
+ @AWSRetry.jittered_backoff()
+ def _describe_loadbalancer(self, lb_name):
+ paginator = self.client.get_paginator('describe_load_balancers')
+ return paginator.paginate(LoadBalancerNames=[lb_name]).build_full_result()['LoadBalancerDescriptions']
+
+ def _get_vpc_from_subnets(self, subnets):
+ if not subnets:
+ return None
+
+ subnet_details = self._describe_subnets(list(subnets))
+ vpc_ids = set(subnet['VpcId'] for subnet in subnet_details)
+
+ if not vpc_ids:
+ return None
+ if len(vpc_ids) > 1:
+ self.module.fail_json(msg="Subnets for an ELB may not span multiple VPCs",
+ subnets=subnet_details, vpc_ids=list(vpc_ids))
+ return vpc_ids.pop()
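The single-VPC invariant enforced by `_get_vpc_from_subnets` can be sketched standalone (hypothetical helper; it raises instead of calling `fail_json`):

```python
def vpc_from_subnets(subnet_details):
    """All subnets must live in one VPC; return its ID, or None if no subnets."""
    vpc_ids = {s['VpcId'] for s in subnet_details}
    if not vpc_ids:
        return None
    if len(vpc_ids) > 1:
        raise ValueError('Subnets for an ELB may not span multiple VPCs')
    return vpc_ids.pop()
```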
+
+ @AWSRetry.jittered_backoff()
+ def _describe_subnets(self, subnet_ids):
+ paginator = self.ec2_client.get_paginator('describe_subnets')
+ return paginator.paginate(SubnetIds=subnet_ids).build_full_result()['Subnets']
+
+ # Wrap it so we get the backoff
+ @AWSRetry.jittered_backoff()
+ def _get_ec2_security_group_ids_from_names(self, **params):
+ return get_ec2_security_group_ids_from_names(ec2_connection=self.ec2_client, **params)
+
+
+def main():
+
+ access_log_spec = dict(
+ enabled=dict(required=False, type='bool', default=True),
+ s3_location=dict(required=False, type='str'),
+ s3_prefix=dict(required=False, type='str', default=""),
+ interval=dict(required=False, type='int', default=60, choices=[5, 60]),
+ )
+
+ stickiness_spec = dict(
+ type=dict(required=False, type='str', choices=['application', 'loadbalancer']),
+ enabled=dict(required=False, type='bool', default=True),
+ cookie=dict(required=False, type='str'),
+ expiration=dict(required=False, type='int')
+ )
+
+ healthcheck_spec = dict(
+ ping_protocol=dict(required=True, type='str'),
+ ping_path=dict(required=False, type='str'),
+ ping_port=dict(required=True, type='int'),
+ interval=dict(required=True, type='int'),
+ timeout=dict(aliases=['response_timeout'], required=True, type='int'),
+ unhealthy_threshold=dict(required=True, type='int'),
+ healthy_threshold=dict(required=True, type='int'),
+ )
+
+ listeners_spec = dict(
+ load_balancer_port=dict(required=True, type='int'),
+ instance_port=dict(required=True, type='int'),
+ ssl_certificate_id=dict(required=False, type='str'),
+ protocol=dict(required=True, type='str'),
+ instance_protocol=dict(required=False, type='str'),
+ proxy_protocol=dict(required=False, type='bool'),
+ )
+
+ argument_spec = dict(
+ state=dict(required=True, choices=['present', 'absent']),
+ name=dict(required=True),
+ listeners=dict(type='list', elements='dict', options=listeners_spec),
+ purge_listeners=dict(default=True, type='bool'),
+ instance_ids=dict(type='list', elements='str'),
+ purge_instance_ids=dict(default=False, type='bool'),
+ zones=dict(type='list', elements='str'),
+ purge_zones=dict(default=False, type='bool'),
+ security_group_ids=dict(type='list', elements='str'),
+ security_group_names=dict(type='list', elements='str'),
+ health_check=dict(type='dict', options=healthcheck_spec),
+ subnets=dict(type='list', elements='str'),
+ purge_subnets=dict(default=False, type='bool'),
+ scheme=dict(choices=['internal', 'internet-facing']),
+ connection_draining_timeout=dict(type='int'),
+ idle_timeout=dict(type='int'),
+ cross_az_load_balancing=dict(type='bool'),
+ stickiness=dict(type='dict', options=stickiness_spec),
+ access_logs=dict(type='dict', options=access_log_spec),
+ wait=dict(default=False, type='bool'),
+ wait_timeout=dict(default=180, type='int'),
+ tags=dict(type='dict'),
+ purge_tags=dict(default=True, type='bool'),
+ )
+
+ module = AnsibleAWSModule(
+ argument_spec=argument_spec,
+ mutually_exclusive=[
+ ['security_group_ids', 'security_group_names'],
+ ['zones', 'subnets'],
+ ],
+ supports_check_mode=True,
+ )
+
+ wait_timeout = module.params['wait_timeout']
+ state = module.params['state']
+
+ if wait_timeout > 600:
+ module.fail_json(msg='wait_timeout maximum is 600 seconds')
+
+ elb_man = ElbManager(module)
+ elb_man.validate_params(state)
+
+ if state == 'present':
+ elb_man.ensure_ok()
+ # original boto style
+ elb = elb_man.get_info()
+ # boto3 style
+ lb = elb_man.get_load_balancer()
+ ec2_result = dict(elb=elb, load_balancer=lb)
+ elif state == 'absent':
+ elb_man.ensure_gone()
+ # original boto style
+ elb = elb_man.get_info()
+ ec2_result = dict(elb=elb)
+
+ ansible_facts = {'ec2_elb': 'info'}
+
+ module.exit_json(
+ ansible_facts=ansible_facts,
+ changed=elb_man.changed,
+ **ec2_result,
+ )
+
+
+if __name__ == '__main__':
+ main()
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/s3_bucket.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/s3_bucket.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/modules/s3_bucket.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/modules/s3_bucket.py 2021-11-12 18:13:53.000000000 +0000
@@ -24,8 +24,9 @@
short_description: Manage S3 buckets in AWS, DigitalOcean, Ceph, Walrus, FakeS3 and StorageGRID
description:
- Manage S3 buckets in AWS, DigitalOcean, Ceph, Walrus, FakeS3 and StorageGRID.
-requirements: [ boto3 ]
-author: "Rob White (@wimnat)"
+author:
+ - Rob White (@wimnat)
+ - Aubin Bikouo (@abikouo)
options:
force:
description:
@@ -40,7 +41,7 @@
type: str
policy:
description:
- - The JSON policy as a string.
+ - The JSON policy as a string. Set to the string C("null") to force the absence of a policy.
type: json
s3_url:
description:
@@ -120,6 +121,26 @@
default: false
type: bool
version_added: 1.3.0
+ object_ownership:
+ description:
+ - Configure the bucket's ownership controls.
+ - C(BucketOwnerPreferred) - Objects uploaded to the bucket change ownership to the bucket owner
+ if the objects are uploaded with the bucket-owner-full-control canned ACL.
+ - C(ObjectWriter) - The uploading account will own the object
+ if the object is uploaded with the bucket-owner-full-control canned ACL.
+ - This option cannot be used together with a I(delete_object_ownership) definition.
+ - Management of bucket ownership controls requires botocore>=1.18.11.
+ choices: [ 'BucketOwnerPreferred', 'ObjectWriter' ]
+ type: str
+ version_added: 2.0.0
+ delete_object_ownership:
+ description:
+ - Delete bucket's ownership controls.
+ - This option cannot be used together with a I(object_ownership) definition.
+ - Management of bucket ownership controls requires botocore>=1.18.11.
+ default: false
+ type: bool
+ version_added: 2.0.0
extends_documentation_fragment:
- amazon.aws.aws
@@ -193,7 +214,7 @@
public_access:
block_public_acls: true
ignore_public_acls: true
- ## keys == 'false' can be ommited, undefined keys defaults to 'false'
+ ## keys == 'false' can be omitted, undefined keys default to 'false'
# block_public_policy: false
# restrict_public_buckets: false
@@ -202,6 +223,24 @@
name: mys3bucket
state: present
delete_public_access: true
+
+# Create a bucket with object ownership controls set to ObjectWriter
+- amazon.aws.s3_bucket:
+ name: mys3bucket
+ state: present
+ object_ownership: ObjectWriter
+
+# Delete ownership controls from bucket
+- amazon.aws.s3_bucket:
+ name: mys3bucket
+ state: present
+ delete_object_ownership: true
+
+# Delete a bucket policy from bucket
+- amazon.aws.s3_bucket:
+ name: mys3bucket
+ state: present
+ policy: "null"
'''
import json
@@ -209,7 +248,7 @@
import time
try:
- from botocore.exceptions import BotoCoreError, ClientError, EndpointConnectionError, WaiterError
+ import botocore
except ImportError:
pass # Handled by AnsibleAWSModule
@@ -226,6 +265,7 @@
from ..module_utils.ec2 import compare_policies
from ..module_utils.ec2 import get_aws_connection_info
from ..module_utils.ec2 import snake_dict_to_camel_dict
+from ..module_utils.s3 import validate_bucket_name
def create_or_update_bucket(s3_client, module, location):
@@ -240,14 +280,16 @@
encryption_key_id = module.params.get("encryption_key_id")
public_access = module.params.get("public_access")
delete_public_access = module.params.get("delete_public_access")
+ delete_object_ownership = module.params.get("delete_object_ownership")
+ object_ownership = module.params.get("object_ownership")
changed = False
result = {}
try:
bucket_is_present = bucket_exists(s3_client, name)
- except EndpointConnectionError as e:
+ except botocore.exceptions.EndpointConnectionError as e:
module.fail_json_aws(e, msg="Invalid endpoint provided: %s" % to_text(e))
- except (BotoCoreError, ClientError) as e:
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(e, msg="Failed to check bucket presence")
if not bucket_is_present:
@@ -255,19 +297,19 @@
bucket_changed = create_bucket(s3_client, name, location)
s3_client.get_waiter('bucket_exists').wait(Bucket=name)
changed = changed or bucket_changed
- except WaiterError as e:
+ except botocore.exceptions.WaiterError as e:
module.fail_json_aws(e, msg='An error occurred waiting for the bucket to become available')
- except (BotoCoreError, ClientError) as e:
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(e, msg="Failed while creating bucket")
# Versioning
try:
versioning_status = get_bucket_versioning(s3_client, name)
- except is_boto3_error_code(['NotImplemented', 'XNotImplemented']) as exp:
+ except is_boto3_error_code(['NotImplemented', 'XNotImplemented']) as e:
if versioning is not None:
- module.fail_json_aws(exp, msg="Failed to get bucket versioning")
- except (BotoCoreError, ClientError) as exp: # pylint: disable=duplicate-except
- module.fail_json_aws(exp, msg="Failed to get bucket versioning")
+ module.fail_json_aws(e, msg="Failed to get bucket versioning")
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e: # pylint: disable=duplicate-except
+ module.fail_json_aws(e, msg="Failed to get bucket versioning")
else:
if versioning is not None:
required_versioning = None
@@ -280,7 +322,7 @@
try:
put_bucket_versioning(s3_client, name, required_versioning)
changed = True
- except (BotoCoreError, ClientError) as e:
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(e, msg="Failed to update bucket versioning")
versioning_status = wait_versioning_is_applied(module, s3_client, name, required_versioning)
@@ -294,11 +336,11 @@
# Requester pays
try:
requester_pays_status = get_bucket_request_payment(s3_client, name)
- except is_boto3_error_code(['NotImplemented', 'XNotImplemented']):
+ except is_boto3_error_code(['NotImplemented', 'XNotImplemented']) as e:
if requester_pays is not None:
- module.fail_json_aws(exp, msg="Failed to get bucket request payment")
- except (BotoCoreError, ClientError) as exp: # pylint: disable=duplicate-except
- module.fail_json_aws(exp, msg="Failed to get bucket request payment")
+ module.fail_json_aws(e, msg="Failed to get bucket request payment")
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e: # pylint: disable=duplicate-except
+ module.fail_json_aws(e, msg="Failed to get bucket request payment")
else:
if requester_pays is not None:
payer = 'Requester' if requester_pays else 'BucketOwner'
@@ -317,11 +359,11 @@
# Policy
try:
current_policy = get_bucket_policy(s3_client, name)
- except is_boto3_error_code(['NotImplemented', 'XNotImplemented']):
+ except is_boto3_error_code(['NotImplemented', 'XNotImplemented']) as e:
if policy is not None:
- module.fail_json_aws(exp, msg="Failed to get bucket policy")
- except (BotoCoreError, ClientError) as exp: # pylint: disable=duplicate-except
- module.fail_json_aws(exp, msg="Failed to get bucket policy")
+ module.fail_json_aws(e, msg="Failed to get bucket policy")
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e: # pylint: disable=duplicate-except
+ module.fail_json_aws(e, msg="Failed to get bucket policy")
else:
if policy is not None:
if isinstance(policy, string_types):
@@ -330,14 +372,14 @@
if not policy and current_policy:
try:
delete_bucket_policy(s3_client, name)
- except (BotoCoreError, ClientError) as e:
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(e, msg="Failed to delete bucket policy")
current_policy = wait_policy_is_applied(module, s3_client, name, policy)
changed = True
elif compare_policies(current_policy, policy):
try:
put_bucket_policy(s3_client, name, policy)
- except (BotoCoreError, ClientError) as e:
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(e, msg="Failed to update bucket policy")
current_policy = wait_policy_is_applied(module, s3_client, name, policy, should_fail=False)
if current_policy is None:
@@ -352,11 +394,11 @@
# Tags
try:
current_tags_dict = get_current_bucket_tags_dict(s3_client, name)
- except is_boto3_error_code(['NotImplemented', 'XNotImplemented']):
+ except is_boto3_error_code(['NotImplemented', 'XNotImplemented']) as e:
if tags is not None:
- module.fail_json_aws(exp, msg="Failed to get bucket tags")
- except (ClientError, BotoCoreError) as exp: # pylint: disable=duplicate-except
- module.fail_json_aws(exp, msg="Failed to get bucket tags")
+ module.fail_json_aws(e, msg="Failed to get bucket tags")
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e: # pylint: disable=duplicate-except
+ module.fail_json_aws(e, msg="Failed to get bucket tags")
else:
if tags is not None:
# Tags are always returned as text
@@ -370,13 +412,13 @@
if tags:
try:
put_bucket_tagging(s3_client, name, tags)
- except (BotoCoreError, ClientError) as e:
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(e, msg="Failed to update bucket tags")
else:
if purge_tags:
try:
delete_bucket_tagging(s3_client, name)
- except (BotoCoreError, ClientError) as e:
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(e, msg="Failed to delete bucket tags")
current_tags_dict = wait_tags_are_applied(module, s3_client, name, tags)
changed = True
@@ -386,69 +428,104 @@
# Encryption
try:
current_encryption = get_bucket_encryption(s3_client, name)
- except (ClientError, BotoCoreError) as e:
- module.fail_json_aws(e, msg="Failed to get bucket encryption")
-
- if encryption is not None:
- current_encryption_algorithm = current_encryption.get('SSEAlgorithm') if current_encryption else None
- current_encryption_key = current_encryption.get('KMSMasterKeyID') if current_encryption else None
- if encryption == 'none' and current_encryption_algorithm is not None:
- try:
- delete_bucket_encryption(s3_client, name)
- except (BotoCoreError, ClientError) as e:
- module.fail_json_aws(e, msg="Failed to delete bucket encryption")
- current_encryption = wait_encryption_is_applied(module, s3_client, name, None)
- changed = True
- elif encryption != 'none' and (encryption != current_encryption_algorithm) or (encryption == 'aws:kms' and current_encryption_key != encryption_key_id):
- expected_encryption = {'SSEAlgorithm': encryption}
- if encryption == 'aws:kms' and encryption_key_id is not None:
- expected_encryption.update({'KMSMasterKeyID': encryption_key_id})
- current_encryption = put_bucket_encryption_with_retry(module, s3_client, name, expected_encryption)
- changed = True
+ except is_boto3_error_code(['NotImplemented', 'XNotImplemented']) as e:
+ if encryption is not None:
+ module.fail_json_aws(e, msg="Failed to get bucket encryption settings")
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e: # pylint: disable=duplicate-except
+ module.fail_json_aws(e, msg="Failed to get bucket encryption settings")
+ else:
+ if encryption is not None:
+ current_encryption_algorithm = current_encryption.get('SSEAlgorithm') if current_encryption else None
+ current_encryption_key = current_encryption.get('KMSMasterKeyID') if current_encryption else None
+ if encryption == 'none':
+ if current_encryption_algorithm is not None:
+ try:
+ delete_bucket_encryption(s3_client, name)
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
+ module.fail_json_aws(e, msg="Failed to delete bucket encryption")
+ current_encryption = wait_encryption_is_applied(module, s3_client, name, None)
+ changed = True
+ else:
+ if (encryption != current_encryption_algorithm) or (encryption == 'aws:kms' and current_encryption_key != encryption_key_id):
+ expected_encryption = {'SSEAlgorithm': encryption}
+ if encryption == 'aws:kms' and encryption_key_id is not None:
+ expected_encryption.update({'KMSMasterKeyID': encryption_key_id})
+ current_encryption = put_bucket_encryption_with_retry(module, s3_client, name, expected_encryption)
+ changed = True
result['encryption'] = current_encryption
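The encryption hunk above replaces a one-line condition that mixed `and` and `or` with explicitly nested branches, so the `encryption != 'none'` guard visibly covers both checks. A generic reminder of the precedence involved (illustrative values, not the module's):

```python
# `A and B or C` parses as `(A and B) or C`, not `A and (B or C)`,
# because `and` binds tighter than `or`.
A, B, C = False, False, True

print(A and B or C)    # True  -- the `or C` branch fires regardless of A
print(A and (B or C))  # False -- explicit parentheses change the result
```

Nesting the branches, as the new code does, avoids relying on readers remembering this precedence rule.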
     # Public access block configuration
current_public_access = {}
- # -- Create / Update public access block
- if public_access is not None:
- try:
- current_public_access = get_bucket_public_access(s3_client, name)
- except (ClientError, BotoCoreError) as err_public_access:
- module.fail_json_aws(err_public_access, msg="Failed to get bucket public access configuration")
- camel_public_block = snake_dict_to_camel_dict(public_access, capitalize_first=True)
-
- if current_public_access == camel_public_block:
- result['public_access_block'] = current_public_access
- else:
- put_bucket_public_access(s3_client, name, camel_public_block)
- changed = True
- result['public_access_block'] = camel_public_block
+ try:
+ current_public_access = get_bucket_public_access(s3_client, name)
+ except is_boto3_error_code(['NotImplemented', 'XNotImplemented']) as e:
+ if public_access is not None:
+ module.fail_json_aws(e, msg="Failed to get bucket public access configuration")
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e: # pylint: disable=duplicate-except
+ module.fail_json_aws(e, msg="Failed to get bucket public access configuration")
+ else:
+ # -- Create / Update public access block
+ if public_access is not None:
+ camel_public_block = snake_dict_to_camel_dict(public_access, capitalize_first=True)
+
+ if current_public_access == camel_public_block:
+ result['public_access_block'] = current_public_access
+ else:
+ put_bucket_public_access(s3_client, name, camel_public_block)
+ changed = True
+ result['public_access_block'] = camel_public_block
- # -- Delete public access block
- if delete_public_access:
- try:
- current_public_access = get_bucket_public_access(s3_client, name)
- except (ClientError, BotoCoreError) as err_public_access:
- module.fail_json_aws(err_public_access, msg="Failed to get bucket public access configuration")
+ # -- Delete public access block
+ if delete_public_access:
+ if current_public_access == {}:
+ result['public_access_block'] = current_public_access
+ else:
+ delete_bucket_public_access(s3_client, name)
+ changed = True
+ result['public_access_block'] = {}
- if current_public_access == {}:
- result['public_access_block'] = current_public_access
- else:
- delete_bucket_public_access(s3_client, name)
- changed = True
- result['public_access_block'] = {}
+ # -- Bucket ownership
+ try:
+ bucket_ownership = get_bucket_ownership_cntrl(s3_client, module, name)
+ result['object_ownership'] = bucket_ownership
+ except KeyError as e:
+ # Some non-AWS providers appear to return policy documents that aren't
+    # compatible with AWS; cleanly catch KeyError so users can continue to use
+ # other features.
+ if delete_object_ownership or object_ownership is not None:
+ module.fail_json_aws(e, msg="Failed to get bucket object ownership settings")
+ except is_boto3_error_code(['NotImplemented', 'XNotImplemented']) as e:
+ if delete_object_ownership or object_ownership is not None:
+ module.fail_json_aws(e, msg="Failed to get bucket object ownership settings")
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e: # pylint: disable=duplicate-except
+        module.fail_json_aws(e, msg="Failed to get bucket object ownership settings")
+ else:
+ if delete_object_ownership:
+            # delete S3 bucket ownership
+ if bucket_ownership is not None:
+ delete_bucket_ownership(s3_client, name)
+ changed = True
+ result['object_ownership'] = None
+ elif object_ownership is not None:
+ # update S3 bucket ownership
+ if bucket_ownership != object_ownership:
+ put_bucket_ownership(s3_client, name, object_ownership)
+ changed = True
+ result['object_ownership'] = object_ownership
# Module exit
module.exit_json(changed=changed, name=name, **result)
def bucket_exists(s3_client, bucket_name):
- # head_bucket appeared to be really inconsistent, so we use list_buckets instead,
- # and loop over all the buckets, even if we know it's less performant :(
- all_buckets = s3_client.list_buckets(Bucket=bucket_name)['Buckets']
- return any(bucket['Name'] == bucket_name for bucket in all_buckets)
+ try:
+ s3_client.head_bucket(Bucket=bucket_name)
+ bucket_exists = True
+ except is_boto3_error_code('404'):
+ bucket_exists = False
+ return bucket_exists
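The rewritten `bucket_exists` above trades the `list_buckets` scan for a single `head_bucket` call, treating a 404-style error as "absent". A self-contained sketch of that shape, where `StubClient` and `NotFound` are illustrative stand-ins for the real boto3 client and `is_boto3_error_code('404')`:

```python
class NotFound(Exception):
    """Illustrative stand-in for a 404 ClientError."""

class StubClient:
    """Fake S3 client; the real module goes through boto3."""
    def __init__(self, buckets):
        self._buckets = set(buckets)

    def head_bucket(self, Bucket):
        # HEAD succeeds silently when the bucket exists, raises otherwise.
        if Bucket not in self._buckets:
            raise NotFound(Bucket)

def bucket_exists(s3_client, bucket_name):
    # Same shape as the module's check: a successful HEAD means the
    # bucket exists; a 404-style error means it does not.
    try:
        s3_client.head_bucket(Bucket=bucket_name)
        return True
    except NotFound:
        return False

client = StubClient(buckets=['logs'])
print(bucket_exists(client, 'logs'))     # True
print(bucket_exists(client, 'missing'))  # False
```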
@AWSRetry.exponential_backoff(max_delay=120)
@@ -515,9 +592,6 @@
@AWSRetry.exponential_backoff(max_delay=120, catch_extra_error_codes=['NoSuchBucket', 'OperationAborted'])
def get_bucket_encryption(s3_client, bucket_name):
- if not hasattr(s3_client, "get_bucket_encryption"):
- return None
-
try:
result = s3_client.get_bucket_encryption(Bucket=bucket_name)
return result.get('ServerSideEncryptionConfiguration', {}).get('Rules', [])[0].get('ApplyServerSideEncryptionByDefault')
@@ -532,7 +606,7 @@
for retries in range(1, max_retries + 1):
try:
put_bucket_encryption(s3_client, name, expected_encryption)
- except (BotoCoreError, ClientError) as e:
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e: # pylint: disable=duplicate-except
module.fail_json_aws(e, msg="Failed to set bucket encryption")
current_encryption = wait_encryption_is_applied(module, s3_client, name, expected_encryption,
should_fail=(retries == max_retries), retries=5)
@@ -588,11 +662,31 @@
s3_client.delete_public_access_block(Bucket=bucket_name)
+@AWSRetry.exponential_backoff(max_delay=120, catch_extra_error_codes=['NoSuchBucket', 'OperationAborted'])
+def delete_bucket_ownership(s3_client, bucket_name):
+ '''
+ Delete bucket ownership controls from S3 bucket
+ '''
+ s3_client.delete_bucket_ownership_controls(Bucket=bucket_name)
+
+
+@AWSRetry.exponential_backoff(max_delay=120, catch_extra_error_codes=['NoSuchBucket', 'OperationAborted'])
+def put_bucket_ownership(s3_client, bucket_name, target):
+ '''
+ Put bucket ownership controls for S3 bucket
+ '''
+ s3_client.put_bucket_ownership_controls(
+ Bucket=bucket_name,
+ OwnershipControls={
+ 'Rules': [{'ObjectOwnership': target}]
+ })
+
+
def wait_policy_is_applied(module, s3_client, bucket_name, expected_policy, should_fail=True):
for dummy in range(0, 12):
try:
current_policy = get_bucket_policy(s3_client, bucket_name)
- except (ClientError, BotoCoreError) as e:
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(e, msg="Failed to get bucket policy")
if compare_policies(current_policy, expected_policy):
@@ -610,7 +704,7 @@
for dummy in range(0, 12):
try:
requester_pays_status = get_bucket_request_payment(s3_client, bucket_name)
- except (BotoCoreError, ClientError) as e:
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(e, msg="Failed to get bucket request payment")
if requester_pays_status != expected_payer:
time.sleep(5)
@@ -627,7 +721,7 @@
for dummy in range(0, retries):
try:
encryption = get_bucket_encryption(s3_client, bucket_name)
- except (BotoCoreError, ClientError) as e:
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(e, msg="Failed to get updated encryption for bucket")
if encryption != expected_encryption:
time.sleep(5)
@@ -645,7 +739,7 @@
for dummy in range(0, 24):
try:
versioning_status = get_bucket_versioning(s3_client, bucket_name)
- except (BotoCoreError, ClientError) as e:
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(e, msg="Failed to get updated versioning for bucket")
if versioning_status.get('Status') != required_versioning:
time.sleep(8)
@@ -659,7 +753,7 @@
for dummy in range(0, 12):
try:
current_tags_dict = get_current_bucket_tags_dict(s3_client, bucket_name)
- except (ClientError, BotoCoreError) as e:
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(e, msg="Failed to get bucket policy")
if current_tags_dict != expected_tags_dict:
time.sleep(5)
@@ -692,6 +786,19 @@
return {}
+def get_bucket_ownership_cntrl(s3_client, module, bucket_name):
+ '''
+    Get the current bucket object ownership setting
+ '''
+ if not module.botocore_at_least('1.18.11'):
+ return None
+ try:
+ bucket_ownership = s3_client.get_bucket_ownership_controls(Bucket=bucket_name)
+ return bucket_ownership['OwnershipControls']['Rules'][0]['ObjectOwnership']
+ except is_boto3_error_code(['OwnershipControlsNotFoundError', 'NoSuchOwnershipControls']):
+ return None
+
+
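Many hunks above add `except is_boto3_error_code([...]) as e:` clauses. The helper works because Python evaluates the expression after `except` at match time, while the exception is in flight. A simplified, self-contained sketch of the idea; `ClientError` here is a local stand-in for botocore's class, and `is_error_code` is a hypothetical minimal version of the collection's helper:

```python
import sys

class ClientError(Exception):
    """Minimal stand-in for botocore.exceptions.ClientError."""
    def __init__(self, code):
        super().__init__(code)
        self.response = {'Error': {'Code': code}}

def is_error_code(codes):
    # Evaluated inside the `except` clause: return ClientError when the
    # in-flight exception's code matches (so the clause catches it),
    # otherwise a type that is never raised.
    if not isinstance(codes, list):
        codes = [codes]
    exc = sys.exc_info()[1]
    if isinstance(exc, ClientError) and exc.response['Error']['Code'] in codes:
        return ClientError
    return type('NeverRaised', (Exception,), {})

def classify(code):
    try:
        raise ClientError(code)
    except is_error_code(['NotImplemented', 'XNotImplemented']):
        return 'not implemented'
    except ClientError:
        return 'other client error'

print(classify('NotImplemented'))  # not implemented
print(classify('AccessDenied'))    # other client error
```

This is why the non-AWS-provider cases above can be tolerated selectively: the first clause only matches the listed codes, and everything else falls through to the generic handler.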
def paginated_list(s3_client, **pagination_params):
pg = s3_client.get_paginator('list_objects_v2')
for page in pg.paginate(**pagination_params):
@@ -714,9 +821,9 @@
name = module.params.get("name")
try:
bucket_is_present = bucket_exists(s3_client, name)
- except EndpointConnectionError as e:
+ except botocore.exceptions.EndpointConnectionError as e:
module.fail_json_aws(e, msg="Invalid endpoint provided: %s" % to_text(e))
- except (BotoCoreError, ClientError) as e:
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(e, msg="Failed to check bucket presence")
if not bucket_is_present:
@@ -744,15 +851,15 @@
),
errors=resp['Errors'], response=resp
)
- except (BotoCoreError, ClientError) as e:
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(e, msg="Failed while deleting bucket")
try:
delete_bucket(s3_client, name)
s3_client.get_waiter('bucket_not_exists').wait(Bucket=name, WaiterConfig=dict(Delay=5, MaxAttempts=60))
- except WaiterError as e:
+ except botocore.exceptions.WaiterError as e:
module.fail_json_aws(e, msg='An error occurred waiting for the bucket to be deleted.')
- except (BotoCoreError, ClientError) as e:
+ except (botocore.exceptions.BotoCoreError, botocore.exceptions.ClientError) as e:
module.fail_json_aws(e, msg="Failed to delete bucket")
module.exit_json(changed=True)
@@ -809,7 +916,9 @@
ignore_public_acls=dict(type='bool', default=False),
block_public_policy=dict(type='bool', default=False),
restrict_public_buckets=dict(type='bool', default=False))),
- delete_public_access=dict(type='bool', default=False)
+ delete_public_access=dict(type='bool', default=False),
+ object_ownership=dict(type='str', choices=['BucketOwnerPreferred', 'ObjectWriter']),
+ delete_object_ownership=dict(type='bool', default=False),
)
required_by = dict(
@@ -817,7 +926,8 @@
)
mutually_exclusive = [
- ['public_access', 'delete_public_access']
+ ['public_access', 'delete_public_access'],
+ ['delete_object_ownership', 'object_ownership']
]
module = AnsibleAWSModule(
@@ -825,6 +935,7 @@
)
region, ec2_url, aws_connect_kwargs = get_aws_connection_info(module, boto3=True)
+ validate_bucket_name(module, module.params["name"])
if region in ('us-east-1', '', None):
# default to US Standard region
@@ -852,15 +963,16 @@
s3_client = get_s3_client(module, aws_connect_kwargs, location, ceph, s3_url)
if s3_client is None: # this should never happen
- module.fail_json(msg='Unknown error, failed to create s3 connection, no information from boto.')
+ module.fail_json(msg='Unknown error, failed to create s3 connection, no information available.')
state = module.params.get("state")
encryption = module.params.get("encryption")
encryption_key_id = module.params.get("encryption_key_id")
+ delete_object_ownership = module.params.get('delete_object_ownership')
+ object_ownership = module.params.get('object_ownership')
- if not hasattr(s3_client, "get_bucket_encryption"):
- if encryption is not None:
- module.fail_json(msg="Using bucket encryption requires botocore version >= 1.7.41")
+ if delete_object_ownership or object_ownership:
+ module.require_botocore_at_least('1.18.11', reason='to manipulate bucket ownership controls')
# Parameter validation
if encryption_key_id is not None and encryption != 'aws:kms':
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/module_utils/cloud.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/module_utils/cloud.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/module_utils/cloud.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/module_utils/cloud.py 2021-11-12 18:13:53.000000000 +0000
@@ -1,221 +1,206 @@
+# Copyright (c) 2021 Ansible Project
#
-# (c) 2016 Allen Sanabria,
-#
-# This file is part of Ansible
-#
-# Ansible is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# Ansible is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with Ansible. If not, see .
+# This code is part of Ansible, but is an independent component.
+# This particular file snippet, and this file snippet only, is BSD licensed.
+# Modules you write using this snippet, which is embedded dynamically by Ansible
+# still belong to the author of the module, and may assign their own license
+# to the complete work.
+#
+# Redistribution and use in source and binary forms, with or without modification,
+# are permitted provided that the following conditions are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright notice,
+# this list of conditions and the following disclaimer in the documentation
+# and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
+
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
-"""
-This module adds shared support for generic cloud modules
-
-In order to use this module, include it as part of a custom
-module as shown below.
-
-from ansible.module_utils.cloud import CloudRetry
-
-The 'cloud' module provides the following common classes:
-
- * CloudRetry
- - The base class to be used by other cloud providers, in order to
- provide a backoff/retry decorator based on status codes.
-
- - Example using the AWSRetry class which inherits from CloudRetry.
-
- @AWSRetry.exponential_backoff(retries=10, delay=3)
- get_ec2_security_group_ids_from_names()
-
- @AWSRetry.jittered_backoff()
- get_ec2_security_group_ids_from_names()
-
-"""
-import random
-from functools import wraps
-import syslog
import time
+import functools
+import random
-def _exponential_backoff(retries=10, delay=2, backoff=2, max_delay=60):
- """ Customizable exponential backoff strategy.
+class BackoffIterator:
+    """Iterate over sleep values based on an exponential or jittered back-off algorithm.
Args:
- retries (int): Maximum number of times to retry a request.
- delay (float): Initial (base) delay.
- backoff (float): base of the exponent to use for exponential
- backoff.
- max_delay (int): Optional. If provided each delay generated is capped
- at this amount. Defaults to 60 seconds.
- Returns:
- Callable that returns a generator. This generator yields durations in
- seconds to be used as delays for an exponential backoff strategy.
- Usage:
- >>> backoff = _exponential_backoff()
- >>> backoff
-
- >>> list(backoff())
- [2, 4, 8, 16, 32, 60, 60, 60, 60, 60]
+ delay (int or float): initial delay.
+ backoff (int or float): backoff multiplier e.g. value of 2 will double the delay each retry.
+ max_delay (int or None): maximum amount of time to wait between retries.
+        jitter (bool): if set to true, add jitter to the generated value.
"""
- def backoff_gen():
- for retry in range(0, retries):
- sleep = delay * backoff ** retry
- yield sleep if max_delay is None else min(sleep, max_delay)
- return backoff_gen
-
-def _full_jitter_backoff(retries=10, delay=3, max_delay=60, _random=random):
- """ Implements the "Full Jitter" backoff strategy described here
- https://www.awsarchitectureblog.com/2015/03/backoff.html
- Args:
- retries (int): Maximum number of times to retry a request.
- delay (float): Approximate number of seconds to sleep for the first
- retry.
- max_delay (int): The maximum number of seconds to sleep for any retry.
- _random (random.Random or None): Makes this generator testable by
- allowing developers to explicitly pass in the a seeded Random.
- Returns:
- Callable that returns a generator. This generator yields durations in
- seconds to be used as delays for a full jitter backoff strategy.
- Usage:
- >>> backoff = _full_jitter_backoff(retries=5)
- >>> backoff
-
- >>> list(backoff())
- [3, 6, 5, 23, 38]
- >>> list(backoff())
- [2, 1, 6, 6, 31]
- """
- def backoff_gen():
- for retry in range(0, retries):
- yield _random.randint(0, min(max_delay, delay * 2 ** retry))
- return backoff_gen
+ def __init__(self, delay, backoff, max_delay=None, jitter=False):
+ self.delay = delay
+ self.backoff = backoff
+ self.max_delay = max_delay
+ self.jitter = jitter
+
+ def __iter__(self):
+ self.current_delay = self.delay
+ return self
+
+ def __next__(self):
+ return_value = self.current_delay if self.max_delay is None else min(self.current_delay, self.max_delay)
+ if self.jitter:
+ return_value = random.uniform(0.0, return_value)
+ self.current_delay *= self.backoff
+ return return_value
+
+
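The `BackoffIterator` added above can be exercised on its own. This sketch re-creates the class locally so the example is self-contained; with `jitter=False` the sequence is a classic capped exponential, and with `jitter=True` each value is drawn uniformly from `[0, capped_delay]`:

```python
import itertools
import random

class BackoffIterator:
    """Local copy of the iterator sketched above, for demonstration."""
    def __init__(self, delay, backoff, max_delay=None, jitter=False):
        self.delay = delay
        self.backoff = backoff
        self.max_delay = max_delay
        self.jitter = jitter

    def __iter__(self):
        self.current_delay = self.delay
        return self

    def __next__(self):
        # Cap at max_delay, optionally jitter, then grow for next time.
        value = self.current_delay if self.max_delay is None else min(self.current_delay, self.max_delay)
        if self.jitter:
            value = random.uniform(0.0, value)
        self.current_delay *= self.backoff
        return value

delays = list(itertools.islice(BackoffIterator(delay=3, backoff=2, max_delay=60), 7))
print(delays)  # [3, 6, 12, 24, 48, 60, 60]
```

Note the iterator is infinite by design; the retry loop, not the iterator, decides when to give up.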
+def _retry_func(func, sleep_time_generator, retries, catch_extra_error_codes, found_f, status_code_from_except_f, base_class):
+ counter = 0
+ for sleep_time in sleep_time_generator:
+ try:
+ return func()
+ except Exception as exc:
+ counter += 1
+ if counter == retries:
+ raise
+ if base_class and not isinstance(exc, base_class):
+ raise
+ status_code = status_code_from_except_f(exc)
+ if found_f(status_code, catch_extra_error_codes):
+ time.sleep(sleep_time)
+ else:
+ raise
-class CloudRetry(object):
- """ CloudRetry can be used by any cloud provider, in order to implement a
- backoff algorithm/retry effect based on Status Code from Exceptions.
+class CloudRetry:
+ """
+ The base class to be used by other cloud providers to provide a backoff/retry decorator based on status codes.
"""
- # This is the base class of the exception.
- # AWS Example botocore.exceptions.ClientError
- # NoneType can't be raised (it's not a subclass of Exception) so would never be caught by an except.
+
base_class = type(None)
@staticmethod
def status_code_from_exception(error):
- """ Return the status code from the exception object
+ """
+ Returns the Error 'code' from an exception.
Args:
- error (object): The exception itself.
+ error: The Exception from which the error code is to be extracted.
+                error will be an instance of cls.base_class.
"""
- pass
+ raise NotImplementedError()
@staticmethod
def found(response_code, catch_extra_error_codes=None):
- """ Return True if the Response Code to retry on was found.
- Args:
- response_code (str): This is the Response Code that is being matched against.
- """
- pass
+ def _is_iterable():
+ try:
+ it = iter(catch_extra_error_codes)
+ except TypeError:
+ # not iterable
+ return False
+ else:
+ # iterable
+ return True
+ return _is_iterable() and response_code in catch_extra_error_codes
@classmethod
- def _backoff(cls, backoff_strategy, catch_extra_error_codes=None):
- """ Retry calling the Cloud decorated function using the provided
- backoff strategy.
- Args:
- backoff_strategy (callable): Callable that returns a generator. The
- generator should yield sleep times for each retry of the decorated
- function.
- """
- def deco(f):
- @wraps(f)
- def retry_func(*args, **kwargs):
- for delay in backoff_strategy():
- try:
- return f(*args, **kwargs)
- except Exception as e:
- if isinstance(e, cls.base_class):
- response_code = cls.status_code_from_exception(e)
- if cls.found(response_code, catch_extra_error_codes):
- msg = "{0}: Retrying in {1} seconds...".format(str(e), delay)
- syslog.syslog(syslog.LOG_INFO, msg)
- time.sleep(delay)
- else:
- # Return original exception if exception is not a ClientError
- raise e
- else:
- # Return original exception if exception is not a ClientError
- raise e
- return f(*args, **kwargs)
-
- return retry_func # true decorator
-
- return deco
+ def base_decorator(cls, retries, found, status_code_from_exception, catch_extra_error_codes, sleep_time_generator):
+ def retry_decorator(func):
+ @functools.wraps(func)
+ def _retry_wrapper(*args, **kwargs):
+ partial_func = functools.partial(func, *args, **kwargs)
+ return _retry_func(
+ func=partial_func,
+ sleep_time_generator=sleep_time_generator,
+ retries=retries,
+ catch_extra_error_codes=catch_extra_error_codes,
+ found_f=found,
+ status_code_from_except_f=status_code_from_exception,
+ base_class=cls.base_class,
+ )
+ return _retry_wrapper
+ return retry_decorator
@classmethod
def exponential_backoff(cls, retries=10, delay=3, backoff=2, max_delay=60, catch_extra_error_codes=None):
- """
- Retry calling the Cloud decorated function using an exponential backoff.
-
- Kwargs:
+ """Wrap a callable with retry behavior.
+ Args:
retries (int): Number of times to retry a failed request before giving up
default=10
delay (int or float): Initial delay between retries in seconds
default=3
- backoff (int or float): backoff multiplier e.g. value of 2 will
- double the delay each retry
- default=1.1
+ backoff (int or float): backoff multiplier e.g. value of 2 will double the delay each retry
+ default=2
max_delay (int or None): maximum amount of time to wait between retries.
default=60
- """
- return cls._backoff(_exponential_backoff(
- retries=retries, delay=delay, backoff=backoff, max_delay=max_delay), catch_extra_error_codes)
+ catch_extra_error_codes: Additional error messages to catch, in addition to those which may be defined by a subclass of CloudRetry
+ default=None
+ Returns:
+            Callable: A retry decorator that calls the decorated function using an exponential backoff.
+ """
+ sleep_time_generator = BackoffIterator(delay=delay, backoff=backoff, max_delay=max_delay)
+ return cls.base_decorator(
+ retries=retries,
+ found=cls.found,
+ status_code_from_exception=cls.status_code_from_exception,
+ catch_extra_error_codes=catch_extra_error_codes,
+ sleep_time_generator=sleep_time_generator,
+ )
@classmethod
- def jittered_backoff(cls, retries=10, delay=3, max_delay=60, catch_extra_error_codes=None):
- """
- Retry calling the Cloud decorated function using a jittered backoff
- strategy. More on this strategy here:
-
- https://www.awsarchitectureblog.com/2015/03/backoff.html
-
- Kwargs:
+ def jittered_backoff(cls, retries=10, delay=3, backoff=2.0, max_delay=60, catch_extra_error_codes=None):
+ """Wrap a callable with retry behavior.
+ Args:
retries (int): Number of times to retry a failed request before giving up
default=10
- delay (int): Initial delay between retries in seconds
+ delay (int or float): Initial delay between retries in seconds
default=3
- max_delay (int): maximum amount of time to wait between retries.
+ backoff (int or float): backoff multiplier e.g. value of 2 will double the delay each retry
+ default=2.0
+ max_delay (int or None): maximum amount of time to wait between retries.
default=60
- """
- return cls._backoff(_full_jitter_backoff(
- retries=retries, delay=delay, max_delay=max_delay), catch_extra_error_codes)
+ catch_extra_error_codes: Additional error messages to catch, in addition to those which may be defined by a subclass of CloudRetry
+ default=None
+ Returns:
+            Callable: A retry decorator that calls the decorated function using a jittered backoff strategy.
+ """
+ sleep_time_generator = BackoffIterator(delay=delay, backoff=backoff, max_delay=max_delay, jitter=True)
+ return cls.base_decorator(
+ retries=retries,
+ found=cls.found,
+ status_code_from_exception=cls.status_code_from_exception,
+ catch_extra_error_codes=catch_extra_error_codes,
+ sleep_time_generator=sleep_time_generator,
+ )
@classmethod
def backoff(cls, tries=10, delay=3, backoff=1.1, catch_extra_error_codes=None):
"""
- Retry calling the Cloud decorated function using an exponential backoff.
-
- Compatibility for the original implementation of CloudRetry.backoff that
- did not provide configurable backoff strategies. Developers should use
- CloudRetry.exponential_backoff instead.
-
- Kwargs:
- tries (int): Number of times to try (not retry) before giving up
+        Wrap a callable with retry behavior.
+        This method is deprecated and will be removed in release 4.0.0; use
+        CloudRetry.exponential_backoff instead.
+ Args:
+            tries (int): Number of times to try (not retry) a request before giving up
default=10
delay (int or float): Initial delay between retries in seconds
default=3
- backoff (int or float): backoff multiplier e.g. value of 2 will
- double the delay each retry
+ backoff (int or float): backoff multiplier e.g. value of 2 will double the delay each retry
default=1.1
+ catch_extra_error_codes: Additional error messages to catch, in addition to those which may be defined by a subclass of CloudRetry
+ default=None
+ Returns:
+            Callable: A retry decorator that calls the decorated function using an exponential backoff.
"""
return cls.exponential_backoff(
- retries=tries - 1, delay=delay, backoff=backoff, max_delay=None, catch_extra_error_codes=catch_extra_error_codes)
+ retries=tries,
+ delay=delay,
+ backoff=backoff,
+ max_delay=None,
+ catch_extra_error_codes=catch_extra_error_codes,
+ )
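The refactored `CloudRetry` above splits responsibilities: subclasses name a `base_class` and extract a status code, while `base_decorator` wires those into a generic retry loop. A condensed, self-contained sketch of that pattern; `FakeError` and `FakeRetry` are illustrative (the collection's real subclass is `AWSRetry` over botocore's `ClientError`), and the loop below compresses `base_decorator`/`_retry_func` rather than copying them:

```python
import functools
import time

class FakeError(Exception):
    """Illustrative retryable error carrying a status code."""
    def __init__(self, code):
        super().__init__(code)
        self.code = code

class CloudRetry:
    base_class = type(None)  # NoneType is never raised, so nothing matches by default

    @staticmethod
    def status_code_from_exception(error):
        raise NotImplementedError()

    @staticmethod
    def found(response_code, catch_extra_error_codes=None):
        return catch_extra_error_codes is not None and response_code in catch_extra_error_codes

    @classmethod
    def exponential_backoff(cls, retries=3, delay=0, backoff=2, catch_extra_error_codes=None):
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                current = delay
                for attempt in range(retries):
                    try:
                        return func(*args, **kwargs)
                    except Exception as exc:
                        # Re-raise on the last attempt, on foreign exception
                        # types, and on non-retryable status codes.
                        if attempt == retries - 1 or not isinstance(exc, cls.base_class):
                            raise
                        if not cls.found(cls.status_code_from_exception(exc), catch_extra_error_codes):
                            raise
                        time.sleep(current)
                        current *= backoff
            return wrapper
        return decorator

class FakeRetry(CloudRetry):
    base_class = FakeError

    @staticmethod
    def status_code_from_exception(error):
        return error.code

calls = {'n': 0}

@FakeRetry.exponential_backoff(retries=5, delay=0, catch_extra_error_codes=['Throttling'])
def flaky():
    calls['n'] += 1
    if calls['n'] < 3:
        raise FakeError('Throttling')
    return 'ok'

print(flaky())     # 'ok' after two retried Throttling errors
print(calls['n'])  # 3
```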
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/module_utils/compat/_ipaddress.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/module_utils/compat/_ipaddress.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/module_utils/compat/_ipaddress.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/module_utils/compat/_ipaddress.py 1970-01-01 00:00:00.000000000 +0000
@@ -1,2479 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# This code is part of Ansible, but is an independent component.
-# This particular file, and this file only, is based on
-# Lib/ipaddress.py of cpython
-# It is licensed under the PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
-#
-# 1. This LICENSE AGREEMENT is between the Python Software Foundation
-# ("PSF"), and the Individual or Organization ("Licensee") accessing and
-# otherwise using this software ("Python") in source or binary form and
-# its associated documentation.
-#
-# 2. Subject to the terms and conditions of this License Agreement, PSF hereby
-# grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
-# analyze, test, perform and/or display publicly, prepare derivative works,
-# distribute, and otherwise use Python alone or in any derivative version,
-# provided, however, that PSF's License Agreement and PSF's notice of copyright,
-# i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
-# 2011, 2012, 2013, 2014, 2015 Python Software Foundation; All Rights Reserved"
-# are retained in Python alone or in any derivative version prepared by Licensee.
-#
-# 3. In the event Licensee prepares a derivative work that is based on
-# or incorporates Python or any part thereof, and wants to make
-# the derivative work available to others as provided herein, then
-# Licensee hereby agrees to include in any such work a brief summary of
-# the changes made to Python.
-#
-# 4. PSF is making Python available to Licensee on an "AS IS"
-# basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
-# IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
-# DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
-# FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
-# INFRINGE ANY THIRD PARTY RIGHTS.
-#
-# 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
-# FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
-# A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,
-# OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
-#
-# 6. This License Agreement will automatically terminate upon a material
-# breach of its terms and conditions.
-#
-# 7. Nothing in this License Agreement shall be deemed to create any
-# relationship of agency, partnership, or joint venture between PSF and
-# Licensee. This License Agreement does not grant permission to use PSF
-# trademarks or trade name in a trademark sense to endorse or promote
-# products or services of Licensee, or any third party.
-#
-# 8. By copying, installing or otherwise using Python, Licensee
-# agrees to be bound by the terms and conditions of this License
-# Agreement.
-
-# Copyright 2007 Google Inc.
-# Licensed to PSF under a Contributor Agreement.
-
-"""A fast, lightweight IPv4/IPv6 manipulation library in Python.
-
-This library is used to create/poke/manipulate IPv4 and IPv6 addresses
-and networks.
-
-This library is intended for internal use within the amazon.aws collection only.
-"""
-
-from __future__ import unicode_literals
-from __future__ import absolute_import, division, print_function
-__metaclass__ = type
-
-
-import itertools
-import struct
-
-
-# The following makes it easier for us to script updates of the bundled code and is not part of
-# upstream
-_BUNDLED_METADATA = {"pypi_name": "ipaddress", "version": "1.0.22"}
-
-__version__ = '1.0.22'
-
-# Compatibility functions
-_compat_int_types = (int,)
-try:
- _compat_int_types = (int, long)
-except NameError:
- pass
-try:
- _compat_str = unicode
-except NameError:
- _compat_str = str
- assert bytes != str
-if b'\0'[0] == 0: # Python 3 semantics
- def _compat_bytes_to_byte_vals(byt):
- return byt
-else:
- def _compat_bytes_to_byte_vals(byt):
- return [struct.unpack(b'!B', b)[0] for b in byt]
-try:
- _compat_int_from_byte_vals = int.from_bytes
-except AttributeError:
- def _compat_int_from_byte_vals(bytvals, endianess):
- assert endianess == 'big'
- res = 0
- for bv in bytvals:
- assert isinstance(bv, _compat_int_types)
- res = (res << 8) + bv
- return res
-
-
-def _compat_to_bytes(intval, length, endianess):
- assert isinstance(intval, _compat_int_types)
- assert endianess == 'big'
- if length == 4:
- if intval < 0 or intval >= 2 ** 32:
- raise struct.error("integer out of range for 'I' format code")
- return struct.pack(b'!I', intval)
- elif length == 16:
- if intval < 0 or intval >= 2 ** 128:
- raise struct.error("integer out of range for 'QQ' format code")
- return struct.pack(b'!QQ', intval >> 64, intval & 0xffffffffffffffff)
- else:
- raise NotImplementedError()
-
-
-if hasattr(int, 'bit_length'):
- # Not int.bit_length , since that won't work in 2.7 where long exists
- def _compat_bit_length(i):
- return i.bit_length()
-else:
- def _compat_bit_length(i):
- for res in itertools.count():
- if i >> res == 0:
- return res
-
-
-def _compat_range(start, end, step=1):
- assert step > 0
- i = start
- while i < end:
- yield i
- i += step
-
-
-class _TotalOrderingMixin(object):
- __slots__ = ()
-
- # Helper that derives the other comparison operations from
- # __lt__ and __eq__
- # We avoid functools.total_ordering because it doesn't handle
- # NotImplemented correctly yet (http://bugs.python.org/issue10042)
- def __eq__(self, other):
- raise NotImplementedError
-
- def __ne__(self, other):
- equal = self.__eq__(other)
- if equal is NotImplemented:
- return NotImplemented
- return not equal
-
- def __lt__(self, other):
- raise NotImplementedError
-
- def __le__(self, other):
- less = self.__lt__(other)
- if less is NotImplemented or not less:
- return self.__eq__(other)
- return less
-
- def __gt__(self, other):
- less = self.__lt__(other)
- if less is NotImplemented:
- return NotImplemented
- equal = self.__eq__(other)
- if equal is NotImplemented:
- return NotImplemented
- return not (less or equal)
-
- def __ge__(self, other):
- less = self.__lt__(other)
- if less is NotImplemented:
- return NotImplemented
- return not less
-
-
-IPV4LENGTH = 32
-IPV6LENGTH = 128
-
-
-class AddressValueError(ValueError):
- """A Value Error related to the address."""
-
-
-class NetmaskValueError(ValueError):
- """A Value Error related to the netmask."""
-
-
-def ip_address(address):
- """Take an IP string/int and return an object of the correct type.
-
- Args:
- address: A string or integer, the IP address. Either IPv4 or
- IPv6 addresses may be supplied; integers less than 2**32 will
- be considered to be IPv4 by default.
-
- Returns:
- An IPv4Address or IPv6Address object.
-
- Raises:
- ValueError: if the *address* passed isn't either a v4 or a v6
- address
-
- """
- try:
- return IPv4Address(address)
- except (AddressValueError, NetmaskValueError):
- pass
-
- try:
- return IPv6Address(address)
- except (AddressValueError, NetmaskValueError):
- pass
-
- if isinstance(address, bytes):
- raise AddressValueError(
- '%r does not appear to be an IPv4 or IPv6 address. '
- 'Did you pass in a bytes (str in Python 2) instead of'
- ' a unicode object?' % address)
-
- raise ValueError('%r does not appear to be an IPv4 or IPv6 address' %
- address)
-
-
-def ip_network(address, strict=True):
- """Take an IP string/int and return an object of the correct type.
-
- Args:
- address: A string or integer, the IP network. Either IPv4 or
- IPv6 networks may be supplied; integers less than 2**32 will
- be considered to be IPv4 by default.
-
- Returns:
- An IPv4Network or IPv6Network object.
-
- Raises:
- ValueError: if the string passed isn't either a v4 or a v6
- address. Or if the network has host bits set.
-
- """
- try:
- return IPv4Network(address, strict)
- except (AddressValueError, NetmaskValueError):
- pass
-
- try:
- return IPv6Network(address, strict)
- except (AddressValueError, NetmaskValueError):
- pass
-
- if isinstance(address, bytes):
- raise AddressValueError(
- '%r does not appear to be an IPv4 or IPv6 network. '
- 'Did you pass in a bytes (str in Python 2) instead of'
- ' a unicode object?' % address)
-
- raise ValueError('%r does not appear to be an IPv4 or IPv6 network' %
- address)
-
-
-def ip_interface(address):
- """Take an IP string/int and return an object of the correct type.
-
- Args:
- address: A string or integer, the IP address. Either IPv4 or
- IPv6 addresses may be supplied; integers less than 2**32 will
- be considered to be IPv4 by default.
-
- Returns:
- An IPv4Interface or IPv6Interface object.
-
- Raises:
- ValueError: if the string passed isn't either a v4 or a v6
- address.
-
- Notes:
- The IPv?Interface classes describe an Address on a particular
- Network, so they're basically a combination of both the Address
- and Network classes.
-
- """
- try:
- return IPv4Interface(address)
- except (AddressValueError, NetmaskValueError):
- pass
-
- try:
- return IPv6Interface(address)
- except (AddressValueError, NetmaskValueError):
- pass
-
- raise ValueError('%r does not appear to be an IPv4 or IPv6 interface' %
- address)
-
-
-def v4_int_to_packed(address):
- """Represent an address as 4 packed bytes in network (big-endian) order.
-
- Args:
- address: An integer representation of an IPv4 IP address.
-
- Returns:
- The integer address packed as 4 bytes in network (big-endian) order.
-
- Raises:
- ValueError: If the integer is negative or too large to be an
- IPv4 IP address.
-
- """
- try:
- return _compat_to_bytes(address, 4, 'big')
- except (struct.error, OverflowError):
- raise ValueError("Address negative or too large for IPv4")
-
-
-def v6_int_to_packed(address):
- """Represent an address as 16 packed bytes in network (big-endian) order.
-
- Args:
- address: An integer representation of an IPv6 IP address.
-
- Returns:
- The integer address packed as 16 bytes in network (big-endian) order.
-
- """
- try:
- return _compat_to_bytes(address, 16, 'big')
- except (struct.error, OverflowError):
- raise ValueError("Address negative or too large for IPv6")
-
-
-def _split_optional_netmask(address):
- """Helper to split the netmask and raise AddressValueError if needed"""
- addr = _compat_str(address).split('/')
- if len(addr) > 2:
- raise AddressValueError("Only one '/' permitted in %r" % address)
- return addr
-
-
-def _find_address_range(addresses):
- """Find a sequence of sorted deduplicated IPv#Address.
-
- Args:
- addresses: a list of IPv#Address objects.
-
- Yields:
- A tuple containing the first and last IP addresses in the sequence.
-
- """
- it = iter(addresses)
- first = last = next(it) # pylint: disable=stop-iteration-return
- for ip in it:
- if ip._ip != last._ip + 1:
- yield first, last
- first = ip
- last = ip
- yield first, last
-
-
-def _count_righthand_zero_bits(number, bits):
- """Count the number of zero bits on the right hand side.
-
- Args:
- number: an integer.
- bits: maximum number of bits to count.
-
- Returns:
- The number of zero bits on the right hand side of the number.
-
- """
- if number == 0:
- return bits
- return min(bits, _compat_bit_length(~number & (number - 1)))
-
-
-def summarize_address_range(first, last):
- """Summarize a network range given the first and last IP addresses.
-
- Example:
- >>> list(summarize_address_range(IPv4Address('192.0.2.0'),
- ... IPv4Address('192.0.2.130')))
- ... #doctest: +NORMALIZE_WHITESPACE
- [IPv4Network('192.0.2.0/25'), IPv4Network('192.0.2.128/31'),
- IPv4Network('192.0.2.130/32')]
-
- Args:
- first: the first IPv4Address or IPv6Address in the range.
- last: the last IPv4Address or IPv6Address in the range.
-
- Returns:
- An iterator of the summarized IPv(4|6) network objects.
-
- Raise:
- TypeError:
- If the first and last objects are not IP addresses.
- If the first and last objects are not the same version.
- ValueError:
- If the last object is not greater than the first.
- If the version of the first address is not 4 or 6.
-
- """
- if (not (isinstance(first, _BaseAddress) and
- isinstance(last, _BaseAddress))):
- raise TypeError('first and last must be IP addresses, not networks')
- if first.version != last.version:
- raise TypeError("%s and %s are not of the same version" % (
- first, last))
- if first > last:
- raise ValueError('last IP address must be greater than first')
-
- if first.version == 4:
- ip = IPv4Network
- elif first.version == 6:
- ip = IPv6Network
- else:
- raise ValueError('unknown IP version')
-
- ip_bits = first._max_prefixlen
- first_int = first._ip
- last_int = last._ip
- while first_int <= last_int:
- nbits = min(_count_righthand_zero_bits(first_int, ip_bits),
- _compat_bit_length(last_int - first_int + 1) - 1)
- net = ip((first_int, ip_bits - nbits))
- yield net
- first_int += 1 << nbits
- if first_int - 1 == ip._ALL_ONES:
- break
-
-
-def _collapse_addresses_internal(addresses):
- """Loops through the addresses, collapsing concurrent netblocks.
-
- Example:
-
- ip1 = IPv4Network('192.0.2.0/26')
- ip2 = IPv4Network('192.0.2.64/26')
- ip3 = IPv4Network('192.0.2.128/26')
- ip4 = IPv4Network('192.0.2.192/26')
-
- _collapse_addresses_internal([ip1, ip2, ip3, ip4]) ->
- [IPv4Network('192.0.2.0/24')]
-
- This shouldn't be called directly; it is called via
- collapse_addresses([]).
-
- Args:
- addresses: A list of IPv4Network's or IPv6Network's
-
- Returns:
- A list of IPv4Network's or IPv6Network's depending on what we were
- passed.
-
- """
- # First merge
- to_merge = list(addresses)
- subnets = {}
- while to_merge:
- net = to_merge.pop()
- supernet = net.supernet()
- existing = subnets.get(supernet)
- if existing is None:
- subnets[supernet] = net
- elif existing != net:
- # Merge consecutive subnets
- del subnets[supernet]
- to_merge.append(supernet)
- # Then iterate over resulting networks, skipping subsumed subnets
- last = None
- for net in sorted(subnets.values()):
- if last is not None:
- # Since they are sorted,
- # last.network_address <= net.network_address is a given.
- if last.broadcast_address >= net.broadcast_address:
- continue
- yield net
- last = net
-
-
-def collapse_addresses(addresses):
- """Collapse a list of IP objects.
-
- Example:
- collapse_addresses([IPv4Network('192.0.2.0/25'),
- IPv4Network('192.0.2.128/25')]) ->
- [IPv4Network('192.0.2.0/24')]
-
- Args:
- addresses: An iterator of IPv4Network or IPv6Network objects.
-
- Returns:
- An iterator of the collapsed IPv(4|6)Network objects.
-
- Raises:
- TypeError: If passed a list of mixed version objects.
-
- """
- addrs = []
- ips = []
- nets = []
-
- # split IP addresses and networks
- for ip in addresses:
- if isinstance(ip, _BaseAddress):
- if ips and ips[-1]._version != ip._version:
- raise TypeError("%s and %s are not of the same version" % (
- ip, ips[-1]))
- ips.append(ip)
- elif ip._prefixlen == ip._max_prefixlen:
- if ips and ips[-1]._version != ip._version:
- raise TypeError("%s and %s are not of the same version" % (
- ip, ips[-1]))
- try:
- ips.append(ip.ip)
- except AttributeError:
- ips.append(ip.network_address)
- else:
- if nets and nets[-1]._version != ip._version:
- raise TypeError("%s and %s are not of the same version" % (
- ip, nets[-1]))
- nets.append(ip)
-
- # sort and dedup
- ips = sorted(set(ips))
-
- # find consecutive address ranges in the sorted sequence and summarize them
- if ips:
- for first, last in _find_address_range(ips):
- addrs.extend(summarize_address_range(first, last))
-
- return _collapse_addresses_internal(addrs + nets)
-
-
-def get_mixed_type_key(obj):
- """Return a key suitable for sorting between networks and addresses.
-
- Address and Network objects are not sortable by default; they're
- fundamentally different so the expression
-
- IPv4Address('192.0.2.0') <= IPv4Network('192.0.2.0/24')
-
- doesn't make any sense. There are some times however, where you may wish
- to have ipaddress sort these for you anyway. If you need to do this, you
- can use this function as the key= argument to sorted().
-
- Args:
- obj: either a Network or Address object.
- Returns:
- appropriate key.
-
- """
- if isinstance(obj, _BaseNetwork):
- return obj._get_networks_key()
- elif isinstance(obj, _BaseAddress):
- return obj._get_address_key()
- return NotImplemented
-
-
-class _IPAddressBase(_TotalOrderingMixin):
-
- """The mother class."""
-
- __slots__ = ()
-
- @property
- def exploded(self):
- """Return the longhand version of the IP address as a string."""
- return self._explode_shorthand_ip_string()
-
- @property
- def compressed(self):
- """Return the shorthand version of the IP address as a string."""
- return _compat_str(self)
-
- @property
- def reverse_pointer(self):
- """The name of the reverse DNS pointer for the IP address, e.g.:
- >>> ipaddress.ip_address("127.0.0.1").reverse_pointer
- '1.0.0.127.in-addr.arpa'
- >>> ipaddress.ip_address("2001:db8::1").reverse_pointer
- '1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa'
-
- """
- return self._reverse_pointer()
-
- @property
- def version(self):
- msg = '%200s has no version specified' % (type(self),)
- raise NotImplementedError(msg)
-
- def _check_int_address(self, address):
- if address < 0:
- msg = "%d (< 0) is not permitted as an IPv%d address"
- raise AddressValueError(msg % (address, self._version))
- if address > self._ALL_ONES:
- msg = "%d (>= 2**%d) is not permitted as an IPv%d address"
- raise AddressValueError(msg % (address, self._max_prefixlen,
- self._version))
-
- def _check_packed_address(self, address, expected_len):
- address_len = len(address)
- if address_len != expected_len:
- msg = (
- '%r (len %d != %d) is not permitted as an IPv%d address. '
- 'Did you pass in a bytes (str in Python 2) instead of'
- ' a unicode object?')
- raise AddressValueError(msg % (address, address_len,
- expected_len, self._version))
-
- @classmethod
- def _ip_int_from_prefix(cls, prefixlen):
- """Turn the prefix length into a bitwise netmask
-
- Args:
- prefixlen: An integer, the prefix length.
-
- Returns:
- An integer.
-
- """
- return cls._ALL_ONES ^ (cls._ALL_ONES >> prefixlen)
-
- @classmethod
- def _prefix_from_ip_int(cls, ip_int):
- """Return prefix length from the bitwise netmask.
-
- Args:
- ip_int: An integer, the netmask in expanded bitwise format
-
- Returns:
- An integer, the prefix length.
-
- Raises:
- ValueError: If the input intermingles zeroes & ones
- """
- trailing_zeroes = _count_righthand_zero_bits(ip_int,
- cls._max_prefixlen)
- prefixlen = cls._max_prefixlen - trailing_zeroes
- leading_ones = ip_int >> trailing_zeroes
- all_ones = (1 << prefixlen) - 1
- if leading_ones != all_ones:
- byteslen = cls._max_prefixlen // 8
- details = _compat_to_bytes(ip_int, byteslen, 'big')
- msg = 'Netmask pattern %r mixes zeroes & ones'
- raise ValueError(msg % details)
- return prefixlen
-
- @classmethod
- def _report_invalid_netmask(cls, netmask_str):
- msg = '%r is not a valid netmask' % netmask_str
- raise NetmaskValueError(msg)
-
- @classmethod
- def _prefix_from_prefix_string(cls, prefixlen_str):
- """Return prefix length from a numeric string
-
- Args:
- prefixlen_str: The string to be converted
-
- Returns:
- An integer, the prefix length.
-
- Raises:
- NetmaskValueError: If the input is not a valid netmask
- """
- # int allows a leading +/- as well as surrounding whitespace,
- # so we ensure that isn't the case
- if not _BaseV4._DECIMAL_DIGITS.issuperset(prefixlen_str):
- cls._report_invalid_netmask(prefixlen_str)
- try:
- prefixlen = int(prefixlen_str)
- except ValueError:
- cls._report_invalid_netmask(prefixlen_str)
- if not (0 <= prefixlen <= cls._max_prefixlen):
- cls._report_invalid_netmask(prefixlen_str)
- return prefixlen
-
- @classmethod
- def _prefix_from_ip_string(cls, ip_str):
- """Turn a netmask/hostmask string into a prefix length
-
- Args:
- ip_str: The netmask/hostmask to be converted
-
- Returns:
- An integer, the prefix length.
-
- Raises:
- NetmaskValueError: If the input is not a valid netmask/hostmask
- """
- # Parse the netmask/hostmask like an IP address.
- try:
- ip_int = cls._ip_int_from_string(ip_str)
- except AddressValueError:
- cls._report_invalid_netmask(ip_str)
-
- # Try matching a netmask (this would be /1*0*/ as a bitwise regexp).
- # Note that the two ambiguous cases (all-ones and all-zeroes) are
- # treated as netmasks.
- try:
- return cls._prefix_from_ip_int(ip_int)
- except ValueError:
- pass
-
- # Invert the bits, and try matching a /0+1+/ hostmask instead.
- ip_int ^= cls._ALL_ONES
- try:
- return cls._prefix_from_ip_int(ip_int)
- except ValueError:
- cls._report_invalid_netmask(ip_str)
-
- def __reduce__(self):
- return self.__class__, (_compat_str(self),)
-
-
-class _BaseAddress(_IPAddressBase):
-
- """A generic IP object.
-
- This IP class contains the version independent methods which are
- used by single IP addresses.
- """
-
- __slots__ = ()
-
- def __int__(self):
- return self._ip
-
- def __eq__(self, other):
- try:
- return (self._ip == other._ip and
- self._version == other._version)
- except AttributeError:
- return NotImplemented
-
- def __lt__(self, other):
- if not isinstance(other, _IPAddressBase):
- return NotImplemented
- if not isinstance(other, _BaseAddress):
- raise TypeError('%s and %s are not of the same type' % (
- self, other))
- if self._version != other._version:
- raise TypeError('%s and %s are not of the same version' % (
- self, other))
- if self._ip != other._ip:
- return self._ip < other._ip
- return False
-
- # Shorthand for Integer addition and subtraction. This is not
- # meant to ever support addition/subtraction of addresses.
- def __add__(self, other):
- if not isinstance(other, _compat_int_types):
- return NotImplemented
- return self.__class__(int(self) + other)
-
- def __sub__(self, other):
- if not isinstance(other, _compat_int_types):
- return NotImplemented
- return self.__class__(int(self) - other)
-
- def __repr__(self):
- return '%s(%r)' % (self.__class__.__name__, _compat_str(self))
-
- def __str__(self):
- return _compat_str(self._string_from_ip_int(self._ip))
-
- def __hash__(self):
- return hash(hex(int(self._ip)))
-
- def _get_address_key(self):
- return (self._version, self)
-
- def __reduce__(self):
- return self.__class__, (self._ip,)
-
-
-class _BaseNetwork(_IPAddressBase):
-
- """A generic IP network object.
-
- This IP class contains the version independent methods which are
- used by networks.
-
- """
- def __init__(self, address):
- self._cache = {}
-
- def __repr__(self):
- return '%s(%r)' % (self.__class__.__name__, _compat_str(self))
-
- def __str__(self):
- return '%s/%d' % (self.network_address, self.prefixlen)
-
- def hosts(self):
- """Generate Iterator over usable hosts in a network.
-
- This is like __iter__ except it doesn't return the network
- or broadcast addresses.
-
- """
- network = int(self.network_address)
- broadcast = int(self.broadcast_address)
- for x in _compat_range(network + 1, broadcast):
- yield self._address_class(x)
-
- def __iter__(self):
- network = int(self.network_address)
- broadcast = int(self.broadcast_address)
- for x in _compat_range(network, broadcast + 1):
- yield self._address_class(x)
-
- def __getitem__(self, n):
- network = int(self.network_address)
- broadcast = int(self.broadcast_address)
- if n >= 0:
- if network + n > broadcast:
- raise IndexError('address out of range')
- return self._address_class(network + n)
- else:
- n += 1
- if broadcast + n < network:
- raise IndexError('address out of range')
- return self._address_class(broadcast + n)
-
- def __lt__(self, other):
- if not isinstance(other, _IPAddressBase):
- return NotImplemented
- if not isinstance(other, _BaseNetwork):
- raise TypeError('%s and %s are not of the same type' % (
- self, other))
- if self._version != other._version:
- raise TypeError('%s and %s are not of the same version' % (
- self, other))
- if self.network_address != other.network_address:
- return self.network_address < other.network_address
- if self.netmask != other.netmask:
- return self.netmask < other.netmask
- return False
-
- def __eq__(self, other):
- try:
- return (self._version == other._version and
- self.network_address == other.network_address and
- int(self.netmask) == int(other.netmask))
- except AttributeError:
- return NotImplemented
-
- def __hash__(self):
- return hash(int(self.network_address) ^ int(self.netmask))
-
- def __contains__(self, other):
- # always false if one is v4 and the other is v6.
- if self._version != other._version:
- return False
- # dealing with another network.
- if isinstance(other, _BaseNetwork):
- return False
- # dealing with another address
- else:
- # address
- return (int(self.network_address) <= int(other._ip) <=
- int(self.broadcast_address))
-
- def overlaps(self, other):
- """Tell if self is partly contained in other."""
- return self.network_address in other or (
- self.broadcast_address in other or (
- other.network_address in self or (
- other.broadcast_address in self)))
-
- @property
- def broadcast_address(self):
- x = self._cache.get('broadcast_address')
- if x is None:
- x = self._address_class(int(self.network_address) |
- int(self.hostmask))
- self._cache['broadcast_address'] = x
- return x
-
- @property
- def hostmask(self):
- x = self._cache.get('hostmask')
- if x is None:
- x = self._address_class(int(self.netmask) ^ self._ALL_ONES)
- self._cache['hostmask'] = x
- return x
-
- @property
- def with_prefixlen(self):
- return '%s/%d' % (self.network_address, self._prefixlen)
-
- @property
- def with_netmask(self):
- return '%s/%s' % (self.network_address, self.netmask)
-
- @property
- def with_hostmask(self):
- return '%s/%s' % (self.network_address, self.hostmask)
-
- @property
- def num_addresses(self):
- """Number of hosts in the current subnet."""
- return int(self.broadcast_address) - int(self.network_address) + 1
-
- @property
- def _address_class(self):
- # Returning bare address objects (rather than interfaces) allows for
- # more consistent behaviour across the network address, broadcast
- # address and individual host addresses.
- msg = '%200s has no associated address class' % (type(self),)
- raise NotImplementedError(msg)
-
- @property
- def prefixlen(self):
- return self._prefixlen
-
- def address_exclude(self, other):
- """Remove an address from a larger block.
-
- For example:
-
- addr1 = ip_network('192.0.2.0/28')
- addr2 = ip_network('192.0.2.1/32')
- list(addr1.address_exclude(addr2)) =
- [IPv4Network('192.0.2.0/32'), IPv4Network('192.0.2.2/31'),
- IPv4Network('192.0.2.4/30'), IPv4Network('192.0.2.8/29')]
-
- or IPv6:
-
- addr1 = ip_network('2001:db8::1/32')
- addr2 = ip_network('2001:db8::1/128')
- list(addr1.address_exclude(addr2)) =
- [ip_network('2001:db8::1/128'),
- ip_network('2001:db8::2/127'),
- ip_network('2001:db8::4/126'),
- ip_network('2001:db8::8/125'),
- ...
- ip_network('2001:db8:8000::/33')]
-
- Args:
- other: An IPv4Network or IPv6Network object of the same type.
-
- Returns:
- An iterator of the IPv(4|6)Network objects which is self
- minus other.
-
- Raises:
- TypeError: If self and other are of differing address
- versions, or if other is not a network object.
- ValueError: If other is not completely contained by self.
-
- """
- if not self._version == other._version:
- raise TypeError("%s and %s are not of the same version" % (
- self, other))
-
- if not isinstance(other, _BaseNetwork):
- raise TypeError("%s is not a network object" % other)
-
- if not other.subnet_of(self):
- raise ValueError('%s not contained in %s' % (other, self))
- if other == self:
- return
-
- # Make sure we're comparing the network of other.
- other = other.__class__('%s/%s' % (other.network_address,
- other.prefixlen))
-
- s1, s2 = self.subnets()
- while s1 != other and s2 != other:
- if other.subnet_of(s1):
- yield s2
- s1, s2 = s1.subnets()
- elif other.subnet_of(s2):
- yield s1
- s1, s2 = s2.subnets()
- else:
- # If we got here, there's a bug somewhere.
- raise AssertionError('Error performing exclusion: '
- 's1: %s s2: %s other: %s' %
- (s1, s2, other))
- if s1 == other:
- yield s2
- elif s2 == other:
- yield s1
- else:
- # If we got here, there's a bug somewhere.
- raise AssertionError('Error performing exclusion: '
- 's1: %s s2: %s other: %s' %
- (s1, s2, other))
-
- def compare_networks(self, other):
- """Compare two IP objects.
-
- This is only concerned about the comparison of the integer
- representation of the network addresses. This means that the
- host bits aren't considered at all in this method. If you want
- to compare host bits, you can easily enough do a
- 'HostA._ip < HostB._ip'
-
- Args:
- other: An IP object.
-
- Returns:
- If the IP versions of self and other are the same, returns:
-
- -1 if self < other:
- eg: IPv4Network('192.0.2.0/25') < IPv4Network('192.0.2.128/25')
- IPv6Network('2001:db8::1000/124') <
- IPv6Network('2001:db8::2000/124')
- 0 if self == other
- eg: IPv4Network('192.0.2.0/24') == IPv4Network('192.0.2.0/24')
- IPv6Network('2001:db8::1000/124') ==
- IPv6Network('2001:db8::1000/124')
- 1 if self > other
- eg: IPv4Network('192.0.2.128/25') > IPv4Network('192.0.2.0/25')
- IPv6Network('2001:db8::2000/124') >
- IPv6Network('2001:db8::1000/124')
-
- Raises:
- TypeError if the IP versions are different.
-
- """
- # does this need to raise a ValueError?
- if self._version != other._version:
- raise TypeError('%s and %s are not of the same type' % (
- self, other))
- # self._version == other._version below here:
- if self.network_address < other.network_address:
- return -1
- if self.network_address > other.network_address:
- return 1
- # self.network_address == other.network_address below here:
- if self.netmask < other.netmask:
- return -1
- if self.netmask > other.netmask:
- return 1
- return 0
-
- def _get_networks_key(self):
- """Network-only key function.
-
- Returns an object that identifies this address' network and
- netmask. This function is a suitable "key" argument for sorted()
- and list.sort().
-
- """
- return (self._version, self.network_address, self.netmask)
-
- def subnets(self, prefixlen_diff=1, new_prefix=None):
- """The subnets which join to make the current subnet.
-
- In the case that self contains only one IP
- (self._prefixlen == 32 for IPv4 or self._prefixlen == 128
- for IPv6), yield an iterator with just ourself.
-
- Args:
- prefixlen_diff: An integer, the amount the prefix length
- should be increased by. This should not be set if
- new_prefix is also set.
- new_prefix: The desired new prefix length. This must be a
- larger number (smaller prefix) than the existing prefix.
- This should not be set if prefixlen_diff is also set.
-
- Returns:
- An iterator of IPv(4|6) objects.
-
- Raises:
- ValueError: The prefixlen_diff is too small or too large.
- OR
- prefixlen_diff and new_prefix are both set or new_prefix
- is a smaller number than the current prefix (smaller
- number means a larger network)
-
- """
- if self._prefixlen == self._max_prefixlen:
- yield self
- return
-
- if new_prefix is not None:
- if new_prefix < self._prefixlen:
- raise ValueError('new prefix must be longer')
- if prefixlen_diff != 1:
- raise ValueError('cannot set prefixlen_diff and new_prefix')
- prefixlen_diff = new_prefix - self._prefixlen
-
- if prefixlen_diff < 0:
- raise ValueError('prefix length diff must be > 0')
- new_prefixlen = self._prefixlen + prefixlen_diff
-
- if new_prefixlen > self._max_prefixlen:
- raise ValueError(
- 'prefix length diff %d is invalid for netblock %s' % (
- new_prefixlen, self))
-
- start = int(self.network_address)
- end = int(self.broadcast_address) + 1
- step = (int(self.hostmask) + 1) >> prefixlen_diff
- for new_addr in _compat_range(start, end, step):
- current = self.__class__((new_addr, new_prefixlen))
- yield current
-
- def supernet(self, prefixlen_diff=1, new_prefix=None):
- """The supernet containing the current network.
-
- Args:
- prefixlen_diff: An integer, the amount the prefix length of
- the network should be decreased by. For example, given a
- /24 network and a prefixlen_diff of 3, a supernet with a
- /21 netmask is returned.
-
- Returns:
- An IPv4 network object.
-
- Raises:
- ValueError: If self.prefixlen - prefixlen_diff < 0. I.e., you have
- a negative prefix length.
- OR
- If prefixlen_diff and new_prefix are both set or new_prefix is a
- larger number than the current prefix (larger number means a
- smaller network)
-
- """
- if self._prefixlen == 0:
- return self
-
- if new_prefix is not None:
- if new_prefix > self._prefixlen:
- raise ValueError('new prefix must be shorter')
- if prefixlen_diff != 1:
- raise ValueError('cannot set prefixlen_diff and new_prefix')
- prefixlen_diff = self._prefixlen - new_prefix
-
- new_prefixlen = self.prefixlen - prefixlen_diff
- if new_prefixlen < 0:
- raise ValueError(
- 'current prefixlen is %d, cannot have a prefixlen_diff of %d' %
- (self.prefixlen, prefixlen_diff))
- return self.__class__((
- int(self.network_address) & (int(self.netmask) << prefixlen_diff),
- new_prefixlen))
-
- @property
- def is_multicast(self):
- """Test if the address is reserved for multicast use.
-
- Returns:
- A boolean, True if the address is a multicast address.
- See RFC 2373 2.7 for details.
-
- """
- return (self.network_address.is_multicast and
- self.broadcast_address.is_multicast)
-
- @staticmethod
- def _is_subnet_of(a, b):
- try:
- # Always false if one is v4 and the other is v6.
- if a._version != b._version:
- raise TypeError("%s and %s are not of the same version" % (a, b))
- return (b.network_address <= a.network_address and
- b.broadcast_address >= a.broadcast_address)
- except AttributeError:
- raise TypeError("Unable to test subnet containment "
- "between %s and %s" % (a, b))
-
- def subnet_of(self, other):
- """Return True if this network is a subnet of other."""
- return self._is_subnet_of(self, other)
-
- def supernet_of(self, other):
- """Return True if this network is a supernet of other."""
- return self._is_subnet_of(other, self)
-
- @property
- def is_reserved(self):
- """Test if the address is otherwise IETF reserved.
-
- Returns:
- A boolean, True if the address is within one of the
- reserved IPv6 Network ranges.
-
- """
- return (self.network_address.is_reserved and
- self.broadcast_address.is_reserved)
-
- @property
- def is_link_local(self):
- """Test if the address is reserved for link-local.
-
- Returns:
- A boolean, True if the address is reserved per RFC 4291.
-
- """
- return (self.network_address.is_link_local and
- self.broadcast_address.is_link_local)
-
- @property
- def is_private(self):
- """Test if this address is allocated for private networks.
-
- Returns:
- A boolean, True if the address is reserved per
- iana-ipv4-special-registry or iana-ipv6-special-registry.
-
- """
- return (self.network_address.is_private and
- self.broadcast_address.is_private)
-
- @property
- def is_global(self):
- """Test if this address is allocated for public networks.
-
- Returns:
- A boolean, True if the address is not reserved per
- iana-ipv4-special-registry or iana-ipv6-special-registry.
-
- """
- return not self.is_private
-
- @property
- def is_unspecified(self):
- """Test if the address is unspecified.
-
- Returns:
- A boolean, True if this is the unspecified address as defined in
- RFC 2373 2.5.2.
-
- """
- return (self.network_address.is_unspecified and
- self.broadcast_address.is_unspecified)
-
- @property
- def is_loopback(self):
- """Test if the address is a loopback address.
-
- Returns:
- A boolean, True if the address is a loopback address as defined in
- RFC 2373 2.5.3.
-
- """
- return (self.network_address.is_loopback and
- self.broadcast_address.is_loopback)
-
-
-class _BaseV4(object):
-
- """Base IPv4 object.
-
- The following methods are used by IPv4 objects in both single IP
- addresses and networks.
-
- """
-
- __slots__ = ()
- _version = 4
- # Equivalent to 255.255.255.255 or 32 bits of 1's.
- _ALL_ONES = (2 ** IPV4LENGTH) - 1
- _DECIMAL_DIGITS = frozenset('0123456789')
-
- # the valid octets for host and netmasks. only useful for IPv4.
- _valid_mask_octets = frozenset([255, 254, 252, 248, 240, 224, 192, 128, 0])
-
- _max_prefixlen = IPV4LENGTH
- # There are only a handful of valid v4 netmasks, so we cache them all
- # when constructed (see _make_netmask()).
- _netmask_cache = {}
-
- def _explode_shorthand_ip_string(self):
- return _compat_str(self)
-
- @classmethod
- def _make_netmask(cls, arg):
- """Make a (netmask, prefix_len) tuple from the given argument.
-
- Argument can be:
- - an integer (the prefix length)
- - a string representing the prefix length (e.g. "24")
- - a string representing the prefix netmask (e.g. "255.255.255.0")
- """
- if arg not in cls._netmask_cache:
- if isinstance(arg, _compat_int_types):
- prefixlen = arg
- else:
- try:
- # Check for a netmask in prefix length form
- prefixlen = cls._prefix_from_prefix_string(arg)
- except NetmaskValueError:
- # Check for a netmask or hostmask in dotted-quad form.
- # This may raise NetmaskValueError.
- prefixlen = cls._prefix_from_ip_string(arg)
- netmask = IPv4Address(cls._ip_int_from_prefix(prefixlen))
- cls._netmask_cache[arg] = netmask, prefixlen
- return cls._netmask_cache[arg]
-
- @classmethod
- def _ip_int_from_string(cls, ip_str):
- """Turn the given IP string into an integer for comparison.
-
- Args:
- ip_str: A string, the IP ip_str.
-
- Returns:
- The IP ip_str as an integer.
-
- Raises:
- AddressValueError: if ip_str isn't a valid IPv4 Address.
-
- """
- if not ip_str:
- raise AddressValueError('Address cannot be empty')
-
- octets = ip_str.split('.')
- if len(octets) != 4:
- raise AddressValueError("Expected 4 octets in %r" % ip_str)
-
- try:
- return _compat_int_from_byte_vals(
- map(cls._parse_octet, octets), 'big')
- except ValueError as exc:
- raise AddressValueError("%s in %r" % (exc, ip_str))
-
- @classmethod
- def _parse_octet(cls, octet_str):
- """Convert a decimal octet into an integer.
-
- Args:
- octet_str: A string, the number to parse.
-
- Returns:
- The octet as an integer.
-
- Raises:
- ValueError: if the octet isn't strictly a decimal from [0..255].
-
- """
- if not octet_str:
- raise ValueError("Empty octet not permitted")
- # Whitelist the characters, since int() allows a lot of bizarre stuff.
- if not cls._DECIMAL_DIGITS.issuperset(octet_str):
- msg = "Only decimal digits permitted in %r"
- raise ValueError(msg % octet_str)
- # We do the length check second, since the invalid character error
- # is likely to be more informative for the user
- if len(octet_str) > 3:
- msg = "At most 3 characters permitted in %r"
- raise ValueError(msg % octet_str)
- # Convert to integer (we know digits are legal)
- octet_int = int(octet_str, 10)
- # Any octets that look like they *might* be written in octal,
- # and which don't look exactly the same in both octal and
- # decimal are rejected as ambiguous
- if octet_int > 7 and octet_str[0] == '0':
- msg = "Ambiguous (octal/decimal) value in %r not permitted"
- raise ValueError(msg % octet_str)
- if octet_int > 255:
- raise ValueError("Octet %d (> 255) not permitted" % octet_int)
- return octet_int
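The octet parser above rejects empty octets, non-decimal characters, over-long strings, and ambiguous leading-zero values. A short sketch against the stdlib `ipaddress` module (which this file backports):

```python
import ipaddress

# A plain dotted quad parses to the expected 32-bit integer.
addr = ipaddress.IPv4Address('192.0.2.1')
assert int(addr) == 3221225985

# A leading-zero octet such as '010' is ambiguous (octal vs. decimal)
# and is rejected; AddressValueError subclasses ValueError.
try:
    ipaddress.IPv4Address('010.0.0.1')
    rejected = False
except ValueError:
    rejected = True
assert rejected
```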
-
- @classmethod
- def _string_from_ip_int(cls, ip_int):
- """Turns a 32-bit integer into dotted decimal notation.
-
- Args:
- ip_int: An integer, the IP address.
-
- Returns:
- The IP address as a string in dotted decimal notation.
-
- """
- return '.'.join(_compat_str(struct.unpack(b'!B', b)[0]
- if isinstance(b, bytes)
- else b)
- for b in _compat_to_bytes(ip_int, 4, 'big'))
-
- def _is_hostmask(self, ip_str):
- """Test if the IP string is a hostmask (rather than a netmask).
-
- Args:
- ip_str: A string, the potential hostmask.
-
- Returns:
- A boolean, True if the IP string is a hostmask.
-
- """
- bits = ip_str.split('.')
- try:
- parts = [x for x in map(int, bits) if x in self._valid_mask_octets]
- except ValueError:
- return False
- if len(parts) != len(bits):
- return False
- if parts[0] < parts[-1]:
- return True
- return False
-
- def _reverse_pointer(self):
- """Return the reverse DNS pointer name for the IPv4 address.
-
- This implements the method described in RFC1035 3.5.
-
- """
- reverse_octets = _compat_str(self).split('.')[::-1]
- return '.'.join(reverse_octets) + '.in-addr.arpa'
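For IPv4, the reverse pointer is simply the octets reversed under the `in-addr.arpa` zone, per RFC 1035 section 3.5; the stdlib exposes this as the `reverse_pointer` property:

```python
import ipaddress

# Octets reversed, joined with dots, suffixed with the reverse zone.
ptr = ipaddress.ip_address('192.0.2.1').reverse_pointer
assert ptr == '1.2.0.192.in-addr.arpa'
```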
-
- @property
- def max_prefixlen(self):
- return self._max_prefixlen
-
- @property
- def version(self):
- return self._version
-
-
-class IPv4Address(_BaseV4, _BaseAddress):
-
- """Represent and manipulate single IPv4 Addresses."""
-
- __slots__ = ('_ip', '__weakref__')
-
- def __init__(self, address):
-
- """
- Args:
- address: A string or integer representing the IP
-
- Additionally, an integer can be passed, so
- IPv4Address('192.0.2.1') == IPv4Address(3221225985).
- or, more generally
- IPv4Address(int(IPv4Address('192.0.2.1'))) ==
- IPv4Address('192.0.2.1')
-
- Raises:
- AddressValueError: If ipaddress isn't a valid IPv4 address.
-
- """
- # Efficient constructor from integer.
- if isinstance(address, _compat_int_types):
- self._check_int_address(address)
- self._ip = address
- return
-
- # Constructing from a packed address
- if isinstance(address, bytes):
- self._check_packed_address(address, 4)
- bvs = _compat_bytes_to_byte_vals(address)
- self._ip = _compat_int_from_byte_vals(bvs, 'big')
- return
-
- # Assume input argument to be string or any object representation
- # which converts into a formatted IP string.
- addr_str = _compat_str(address)
- if '/' in addr_str:
- raise AddressValueError("Unexpected '/' in %r" % address)
- self._ip = self._ip_int_from_string(addr_str)
-
- @property
- def packed(self):
- """The binary representation of this address."""
- return v4_int_to_packed(self._ip)
-
- @property
- def is_reserved(self):
- """Test if the address is otherwise IETF reserved.
-
- Returns:
- A boolean, True if the address is within the
- reserved IPv4 Network range.
-
- """
- return self in self._constants._reserved_network
-
- @property
- def is_private(self):
- """Test if this address is allocated for private networks.
-
- Returns:
- A boolean, True if the address is reserved per
- iana-ipv4-special-registry.
-
- """
- return any(self in net for net in self._constants._private_networks)
-
- @property
- def is_global(self):
- return (
- self not in self._constants._public_network and
- not self.is_private)
-
- @property
- def is_multicast(self):
- """Test if the address is reserved for multicast use.
-
- Returns:
- A boolean, True if the address is multicast.
- See RFC 3171 for details.
-
- """
- return self in self._constants._multicast_network
-
- @property
- def is_unspecified(self):
- """Test if the address is unspecified.
-
- Returns:
- A boolean, True if this is the unspecified address as defined in
- RFC 5735 3.
-
- """
- return self == self._constants._unspecified_address
-
- @property
- def is_loopback(self):
- """Test if the address is a loopback address.
-
- Returns:
- A boolean, True if the address is a loopback per RFC 3330.
-
- """
- return self in self._constants._loopback_network
-
- @property
- def is_link_local(self):
- """Test if the address is reserved for link-local.
-
- Returns:
- A boolean, True if the address is link-local per RFC 3927.
-
- """
- return self in self._constants._linklocal_network
-
-
-class IPv4Interface(IPv4Address):
-
- def __init__(self, address):
- if isinstance(address, (bytes, _compat_int_types)):
- IPv4Address.__init__(self, address)
- self.network = IPv4Network(self._ip)
- self._prefixlen = self._max_prefixlen
- return
-
- if isinstance(address, tuple):
- IPv4Address.__init__(self, address[0])
- if len(address) > 1:
- self._prefixlen = int(address[1])
- else:
- self._prefixlen = self._max_prefixlen
-
- self.network = IPv4Network(address, strict=False)
- self.netmask = self.network.netmask
- self.hostmask = self.network.hostmask
- return
-
- addr = _split_optional_netmask(address)
- IPv4Address.__init__(self, addr[0])
-
- self.network = IPv4Network(address, strict=False)
- self._prefixlen = self.network._prefixlen
-
- self.netmask = self.network.netmask
- self.hostmask = self.network.hostmask
-
- def __str__(self):
- return '%s/%d' % (self._string_from_ip_int(self._ip),
- self.network.prefixlen)
-
- def __eq__(self, other):
- address_equal = IPv4Address.__eq__(self, other)
- if not address_equal or address_equal is NotImplemented:
- return address_equal
- try:
- return self.network == other.network
- except AttributeError:
- # An interface with an associated network is NOT the
- # same as an unassociated address. That's why the hash
- # takes the extra info into account.
- return False
-
- def __lt__(self, other):
- address_less = IPv4Address.__lt__(self, other)
- if address_less is NotImplemented:
- return NotImplemented
- try:
- return (self.network < other.network or
- self.network == other.network and address_less)
- except AttributeError:
- # We *do* allow addresses and interfaces to be sorted. The
- # unassociated address is considered less than all interfaces.
- return False
-
- def __hash__(self):
- return self._ip ^ self._prefixlen ^ int(self.network.network_address)
-
- __reduce__ = _IPAddressBase.__reduce__
-
- @property
- def ip(self):
- return IPv4Address(self._ip)
-
- @property
- def with_prefixlen(self):
- return '%s/%s' % (self._string_from_ip_int(self._ip),
- self._prefixlen)
-
- @property
- def with_netmask(self):
- return '%s/%s' % (self._string_from_ip_int(self._ip),
- self.netmask)
-
- @property
- def with_hostmask(self):
- return '%s/%s' % (self._string_from_ip_int(self._ip),
- self.hostmask)
-
-
-class IPv4Network(_BaseV4, _BaseNetwork):
-
- """This class represents and manipulates 32-bit IPv4 networks.
-
- Attributes: [examples for IPv4Network('192.0.2.0/27')]
- .network_address: IPv4Address('192.0.2.0')
- .hostmask: IPv4Address('0.0.0.31')
- .broadcast_address: IPv4Address('192.0.2.31')
- .netmask: IPv4Address('255.255.255.224')
- .prefixlen: 27
-
- """
- # Class to use when creating address objects
- _address_class = IPv4Address
-
- def __init__(self, address, strict=True):
-
- """Instantiate a new IPv4 network object.
-
- Args:
- address: A string or integer representing the IP [& network].
- '192.0.2.0/24'
- '192.0.2.0/255.255.255.0'
- '192.0.0.2/0.0.0.255'
- are all functionally the same in IPv4. Similarly,
- '192.0.2.1'
- '192.0.2.1/255.255.255.255'
- '192.0.2.1/32'
- are also functionally equivalent. That is to say, failing to
- provide a subnetmask will create an object with a mask of /32.
-
- If the mask (portion after the / in the argument) is given in
- dotted quad form, it is treated as a netmask if it starts with a
- non-zero field (e.g. /255.0.0.0 == /8) and as a hostmask if it
- starts with a zero field (e.g. 0.255.255.255 == /8), with the
- single exception of an all-zero mask which is treated as a
- netmask == /0. If no mask is given, a default of /32 is used.
-
- Additionally, an integer can be passed, so
- IPv4Network('192.0.2.1') == IPv4Network(3221225985)
- or, more generally
- IPv4Interface(int(IPv4Interface('192.0.2.1'))) ==
- IPv4Interface('192.0.2.1')
-
- Raises:
- AddressValueError: If ipaddress isn't a valid IPv4 address.
- NetmaskValueError: If the netmask isn't valid for
- an IPv4 address.
- ValueError: If strict is True and a network address is not
- supplied.
-
- """
- _BaseNetwork.__init__(self, address)
-
- # Constructing from a packed address or integer
- if isinstance(address, (_compat_int_types, bytes)):
- self.network_address = IPv4Address(address)
- self.netmask, self._prefixlen = self._make_netmask(
- self._max_prefixlen)
- # fixme: address/network test here.
- return
-
- if isinstance(address, tuple):
- if len(address) > 1:
- arg = address[1]
- else:
- # We weren't given an address[1]
- arg = self._max_prefixlen
- self.network_address = IPv4Address(address[0])
- self.netmask, self._prefixlen = self._make_netmask(arg)
- packed = int(self.network_address)
- if packed & int(self.netmask) != packed:
- if strict:
- raise ValueError('%s has host bits set' % self)
- else:
- self.network_address = IPv4Address(packed &
- int(self.netmask))
- return
-
- # Assume input argument to be string or any object representation
- # which converts into a formatted IP prefix string.
- addr = _split_optional_netmask(address)
- self.network_address = IPv4Address(self._ip_int_from_string(addr[0]))
-
- if len(addr) == 2:
- arg = addr[1]
- else:
- arg = self._max_prefixlen
- self.netmask, self._prefixlen = self._make_netmask(arg)
-
- if strict:
- if (IPv4Address(int(self.network_address) & int(self.netmask)) !=
- self.network_address):
- raise ValueError('%s has host bits set' % self)
- self.network_address = IPv4Address(int(self.network_address) &
- int(self.netmask))
-
- if self._prefixlen == (self._max_prefixlen - 1):
- self.hosts = self.__iter__
-
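The constructor logic above accepts a prefix length, a dotted netmask, or a hostmask, and enforces `strict` mode by rejecting set host bits. A sketch of the equivalent stdlib behavior:

```python
import ipaddress

# Prefix length, dotted netmask, and hostmask all describe the same /24.
a = ipaddress.ip_network('192.0.2.0/24')
b = ipaddress.ip_network('192.0.2.0/255.255.255.0')
c = ipaddress.ip_network('192.0.2.0/0.0.0.255')
assert a == b == c

# With strict=True (the default), host bits raise ValueError...
try:
    ipaddress.ip_network('192.0.2.1/24')
    strict_raised = False
except ValueError:
    strict_raised = True
assert strict_raised

# ...while strict=False masks them away.
d = ipaddress.ip_network('192.0.2.1/24', strict=False)
assert str(d) == '192.0.2.0/24'
```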
- @property
- def is_global(self):
- """Test if this address is allocated for public networks.
-
- Returns:
- A boolean, True if the address is not reserved per
- iana-ipv4-special-registry.
-
- """
- return (not (self.network_address in IPv4Network('100.64.0.0/10') and
- self.broadcast_address in IPv4Network('100.64.0.0/10')) and
- not self.is_private)
-
-
-class _IPv4Constants(object):
-
- _linklocal_network = IPv4Network('169.254.0.0/16')
-
- _loopback_network = IPv4Network('127.0.0.0/8')
-
- _multicast_network = IPv4Network('224.0.0.0/4')
-
- _public_network = IPv4Network('100.64.0.0/10')
-
- _private_networks = [
- IPv4Network('0.0.0.0/8'),
- IPv4Network('10.0.0.0/8'),
- IPv4Network('127.0.0.0/8'),
- IPv4Network('169.254.0.0/16'),
- IPv4Network('172.16.0.0/12'),
- IPv4Network('192.0.0.0/29'),
- IPv4Network('192.0.0.170/31'),
- IPv4Network('192.0.2.0/24'),
- IPv4Network('192.168.0.0/16'),
- IPv4Network('198.18.0.0/15'),
- IPv4Network('198.51.100.0/24'),
- IPv4Network('203.0.113.0/24'),
- IPv4Network('240.0.0.0/4'),
- IPv4Network('255.255.255.255/32'),
- ]
-
- _reserved_network = IPv4Network('240.0.0.0/4')
-
- _unspecified_address = IPv4Address('0.0.0.0')
-
-
-IPv4Address._constants = _IPv4Constants
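The constants above drive `is_private` and `is_global`. Note the carve-out: the shared address space 100.64.0.0/10 (RFC 6598) is excluded from `is_global` via `_public_network` without being listed in `_private_networks`. A sketch using the stdlib module:

```python
import ipaddress

# RFC 1918 space is private and therefore not global.
assert ipaddress.ip_address('10.0.0.1').is_private
assert not ipaddress.ip_address('10.0.0.1').is_global

# 100.64.0.0/10 is neither global nor private: the carve-out case.
shared = ipaddress.ip_address('100.64.0.1')
assert not shared.is_global
assert not shared.is_private
```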
-
-
-class _BaseV6(object):
-
- """Base IPv6 object.
-
- The following methods are used by IPv6 objects in both single IP
- addresses and networks.
-
- """
-
- __slots__ = ()
- _version = 6
- _ALL_ONES = (2 ** IPV6LENGTH) - 1
- _HEXTET_COUNT = 8
- _HEX_DIGITS = frozenset('0123456789ABCDEFabcdef')
- _max_prefixlen = IPV6LENGTH
-
- # There are only a handful of valid v6 netmasks, so we cache them all
- # when constructed (see _make_netmask()).
- _netmask_cache = {}
-
- @classmethod
- def _make_netmask(cls, arg):
- """Make a (netmask, prefix_len) tuple from the given argument.
-
- Argument can be:
- - an integer (the prefix length)
- - a string representing the prefix length (e.g. "64")
- """
- if arg not in cls._netmask_cache:
- if isinstance(arg, _compat_int_types):
- prefixlen = arg
- else:
- prefixlen = cls._prefix_from_prefix_string(arg)
- netmask = IPv6Address(cls._ip_int_from_prefix(prefixlen))
- cls._netmask_cache[arg] = netmask, prefixlen
- return cls._netmask_cache[arg]
-
- @classmethod
- def _ip_int_from_string(cls, ip_str):
- """Turn an IPv6 ip_str into an integer.
-
- Args:
- ip_str: A string, the IPv6 ip_str.
-
- Returns:
- An int, the IPv6 address
-
- Raises:
- AddressValueError: if ip_str isn't a valid IPv6 Address.
-
- """
- if not ip_str:
- raise AddressValueError('Address cannot be empty')
-
- parts = ip_str.split(':')
-
- # An IPv6 address needs at least 2 colons (3 parts).
- _min_parts = 3
- if len(parts) < _min_parts:
- msg = "At least %d parts expected in %r" % (_min_parts, ip_str)
- raise AddressValueError(msg)
-
- # If the address has an IPv4-style suffix, convert it to hexadecimal.
- if '.' in parts[-1]:
- try:
- ipv4_int = IPv4Address(parts.pop())._ip
- except AddressValueError as exc:
- raise AddressValueError("%s in %r" % (exc, ip_str))
- parts.append('%x' % ((ipv4_int >> 16) & 0xFFFF))
- parts.append('%x' % (ipv4_int & 0xFFFF))
-
- # An IPv6 address can't have more than 8 colons (9 parts).
- # The extra colon comes from using the "::" notation for a single
- # leading or trailing zero part.
- _max_parts = cls._HEXTET_COUNT + 1
- if len(parts) > _max_parts:
- msg = "At most %d colons permitted in %r" % (
- _max_parts - 1, ip_str)
- raise AddressValueError(msg)
-
- # Disregarding the endpoints, find '::' with nothing in between.
- # This indicates that a run of zeroes has been skipped.
- skip_index = None
- for i in _compat_range(1, len(parts) - 1):
- if not parts[i]:
- if skip_index is not None:
- # Can't have more than one '::'
- msg = "At most one '::' permitted in %r" % ip_str
- raise AddressValueError(msg)
- skip_index = i
-
- # parts_hi is the number of parts to copy from above/before the '::'
- # parts_lo is the number of parts to copy from below/after the '::'
- if skip_index is not None:
- # If we found a '::', then check if it also covers the endpoints.
- parts_hi = skip_index
- parts_lo = len(parts) - skip_index - 1
- if not parts[0]:
- parts_hi -= 1
- if parts_hi:
- msg = "Leading ':' only permitted as part of '::' in %r"
- raise AddressValueError(msg % ip_str) # ^: requires ^::
- if not parts[-1]:
- parts_lo -= 1
- if parts_lo:
- msg = "Trailing ':' only permitted as part of '::' in %r"
- raise AddressValueError(msg % ip_str) # :$ requires ::$
- parts_skipped = cls._HEXTET_COUNT - (parts_hi + parts_lo)
- if parts_skipped < 1:
- msg = "Expected at most %d other parts with '::' in %r"
- raise AddressValueError(msg % (cls._HEXTET_COUNT - 1, ip_str))
- else:
- # Otherwise, allocate the entire address to parts_hi. The
- # endpoints could still be empty, but _parse_hextet() will check
- # for that.
- if len(parts) != cls._HEXTET_COUNT:
- msg = "Exactly %d parts expected without '::' in %r"
- raise AddressValueError(msg % (cls._HEXTET_COUNT, ip_str))
- if not parts[0]:
- msg = "Leading ':' only permitted as part of '::' in %r"
- raise AddressValueError(msg % ip_str) # ^: requires ^::
- if not parts[-1]:
- msg = "Trailing ':' only permitted as part of '::' in %r"
- raise AddressValueError(msg % ip_str) # :$ requires ::$
- parts_hi = len(parts)
- parts_lo = 0
- parts_skipped = 0
-
- try:
- # Now, parse the hextets into a 128-bit integer.
- ip_int = 0
- for i in range(parts_hi):
- ip_int <<= 16
- ip_int |= cls._parse_hextet(parts[i])
- ip_int <<= 16 * parts_skipped
- for i in range(-parts_lo, 0):
- ip_int <<= 16
- ip_int |= cls._parse_hextet(parts[i])
- return ip_int
- except ValueError as exc:
- raise AddressValueError("%s in %r" % (exc, ip_str))
-
- @classmethod
- def _parse_hextet(cls, hextet_str):
- """Convert an IPv6 hextet string into an integer.
-
- Args:
- hextet_str: A string, the number to parse.
-
- Returns:
- The hextet as an integer.
-
- Raises:
- ValueError: if the input isn't strictly a hex number from
- [0..FFFF].
-
- """
- # Whitelist the characters, since int() allows a lot of bizarre stuff.
- if not cls._HEX_DIGITS.issuperset(hextet_str):
- raise ValueError("Only hex digits permitted in %r" % hextet_str)
- # We do the length check second, since the invalid character error
- # is likely to be more informative for the user
- if len(hextet_str) > 4:
- msg = "At most 4 characters permitted in %r"
- raise ValueError(msg % hextet_str)
- # Length check means we can skip checking the integer value
- return int(hextet_str, 16)
-
- @classmethod
- def _compress_hextets(cls, hextets):
- """Compresses a list of hextets.
-
- Compresses a list of strings, replacing the longest continuous
- sequence of "0" in the list with "" and adding empty strings at
- the beginning or at the end of the string such that subsequently
- calling ":".join(hextets) will produce the compressed version of
- the IPv6 address.
-
- Args:
- hextets: A list of strings, the hextets to compress.
-
- Returns:
- A list of strings.
-
- """
- best_doublecolon_start = -1
- best_doublecolon_len = 0
- doublecolon_start = -1
- doublecolon_len = 0
- for index, hextet in enumerate(hextets):
- if hextet == '0':
- doublecolon_len += 1
- if doublecolon_start == -1:
- # Start of a sequence of zeros.
- doublecolon_start = index
- if doublecolon_len > best_doublecolon_len:
- # This is the longest sequence of zeros so far.
- best_doublecolon_len = doublecolon_len
- best_doublecolon_start = doublecolon_start
- else:
- doublecolon_len = 0
- doublecolon_start = -1
-
- if best_doublecolon_len > 1:
- best_doublecolon_end = (best_doublecolon_start +
- best_doublecolon_len)
- # For zeros at the end of the address.
- if best_doublecolon_end == len(hextets):
- hextets += ['']
- hextets[best_doublecolon_start:best_doublecolon_end] = ['']
- # For zeros at the beginning of the address.
- if best_doublecolon_start == 0:
- hextets = [''] + hextets
-
- return hextets
-
- @classmethod
- def _string_from_ip_int(cls, ip_int=None):
- """Turns a 128-bit integer into hexadecimal notation.
-
- Args:
- ip_int: An integer, the IP address.
-
- Returns:
- A string, the hexadecimal representation of the address.
-
- Raises:
- ValueError: The address is bigger than 128 bits of all ones.
-
- """
- if ip_int is None:
- ip_int = int(cls._ip)
-
- if ip_int > cls._ALL_ONES:
- raise ValueError('IPv6 address is too large')
-
- hex_str = '%032x' % ip_int
- hextets = ['%x' % int(hex_str[x:x + 4], 16) for x in range(0, 32, 4)]
-
- hextets = cls._compress_hextets(hextets)
- return ':'.join(hextets)
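The effect of `_compress_hextets` plus `_string_from_ip_int` is the canonical compressed IPv6 form: leading zeros within hextets are dropped and the longest run of zero hextets collapses to `::`. Illustrated with the stdlib module:

```python
import ipaddress

# Fully written-out form compresses to the canonical short form.
short = str(ipaddress.IPv6Address('2001:0db8:0000:0000:0000:0000:0000:0001'))
assert short == '2001:db8::1'

# With two zero runs, the longer one is the one compressed.
tie = str(ipaddress.IPv6Address('1:0:0:2:0:0:0:3'))
assert tie == '1:0:0:2::3'
```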
-
- def _explode_shorthand_ip_string(self):
- """Expand a shortened IPv6 address.
-
- Args:
- ip_str: A string, the IPv6 address.
-
- Returns:
- A string, the expanded IPv6 address.
-
- """
- if isinstance(self, IPv6Network):
- ip_str = _compat_str(self.network_address)
- elif isinstance(self, IPv6Interface):
- ip_str = _compat_str(self.ip)
- else:
- ip_str = _compat_str(self)
-
- ip_int = self._ip_int_from_string(ip_str)
- hex_str = '%032x' % ip_int
- parts = [hex_str[x:x + 4] for x in range(0, 32, 4)]
- if isinstance(self, (_BaseNetwork, IPv6Interface)):
- return '%s/%d' % (':'.join(parts), self._prefixlen)
- return ':'.join(parts)
-
- def _reverse_pointer(self):
- """Return the reverse DNS pointer name for the IPv6 address.
-
- This implements the method described in RFC3596 2.5.
-
- """
- reverse_chars = self.exploded[::-1].replace(':', '')
- return '.'.join(reverse_chars) + '.ip6.arpa'
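For IPv6 the reverse pointer is built from every nibble of the exploded form, reversed and dot-separated under `ip6.arpa`, per RFC 3596 section 2.5:

```python
import ipaddress

# 32 nibbles of the exploded address, reversed, under ip6.arpa.
ptr = ipaddress.ip_address('2001:db8::1').reverse_pointer
assert ptr == ('1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.'
               '0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa')
```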
-
- @property
- def max_prefixlen(self):
- return self._max_prefixlen
-
- @property
- def version(self):
- return self._version
-
-
-class IPv6Address(_BaseV6, _BaseAddress):
-
- """Represent and manipulate single IPv6 Addresses."""
-
- __slots__ = ('_ip', '__weakref__')
-
- def __init__(self, address):
- """Instantiate a new IPv6 address object.
-
- Args:
- address: A string or integer representing the IP
-
- Additionally, an integer can be passed, so
- IPv6Address('2001:db8::') ==
- IPv6Address(42540766411282592856903984951653826560)
- or, more generally
- IPv6Address(int(IPv6Address('2001:db8::'))) ==
- IPv6Address('2001:db8::')
-
- Raises:
- AddressValueError: If address isn't a valid IPv6 address.
-
- """
- # Efficient constructor from integer.
- if isinstance(address, _compat_int_types):
- self._check_int_address(address)
- self._ip = address
- return
-
- # Constructing from a packed address
- if isinstance(address, bytes):
- self._check_packed_address(address, 16)
- bvs = _compat_bytes_to_byte_vals(address)
- self._ip = _compat_int_from_byte_vals(bvs, 'big')
- return
-
- # Assume input argument to be string or any object representation
- # which converts into a formatted IP string.
- addr_str = _compat_str(address)
- if '/' in addr_str:
- raise AddressValueError("Unexpected '/' in %r" % address)
- self._ip = self._ip_int_from_string(addr_str)
-
- @property
- def packed(self):
- """The binary representation of this address."""
- return v6_int_to_packed(self._ip)
-
- @property
- def is_multicast(self):
- """Test if the address is reserved for multicast use.
-
- Returns:
- A boolean, True if the address is a multicast address.
- See RFC 2373 2.7 for details.
-
- """
- return self in self._constants._multicast_network
-
- @property
- def is_reserved(self):
- """Test if the address is otherwise IETF reserved.
-
- Returns:
- A boolean, True if the address is within one of the
- reserved IPv6 Network ranges.
-
- """
- return any(self in x for x in self._constants._reserved_networks)
-
- @property
- def is_link_local(self):
- """Test if the address is reserved for link-local.
-
- Returns:
- A boolean, True if the address is reserved per RFC 4291.
-
- """
- return self in self._constants._linklocal_network
-
- @property
- def is_site_local(self):
- """Test if the address is reserved for site-local.
-
- Note that the site-local address space has been deprecated by RFC 3879.
- Use is_private to test if this address is in the space of unique local
- addresses as defined by RFC 4193.
-
- Returns:
- A boolean, True if the address is reserved per RFC 3513 2.5.6.
-
- """
- return self in self._constants._sitelocal_network
-
- @property
- def is_private(self):
- """Test if this address is allocated for private networks.
-
- Returns:
- A boolean, True if the address is reserved per
- iana-ipv6-special-registry.
-
- """
- return any(self in net for net in self._constants._private_networks)
-
- @property
- def is_global(self):
- """Test if this address is allocated for public networks.
-
- Returns:
- A boolean, True if the address is not reserved per
- iana-ipv6-special-registry.
-
- """
- return not self.is_private
-
- @property
- def is_unspecified(self):
- """Test if the address is unspecified.
-
- Returns:
- A boolean, True if this is the unspecified address as defined in
- RFC 2373 2.5.2.
-
- """
- return self._ip == 0
-
- @property
- def is_loopback(self):
- """Test if the address is a loopback address.
-
- Returns:
- A boolean, True if the address is a loopback address as defined in
- RFC 2373 2.5.3.
-
- """
- return self._ip == 1
-
- @property
- def ipv4_mapped(self):
- """Return the IPv4 mapped address.
-
- Returns:
- If the IPv6 address is a v4 mapped address, return the
- IPv4 mapped address. Return None otherwise.
-
- """
- if (self._ip >> 32) != 0xFFFF:
- return None
- return IPv4Address(self._ip & 0xFFFFFFFF)
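The `ipv4_mapped` check above tests for the `::ffff:0:0/96` prefix (bits above the low 32 must equal 0xFFFF) and, when it matches, extracts the embedded IPv4 address from the low 32 bits:

```python
import ipaddress

# ::ffff:0:0/96 embeds an IPv4 address in the low 32 bits.
mapped = ipaddress.IPv6Address('::ffff:192.0.2.1').ipv4_mapped
assert mapped == ipaddress.IPv4Address('192.0.2.1')

# Addresses outside that prefix report None.
assert ipaddress.IPv6Address('2001:db8::1').ipv4_mapped is None
```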
-
- @property
- def teredo(self):
- """Tuple of embedded teredo IPs.
-
- Returns:
- Tuple of the (server, client) IPs or None if the address
- doesn't appear to be a teredo address (doesn't start with
- 2001::/32)
-
- """
- if (self._ip >> 96) != 0x20010000:
- return None
- return (IPv4Address((self._ip >> 64) & 0xFFFFFFFF),
- IPv4Address(~self._ip & 0xFFFFFFFF))
-
- @property
- def sixtofour(self):
- """Return the IPv4 6to4 embedded address.
-
- Returns:
- The IPv4 6to4-embedded address if present or None if the
- address doesn't appear to contain a 6to4 embedded address.
-
- """
- if (self._ip >> 112) != 0x2002:
- return None
- return IPv4Address((self._ip >> 80) & 0xFFFFFFFF)
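The bit shifts in `teredo` and `sixtofour` pick the embedded IPv4 addresses out of fixed positions: 6to4 carries it in bits 48..80 after the `2002::/16` prefix, while Teredo carries the server in bits 32..64 and the client, bitwise-complemented, in the low 32 bits. A sketch using the RFC 4380 example address:

```python
import ipaddress

# 6to4 (2002::/16): the next 32 bits are the embedded IPv4 address.
assert ipaddress.IPv6Address('2002:c000:204::').sixtofour == \
    ipaddress.IPv4Address('192.0.2.4')

# Teredo (2001::/32): server, then obfuscated (complemented) client.
server, client = ipaddress.IPv6Address(
    '2001:0:4136:e378:8000:63bf:3fff:fdd2').teredo
assert server == ipaddress.IPv4Address('65.54.227.120')
assert client == ipaddress.IPv4Address('192.0.2.45')
```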
-
-
-class IPv6Interface(IPv6Address):
-
- def __init__(self, address):
- if isinstance(address, (bytes, _compat_int_types)):
- IPv6Address.__init__(self, address)
- self.network = IPv6Network(self._ip)
- self._prefixlen = self._max_prefixlen
- return
- if isinstance(address, tuple):
- IPv6Address.__init__(self, address[0])
- if len(address) > 1:
- self._prefixlen = int(address[1])
- else:
- self._prefixlen = self._max_prefixlen
- self.network = IPv6Network(address, strict=False)
- self.netmask = self.network.netmask
- self.hostmask = self.network.hostmask
- return
-
- addr = _split_optional_netmask(address)
- IPv6Address.__init__(self, addr[0])
- self.network = IPv6Network(address, strict=False)
- self.netmask = self.network.netmask
- self._prefixlen = self.network._prefixlen
- self.hostmask = self.network.hostmask
-
- def __str__(self):
- return '%s/%d' % (self._string_from_ip_int(self._ip),
- self.network.prefixlen)
-
- def __eq__(self, other):
- address_equal = IPv6Address.__eq__(self, other)
- if not address_equal or address_equal is NotImplemented:
- return address_equal
- try:
- return self.network == other.network
- except AttributeError:
- # An interface with an associated network is NOT the
- # same as an unassociated address. That's why the hash
- # takes the extra info into account.
- return False
-
- def __lt__(self, other):
- address_less = IPv6Address.__lt__(self, other)
- if address_less is NotImplemented:
- return NotImplemented
- try:
- return (self.network < other.network or
- self.network == other.network and address_less)
- except AttributeError:
- # We *do* allow addresses and interfaces to be sorted. The
- # unassociated address is considered less than all interfaces.
- return False
-
- def __hash__(self):
- return self._ip ^ self._prefixlen ^ int(self.network.network_address)
-
- __reduce__ = _IPAddressBase.__reduce__
-
- @property
- def ip(self):
- return IPv6Address(self._ip)
-
- @property
- def with_prefixlen(self):
- return '%s/%s' % (self._string_from_ip_int(self._ip),
- self._prefixlen)
-
- @property
- def with_netmask(self):
- return '%s/%s' % (self._string_from_ip_int(self._ip),
- self.netmask)
-
- @property
- def with_hostmask(self):
- return '%s/%s' % (self._string_from_ip_int(self._ip),
- self.hostmask)
-
- @property
- def is_unspecified(self):
- return self._ip == 0 and self.network.is_unspecified
-
- @property
- def is_loopback(self):
- return self._ip == 1 and self.network.is_loopback
-
-
-class IPv6Network(_BaseV6, _BaseNetwork):
-
- """This class represents and manipulates 128-bit IPv6 networks.
-
- Attributes: [examples for IPv6Network('2001:db8::1000/124')]
- .network_address: IPv6Address('2001:db8::1000')
- .hostmask: IPv6Address('::f')
- .broadcast_address: IPv6Address('2001:db8::100f')
- .netmask: IPv6Address('ffff:ffff:ffff:ffff:ffff:ffff:ffff:fff0')
- .prefixlen: 124
-
- """
-
- # Class to use when creating address objects
- _address_class = IPv6Address
-
- def __init__(self, address, strict=True):
- """Instantiate a new IPv6 Network object.
-
- Args:
- address: A string or integer representing the IPv6 network or the
- IP and prefix/netmask.
- '2001:db8::/128'
- '2001:db8:0000:0000:0000:0000:0000:0000/128'
- '2001:db8::'
- are all functionally the same in IPv6. That is to say,
- failing to provide a subnetmask will create an object with
- a mask of /128.
-
- Additionally, an integer can be passed, so
- IPv6Network('2001:db8::') ==
- IPv6Network(42540766411282592856903984951653826560)
- or, more generally
- IPv6Network(int(IPv6Network('2001:db8::'))) ==
- IPv6Network('2001:db8::')
-
- strict: A boolean. If true, ensure that we have been passed
- A true network address, eg, 2001:db8::1000/124 and not an
- IP address on a network, eg, 2001:db8::1/124.
-
- Raises:
- AddressValueError: If address isn't a valid IPv6 address.
- NetmaskValueError: If the netmask isn't valid for
- an IPv6 address.
- ValueError: If strict was True and a network address was not
- supplied.
-
- """
- _BaseNetwork.__init__(self, address)
-
- # Efficient constructor from integer or packed address
- if isinstance(address, (bytes, _compat_int_types)):
- self.network_address = IPv6Address(address)
- self.netmask, self._prefixlen = self._make_netmask(
- self._max_prefixlen)
- return
-
- if isinstance(address, tuple):
- if len(address) > 1:
- arg = address[1]
- else:
- arg = self._max_prefixlen
- self.netmask, self._prefixlen = self._make_netmask(arg)
- self.network_address = IPv6Address(address[0])
- packed = int(self.network_address)
- if packed & int(self.netmask) != packed:
- if strict:
- raise ValueError('%s has host bits set' % self)
- else:
- self.network_address = IPv6Address(packed &
- int(self.netmask))
- return
-
- # Assume input argument to be string or any object representation
- # which converts into a formatted IP prefix string.
- addr = _split_optional_netmask(address)
-
- self.network_address = IPv6Address(self._ip_int_from_string(addr[0]))
-
- if len(addr) == 2:
- arg = addr[1]
- else:
- arg = self._max_prefixlen
- self.netmask, self._prefixlen = self._make_netmask(arg)
-
- if strict:
- if (IPv6Address(int(self.network_address) & int(self.netmask)) !=
- self.network_address):
- raise ValueError('%s has host bits set' % self)
- self.network_address = IPv6Address(int(self.network_address) &
- int(self.netmask))
-
- if self._prefixlen == (self._max_prefixlen - 1):
- self.hosts = self.__iter__
-
- def hosts(self):
- """Generate Iterator over usable hosts in a network.
-
- This is like __iter__ except it doesn't return the
- Subnet-Router anycast address.
-
- """
- network = int(self.network_address)
- broadcast = int(self.broadcast_address)
- for x in _compat_range(network + 1, broadcast + 1):
- yield self._address_class(x)
-
- @property
- def is_site_local(self):
- """Test if the address is reserved for site-local.
-
- Note that the site-local address space has been deprecated by RFC 3879.
- Use is_private to test if this address is in the space of unique local
- addresses as defined by RFC 4193.
-
- Returns:
- A boolean, True if the address is reserved per RFC 3513 2.5.6.
-
- """
- return (self.network_address.is_site_local and
- self.broadcast_address.is_site_local)
-
-
-class _IPv6Constants(object):
-
- _linklocal_network = IPv6Network('fe80::/10')
-
- _multicast_network = IPv6Network('ff00::/8')
-
- _private_networks = [
- IPv6Network('::1/128'),
- IPv6Network('::/128'),
- IPv6Network('::ffff:0:0/96'),
- IPv6Network('100::/64'),
- IPv6Network('2001::/23'),
- IPv6Network('2001:2::/48'),
- IPv6Network('2001:db8::/32'),
- IPv6Network('2001:10::/28'),
- IPv6Network('fc00::/7'),
- IPv6Network('fe80::/10'),
- ]
-
- _reserved_networks = [
- IPv6Network('::/8'), IPv6Network('100::/8'),
- IPv6Network('200::/7'), IPv6Network('400::/6'),
- IPv6Network('800::/5'), IPv6Network('1000::/4'),
- IPv6Network('4000::/3'), IPv6Network('6000::/3'),
- IPv6Network('8000::/3'), IPv6Network('A000::/3'),
- IPv6Network('C000::/3'), IPv6Network('E000::/4'),
- IPv6Network('F000::/5'), IPv6Network('F800::/6'),
- IPv6Network('FE00::/9'),
- ]
-
- _sitelocal_network = IPv6Network('fec0::/10')
-
-
-IPv6Address._constants = _IPv6Constants
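The vendored `ipaddress` backport removed above mirrors Python 3's standard-library `ipaddress` module, so its documented behaviour can be illustrated with the stdlib directly. A short sketch of the `IPv6Network` attributes from the class docstring, and of `hosts()` skipping the Subnet-Router anycast address (the network address itself):

```python
# Sketch: the removed backport mirrors the stdlib ipaddress module,
# so the documented IPv6Network behaviour can be shown with the stdlib.
import ipaddress

net = ipaddress.IPv6Network('2001:db8::1000/124')

# hosts() yields every address except the Subnet-Router anycast
# (the network address), so it starts at ::1001 and ends at ::100f.
hosts = list(net.hosts())

prefixlen = net.prefixlen                 # 124
hostmask = str(net.hostmask)              # '::f'
broadcast = str(net.broadcast_address)    # '2001:db8::100f'
```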
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/module_utils/core.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/module_utils/core.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/module_utils/core.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/module_utils/core.py 2021-11-12 18:13:53.000000000 +0000
@@ -121,9 +121,17 @@
self._module = AnsibleAWSModule.default_settings["module_class"](**kwargs)
- if local_settings["check_boto3"] and not HAS_BOTO3:
- self._module.fail_json(
- msg=missing_required_lib('botocore or boto3'))
+ if local_settings["check_boto3"]:
+ if not HAS_BOTO3:
+ self._module.fail_json(
+ msg=missing_required_lib('botocore or boto3'))
+ current_versions = self._gather_versions()
+ if not self.botocore_at_least('1.18.0'):
+ self.warn('botocore < 1.18.0 is not supported or tested.'
+ ' Some features may not work.')
+ if not self.boto3_at_least("1.15.0"):
+ self.warn('boto3 < 1.15.0 is not supported or tested.'
+ ' Some features may not work.')
self.check_mode = self._module.check_mode
self._diff = self._module._diff
@@ -190,8 +198,8 @@
region=region, endpoint=ec2_url, **aws_connect_kwargs)
@property
- def region(self, boto3=True):
- return get_aws_region(self, boto3)
+ def region(self):
+ return get_aws_region(self, True)
def fail_json_aws(self, exception, msg=None, **kwargs):
"""call fail_json with processed exception
@@ -244,6 +252,25 @@
return dict(boto3_version=boto3.__version__,
botocore_version=botocore.__version__)
+ def require_boto3_at_least(self, desired, **kwargs):
+ """Check if the available boto3 version is greater than or equal to a desired version.
+
+ calls fail_json() when the boto3 version is less than the desired
+ version
+
+ Usage:
+ module.require_boto3_at_least("1.2.3", reason="to update tags")
+ module.require_boto3_at_least("1.1.1")
+
+ :param desired: the minimum desired version
+ :param reason: why the version is required (optional)
+ """
+ if not self.boto3_at_least(desired):
+ self._module.fail_json(
+ msg=missing_required_lib('boto3>={0}'.format(desired), **kwargs),
+ **self._gather_versions()
+ )
+
def boto3_at_least(self, desired):
"""Check if the available boto3 version is greater than or equal to a desired version.
@@ -255,6 +282,25 @@
existing = self._gather_versions()
return LooseVersion(existing['boto3_version']) >= LooseVersion(desired)
+ def require_botocore_at_least(self, desired, **kwargs):
+ """Check if the available botocore version is greater than or equal to a desired version.
+
+ calls fail_json() when the botocore version is less than the desired
+ version
+
+ Usage:
+ module.require_botocore_at_least("1.2.3", reason="to update tags")
+ module.require_botocore_at_least("1.1.1")
+
+ :param desired: the minimum desired version
+ :param reason: why the version is required (optional)
+ """
+ if not self.botocore_at_least(desired):
+ self._module.fail_json(
+ msg=missing_required_lib('botocore>={0}'.format(desired), **kwargs),
+ **self._gather_versions()
+ )
+
def botocore_at_least(self, desired):
"""Check if the available botocore version is greater than or equal to a desired version.
@@ -360,7 +406,7 @@
return parameters
-def scrub_none_parameters(parameters, descend_into_lists=False):
+def scrub_none_parameters(parameters, descend_into_lists=True):
"""
Iterate over a dictionary removing any keys that have a None value
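The default change above (`descend_into_lists` flipping from `False` to `True`) means a `scrub_none_parameters`-style helper now also scrubs `None` values from dictionaries nested inside lists. A minimal sketch of that behaviour (not the collection's exact implementation):

```python
# Minimal sketch of a scrub_none_parameters-style helper with
# descend_into_lists=True, the new default in this release.
# This is an illustrative reimplementation, not the collection's code.
def scrub_none_parameters(parameters, descend_into_lists=True):
    clean = {}
    for key, value in parameters.items():
        if value is None:
            continue  # drop keys whose value is None
        if isinstance(value, dict):
            clean[key] = scrub_none_parameters(value, descend_into_lists)
        elif descend_into_lists and isinstance(value, list):
            # scrub dicts found inside lists; other list items pass through
            clean[key] = [
                scrub_none_parameters(v, descend_into_lists) if isinstance(v, dict) else v
                for v in value
            ]
        else:
            clean[key] = value
    return clean

params = {'Name': 'demo', 'Unset': None, 'Rules': [{'Port': 80, 'Desc': None}]}
scrubbed = scrub_none_parameters(params)
```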
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/module_utils/ec2.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/module_utils/ec2.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/module_utils/ec2.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/module_utils/ec2.py 2021-11-12 18:13:53.000000000 +0000
@@ -35,7 +35,6 @@
import traceback
from ansible.module_utils._text import to_native
-from ansible.module_utils._text import to_text
from ansible.module_utils.ansible_release import __version__
from ansible.module_utils.basic import env_fallback
from ansible.module_utils.basic import missing_required_lib
@@ -50,6 +49,15 @@
from ansible.module_utils.common.dict_transformations import snake_dict_to_camel_dict # pylint: disable=unused-import
from .cloud import CloudRetry
+# Used to live here, moved into ansible_collections.amazon.aws.plugins.module_utils.tagging
+from .tagging import ansible_dict_to_boto3_tag_list
+from .tagging import boto3_tag_list_to_ansible_dict
+from .tagging import compare_aws_tags
+
+# Used to live here, moved into ansible_collections.amazon.aws.plugins.module_utils.policy
+from .policy import _py3cmp as py3cmp # pylint: disable=unused-import
+from .policy import compare_policies # pylint: disable=unused-import
+from .policy import sort_json_policy_dict # pylint: disable=unused-import
BOTO_IMP_ERR = None
try:
@@ -69,14 +77,6 @@
BOTO3_IMP_ERR = traceback.format_exc()
HAS_BOTO3 = False
-try:
- # Although this is to allow Python 3 the ability to use the custom comparison as a key, Python 2.7 also
- # uses this (and it works as expected). Python 2.6 will trigger the ImportError.
- from functools import cmp_to_key
- PY3_COMPARISON = True
-except ImportError:
- PY3_COMPARISON = False
-
class AnsibleAWSError(Exception):
pass
@@ -462,73 +462,6 @@
return filters_list
-def boto3_tag_list_to_ansible_dict(tags_list, tag_name_key_name=None, tag_value_key_name=None):
-
- """ Convert a boto3 list of resource tags to a flat dict of key:value pairs
- Args:
- tags_list (list): List of dicts representing AWS tags.
- tag_name_key_name (str): Value to use as the key for all tag keys (useful because boto3 doesn't always use "Key")
- tag_value_key_name (str): Value to use as the key for all tag values (useful because boto3 doesn't always use "Value")
- Basic Usage:
- >>> tags_list = [{'Key': 'MyTagKey', 'Value': 'MyTagValue'}]
- >>> boto3_tag_list_to_ansible_dict(tags_list)
- [
- {
- 'Key': 'MyTagKey',
- 'Value': 'MyTagValue'
- }
- ]
- Returns:
- Dict: Dict of key:value pairs representing AWS tags
- {
- 'MyTagKey': 'MyTagValue',
- }
- """
-
- if tag_name_key_name and tag_value_key_name:
- tag_candidates = {tag_name_key_name: tag_value_key_name}
- else:
- tag_candidates = {'key': 'value', 'Key': 'Value'}
-
- # minio seems to return [{}] as an empty tags_list
- if not tags_list or not any(tag for tag in tags_list):
- return {}
- for k, v in tag_candidates.items():
- if k in tags_list[0] and v in tags_list[0]:
- return dict((tag[k], tag[v]) for tag in tags_list)
- raise ValueError("Couldn't find tag key (candidates %s) in tag list %s" % (str(tag_candidates), str(tags_list)))
-
-
-def ansible_dict_to_boto3_tag_list(tags_dict, tag_name_key_name='Key', tag_value_key_name='Value'):
-
- """ Convert a flat dict of key:value pairs representing AWS resource tags to a boto3 list of dicts
- Args:
- tags_dict (dict): Dict representing AWS resource tags.
- tag_name_key_name (str): Value to use as the key for all tag keys (useful because boto3 doesn't always use "Key")
- tag_value_key_name (str): Value to use as the key for all tag values (useful because boto3 doesn't always use "Value")
- Basic Usage:
- >>> tags_dict = {'MyTagKey': 'MyTagValue'}
- >>> ansible_dict_to_boto3_tag_list(tags_dict)
- {
- 'MyTagKey': 'MyTagValue'
- }
- Returns:
- List: List of dicts containing tag keys and values
- [
- {
- 'Key': 'MyTagKey',
- 'Value': 'MyTagValue'
- }
- ]
- """
-
- tags_list = []
- for k, v in tags_dict.items():
- tags_list.append({tag_name_key_name: k, tag_value_key_name: to_native(v)})
-
- return tags_list
-
-
def get_ec2_security_group_ids_from_names(sec_group_list, ec2_connection, vpc_id=None, boto3=True):
""" Return list of security group IDs from security group names. Note that security group names are not unique
@@ -592,152 +525,6 @@
return sec_group_id_list
-def _hashable_policy(policy, policy_list):
- """
- Takes a policy and returns a list, the contents of which are all hashable and sorted.
- Example input policy:
- {'Version': '2012-10-17',
- 'Statement': [{'Action': 's3:PutObjectAcl',
- 'Sid': 'AddCannedAcl2',
- 'Resource': 'arn:aws:s3:::test_policy/*',
- 'Effect': 'Allow',
- 'Principal': {'AWS': ['arn:aws:iam::XXXXXXXXXXXX:user/username1', 'arn:aws:iam::XXXXXXXXXXXX:user/username2']}
- }]}
- Returned value:
- [('Statement', ((('Action', (u's3:PutObjectAcl',)),
- ('Effect', (u'Allow',)),
- ('Principal', ('AWS', ((u'arn:aws:iam::XXXXXXXXXXXX:user/username1',), (u'arn:aws:iam::XXXXXXXXXXXX:user/username2',)))),
- ('Resource', (u'arn:aws:s3:::test_policy/*',)), ('Sid', (u'AddCannedAcl2',)))),
- ('Version', (u'2012-10-17',)))]
-
- """
- # Amazon will automatically convert bool and int to strings for us
- if isinstance(policy, bool):
- return tuple([str(policy).lower()])
- elif isinstance(policy, int):
- return tuple([str(policy)])
-
- if isinstance(policy, list):
- for each in policy:
- tupleified = _hashable_policy(each, [])
- if isinstance(tupleified, list):
- tupleified = tuple(tupleified)
- policy_list.append(tupleified)
- elif isinstance(policy, string_types) or isinstance(policy, binary_type):
- policy = to_text(policy)
- # convert root account ARNs to just account IDs
- if policy.startswith('arn:aws:iam::') and policy.endswith(':root'):
- policy = policy.split(':')[4]
- return [policy]
- elif isinstance(policy, dict):
- sorted_keys = list(policy.keys())
- sorted_keys.sort()
- for key in sorted_keys:
- element = policy[key]
- # Special case defined in
- # https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html
- if key in ["NotPrincipal", "Principal"] and policy[key] == "*":
- element = {"AWS": "*"}
- tupleified = _hashable_policy(element, [])
- if isinstance(tupleified, list):
- tupleified = tuple(tupleified)
- policy_list.append((key, tupleified))
-
- # ensure we aren't returning deeply nested structures of length 1
- if len(policy_list) == 1 and isinstance(policy_list[0], tuple):
- policy_list = policy_list[0]
- if isinstance(policy_list, list):
- if PY3_COMPARISON:
- policy_list.sort(key=cmp_to_key(py3cmp))
- else:
- policy_list.sort()
- return policy_list
-
-
-def py3cmp(a, b):
- """ Python 2 can sort lists of mixed types. Strings < tuples. Without this function this fails on Python 3."""
- try:
- if a > b:
- return 1
- elif a < b:
- return -1
- else:
- return 0
- except TypeError as e:
- # check to see if they're tuple-string
- # always say strings are less than tuples (to maintain compatibility with python2)
- str_ind = to_text(e).find('str')
- tup_ind = to_text(e).find('tuple')
- if -1 not in (str_ind, tup_ind):
- if str_ind < tup_ind:
- return -1
- elif tup_ind < str_ind:
- return 1
- raise
-
-
-def compare_policies(current_policy, new_policy, default_version="2008-10-17"):
- """ Compares the existing policy and the updated policy
- Returns True if there is a difference between policies.
- """
- # https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_version.html
- if default_version:
- if isinstance(current_policy, dict):
- current_policy = current_policy.copy()
- current_policy.setdefault("Version", default_version)
- if isinstance(new_policy, dict):
- new_policy = new_policy.copy()
- new_policy.setdefault("Version", default_version)
-
- return set(_hashable_policy(new_policy, [])) != set(_hashable_policy(current_policy, []))
-
-
-def sort_json_policy_dict(policy_dict):
-
- """ Sort any lists in an IAM JSON policy so that comparison of two policies with identical values but
- different orders will return true
- Args:
- policy_dict (dict): Dict representing IAM JSON policy.
- Basic Usage:
- >>> my_iam_policy = {'Principle': {'AWS':["31","7","14","101"]}
- >>> sort_json_policy_dict(my_iam_policy)
- Returns:
- Dict: Will return a copy of the policy as a Dict but any List will be sorted
- {
- 'Principle': {
- 'AWS': [ '7', '14', '31', '101' ]
- }
- }
- """
-
- def value_is_list(my_list):
-
- checked_list = []
- for item in my_list:
- if isinstance(item, dict):
- checked_list.append(sort_json_policy_dict(item))
- elif isinstance(item, list):
- checked_list.append(value_is_list(item))
- else:
- checked_list.append(item)
-
- # Sort list. If it's a list of dictionaries, sort by tuple of key-value
- # pairs, since Python 3 doesn't allow comparisons such as `<` between dictionaries.
- checked_list.sort(key=lambda x: sorted(x.items()) if isinstance(x, dict) else x)
- return checked_list
-
- ordered_policy_dict = {}
- for key, value in policy_dict.items():
- if isinstance(value, dict):
- ordered_policy_dict[key] = sort_json_policy_dict(value)
- elif isinstance(value, list):
- ordered_policy_dict[key] = value_is_list(value)
- else:
- ordered_policy_dict[key] = value
-
- return ordered_policy_dict
-
-
def map_complex_type(complex_type, type_map):
"""
Allows to cast elements within a dictionary to a specific type
@@ -780,33 +567,6 @@
return new_type
-def compare_aws_tags(current_tags_dict, new_tags_dict, purge_tags=True):
- """
- Compare two dicts of AWS tags. Dicts are expected to of been created using 'boto3_tag_list_to_ansible_dict' helper function.
- Two dicts are returned - the first is tags to be set, the second is any tags to remove. Since the AWS APIs differ
- these may not be able to be used out of the box.
-
- :param current_tags_dict:
- :param new_tags_dict:
- :param purge_tags:
- :return: tag_key_value_pairs_to_set: a dict of key value pairs that need to be set in AWS. If all tags are identical this dict will be empty
- :return: tag_keys_to_unset: a list of key names (type str) that need to be unset in AWS. If no tags need to be unset this list will be empty
- """
-
- tag_key_value_pairs_to_set = {}
- tag_keys_to_unset = []
-
- for key in current_tags_dict.keys():
- if key not in new_tags_dict and purge_tags:
- tag_keys_to_unset.append(key)
-
- for key in set(new_tags_dict.keys()) - set(tag_keys_to_unset):
- if to_text(new_tags_dict[key]) != current_tags_dict.get(key):
- tag_key_value_pairs_to_set[key] = new_tags_dict[key]
-
- return tag_key_value_pairs_to_set, tag_keys_to_unset
-
-
@AWSRetry.jittered_backoff()
def _describe_ec2_tags(client, **params):
paginator = client.get_paginator('describe_tags')
@@ -939,3 +699,46 @@
changed |= add_ec2_tags(client, module, resource_id, tags_to_set, retry_codes)
return changed
+
+
+def normalize_ec2_vpc_dhcp_config(option_config):
+ """
+ The boto2 module returned a config dict, but boto3 returns a list of dicts.
+ Make the data we return look like the old way, so we don't break users.
+ This is also much more user-friendly.
+ boto3:
+ 'DhcpConfigurations': [
+ {'Key': 'domain-name', 'Values': [{'Value': 'us-west-2.compute.internal'}]},
+ {'Key': 'domain-name-servers', 'Values': [{'Value': 'AmazonProvidedDNS'}]},
+ {'Key': 'netbios-name-servers', 'Values': [{'Value': '1.2.3.4'}, {'Value': '5.6.7.8'}]},
+ {'Key': 'netbios-node-type', 'Values': [1]},
+ {'Key': 'ntp-servers', 'Values': [{'Value': '1.2.3.4'}, {'Value': '5.6.7.8'}]}
+ ],
+ The module historically returned:
+ "new_options": {
+ "domain-name": "ec2.internal",
+ "domain-name-servers": ["AmazonProvidedDNS"],
+ "netbios-name-servers": ["10.0.0.1", "10.0.1.1"],
+ "netbios-node-type": "1",
+ "ntp-servers": ["10.0.0.2", "10.0.1.2"]
+ },
+ """
+ config_data = {}
+
+ if len(option_config) == 0:
+ # If there is no provided config, return the empty dictionary
+ return config_data
+
+ for config_item in option_config:
+ # Handle single value keys
+ if config_item['Key'] == 'netbios-node-type':
+ if isinstance(config_item['Values'], integer_types):
+ config_data['netbios-node-type'] = str(config_item['Values'])
+ elif isinstance(config_item['Values'], list):
+ config_data['netbios-node-type'] = str(config_item['Values'][0]['Value'])
+ # Handle actual lists of values
+ for option in ['domain-name', 'domain-name-servers', 'ntp-servers', 'netbios-name-servers']:
+ if config_item['Key'] == option:
+ config_data[option] = [val['Value'] for val in config_item['Values']]
+
+ return config_data
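The new `normalize_ec2_vpc_dhcp_config` helper can be exercised standalone; the sketch below repeats its logic with `integer_types` replaced by `int` so the example is self-contained (the sample values are illustrative, not from a real VPC):

```python
# Standalone copy of normalize_ec2_vpc_dhcp_config for illustration
# (integer_types swapped for int to avoid the Ansible import).
def normalize_ec2_vpc_dhcp_config(option_config):
    config_data = {}
    for config_item in option_config:
        key = config_item['Key']
        # netbios-node-type is a single value, not a list
        if key == 'netbios-node-type':
            if isinstance(config_item['Values'], int):
                config_data[key] = str(config_item['Values'])
            elif isinstance(config_item['Values'], list):
                config_data[key] = str(config_item['Values'][0]['Value'])
        # the remaining keys carry lists of values
        for option in ('domain-name', 'domain-name-servers', 'ntp-servers', 'netbios-name-servers'):
            if key == option:
                config_data[option] = [val['Value'] for val in config_item['Values']]
    return config_data

boto3_style = [
    {'Key': 'domain-name', 'Values': [{'Value': 'us-west-2.compute.internal'}]},
    {'Key': 'netbios-node-type', 'Values': [{'Value': '1'}]},
    {'Key': 'ntp-servers', 'Values': [{'Value': '1.2.3.4'}, {'Value': '5.6.7.8'}]},
]
normalized = normalize_ec2_vpc_dhcp_config(boto3_style)
```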
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/module_utils/policy.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/module_utils/policy.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/module_utils/policy.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/module_utils/policy.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,179 @@
+# This code is part of Ansible, but is an independent component.
+# This particular file snippet, and this file snippet only, is BSD licensed.
+# Modules you write using this snippet, which is embedded dynamically by Ansible
+# still belong to the author of the module, and may assign their own license
+# to the complete work.
+#
+# Copyright (c), Michael DeHaan , 2012-2013
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without modification,
+# are permitted provided that the following conditions are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright notice,
+# this list of conditions and the following disclaimer in the documentation
+# and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+from __future__ import (absolute_import, division, print_function)
+__metaclass__ = type
+
+from functools import cmp_to_key
+
+from ansible.module_utils._text import to_text
+from ansible.module_utils.six import binary_type
+from ansible.module_utils.six import string_types
+
+
+def _hashable_policy(policy, policy_list):
+ """
+ Takes a policy and returns a list, the contents of which are all hashable and sorted.
+ Example input policy:
+ {'Version': '2012-10-17',
+ 'Statement': [{'Action': 's3:PutObjectAcl',
+ 'Sid': 'AddCannedAcl2',
+ 'Resource': 'arn:aws:s3:::test_policy/*',
+ 'Effect': 'Allow',
+ 'Principal': {'AWS': ['arn:aws:iam::XXXXXXXXXXXX:user/username1', 'arn:aws:iam::XXXXXXXXXXXX:user/username2']}
+ }]}
+ Returned value:
+ [('Statement', ((('Action', (u's3:PutObjectAcl',)),
+ ('Effect', (u'Allow',)),
+ ('Principal', ('AWS', ((u'arn:aws:iam::XXXXXXXXXXXX:user/username1',), (u'arn:aws:iam::XXXXXXXXXXXX:user/username2',)))),
+ ('Resource', (u'arn:aws:s3:::test_policy/*',)), ('Sid', (u'AddCannedAcl2',)))),
+ ('Version', (u'2012-10-17',)))]
+
+ """
+ # Amazon will automatically convert bool and int to strings for us
+ if isinstance(policy, bool):
+ return tuple([str(policy).lower()])
+ elif isinstance(policy, int):
+ return tuple([str(policy)])
+
+ if isinstance(policy, list):
+ for each in policy:
+ tupleified = _hashable_policy(each, [])
+ if isinstance(tupleified, list):
+ tupleified = tuple(tupleified)
+ policy_list.append(tupleified)
+ elif isinstance(policy, string_types) or isinstance(policy, binary_type):
+ policy = to_text(policy)
+ # convert root account ARNs to just account IDs
+ if policy.startswith('arn:aws:iam::') and policy.endswith(':root'):
+ policy = policy.split(':')[4]
+ return [policy]
+ elif isinstance(policy, dict):
+ sorted_keys = list(policy.keys())
+ sorted_keys.sort()
+ for key in sorted_keys:
+ element = policy[key]
+ # Special case defined in
+ # https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html
+ if key in ["NotPrincipal", "Principal"] and policy[key] == "*":
+ element = {"AWS": "*"}
+ tupleified = _hashable_policy(element, [])
+ if isinstance(tupleified, list):
+ tupleified = tuple(tupleified)
+ policy_list.append((key, tupleified))
+
+ # ensure we aren't returning deeply nested structures of length 1
+ if len(policy_list) == 1 and isinstance(policy_list[0], tuple):
+ policy_list = policy_list[0]
+ if isinstance(policy_list, list):
+ policy_list.sort(key=cmp_to_key(_py3cmp))
+ return policy_list
+
+
+def _py3cmp(a, b):
+ """ Python 2 can sort lists of mixed types. Strings < tuples. Without this function this fails on Python 3."""
+ try:
+ if a > b:
+ return 1
+ elif a < b:
+ return -1
+ else:
+ return 0
+ except TypeError as e:
+ # check to see if they're tuple-string
+ # always say strings are less than tuples (to maintain compatibility with python2)
+ str_ind = to_text(e).find('str')
+ tup_ind = to_text(e).find('tuple')
+ if -1 not in (str_ind, tup_ind):
+ if str_ind < tup_ind:
+ return -1
+ elif tup_ind < str_ind:
+ return 1
+ raise
+
+
+def compare_policies(current_policy, new_policy, default_version="2008-10-17"):
+ """ Compares the existing policy and the updated policy
+ Returns True if there is a difference between policies.
+ """
+ # https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_version.html
+ if default_version:
+ if isinstance(current_policy, dict):
+ current_policy = current_policy.copy()
+ current_policy.setdefault("Version", default_version)
+ if isinstance(new_policy, dict):
+ new_policy = new_policy.copy()
+ new_policy.setdefault("Version", default_version)
+
+ return set(_hashable_policy(new_policy, [])) != set(_hashable_policy(current_policy, []))
+
+
+def sort_json_policy_dict(policy_dict):
+
+ """ Sort any lists in an IAM JSON policy so that comparison of two policies with identical values but
+ different orders will return true
+ Args:
+ policy_dict (dict): Dict representing IAM JSON policy.
+ Basic Usage:
+ >>> my_iam_policy = {'Principle': {'AWS': ["31", "7", "14", "101"]}}
+ >>> sort_json_policy_dict(my_iam_policy)
+ Returns:
+ Dict: Will return a copy of the policy as a Dict but any List will be sorted
+ {
+ 'Principle': {
+ 'AWS': ['101', '14', '31', '7']
+ }
+ }
+ """
+
+ def value_is_list(my_list):
+
+ checked_list = []
+ for item in my_list:
+ if isinstance(item, dict):
+ checked_list.append(sort_json_policy_dict(item))
+ elif isinstance(item, list):
+ checked_list.append(value_is_list(item))
+ else:
+ checked_list.append(item)
+
+ # Sort list. If it's a list of dictionaries, sort by tuple of key-value
+ # pairs, since Python 3 doesn't allow comparisons such as `<` between dictionaries.
+ checked_list.sort(key=lambda x: sorted(x.items()) if isinstance(x, dict) else x)
+ return checked_list
+
+ ordered_policy_dict = {}
+ for key, value in policy_dict.items():
+ if isinstance(value, dict):
+ ordered_policy_dict[key] = sort_json_policy_dict(value)
+ elif isinstance(value, list):
+ ordered_policy_dict[key] = value_is_list(value)
+ else:
+ ordered_policy_dict[key] = value
+
+ return ordered_policy_dict
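`sort_json_policy_dict` can be run outside Ansible; the standalone copy below highlights that lists of digit strings sort lexicographically (so '101' precedes '7'), not numerically:

```python
# Standalone copy of sort_json_policy_dict for illustration.
# Note: string lists sort lexicographically, so '101' < '14' < '31' < '7'.
def sort_json_policy_dict(policy_dict):
    def value_is_list(my_list):
        checked_list = []
        for item in my_list:
            if isinstance(item, dict):
                checked_list.append(sort_json_policy_dict(item))
            elif isinstance(item, list):
                checked_list.append(value_is_list(item))
            else:
                checked_list.append(item)
        # dicts compare as sorted (key, value) tuples, since Python 3
        # does not allow `<` between dictionaries
        checked_list.sort(key=lambda x: sorted(x.items()) if isinstance(x, dict) else x)
        return checked_list

    ordered_policy_dict = {}
    for key, value in policy_dict.items():
        if isinstance(value, dict):
            ordered_policy_dict[key] = sort_json_policy_dict(value)
        elif isinstance(value, list):
            ordered_policy_dict[key] = value_is_list(value)
        else:
            ordered_policy_dict[key] = value
    return ordered_policy_dict

policy = {'Principle': {'AWS': ['31', '7', '14', '101']}}
sorted_policy = sort_json_policy_dict(policy)
```

The function returns a sorted copy; the input policy is left untouched.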
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/module_utils/s3.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/module_utils/s3.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/module_utils/s3.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/module_utils/s3.py 2021-11-12 18:13:53.000000000 +0000
@@ -19,6 +19,9 @@
HAS_MD5 = False
+import string
+
+
def calculate_etag(module, filename, etag, s3, bucket, obj, version=None):
if not HAS_MD5:
return None
@@ -81,3 +84,19 @@
return '"{0}-{1}"'.format(digest_squared.hexdigest(), len(digests))
else: # Compute the MD5 sum normally
return '"{0}"'.format(md5(content).hexdigest())
+
+
+def validate_bucket_name(module, name):
+ # See: https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html
+ if len(name) < 3:
+ module.fail_json(msg='the S3 bucket name is too short (minimum 3 characters)')
+ if len(name) > 63:
+ module.fail_json(msg='the length of an S3 bucket name cannot exceed 63 characters')
+
+ legal_characters = string.ascii_lowercase + ".-" + string.digits
+ illegal_characters = [c for c in name if c not in legal_characters]
+ if illegal_characters:
+ module.fail_json(msg='invalid character(s) found in the bucket name')
+ if name[0] not in string.ascii_lowercase + string.digits or name[-1] not in string.ascii_lowercase + string.digits:
+ module.fail_json(msg='bucket names must begin and end with a letter or number')
+ return True
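A hedged sketch of the same bucket-name rules as `validate_bucket_name`, written as a plain predicate instead of failing through `module.fail_json` (the 3-to-63 character range follows the AWS naming rules linked in the code comment):

```python
# Illustrative predicate version of the S3 bucket-name checks above,
# following https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html
import string

def is_valid_bucket_name(name):
    # names must be 3-63 characters long
    if not 3 <= len(name) <= 63:
        return False
    # only lowercase letters, digits, dots, and hyphens are allowed
    legal_characters = string.ascii_lowercase + ".-" + string.digits
    if any(c not in legal_characters for c in name):
        return False
    # names must begin and end with a letter or number
    edge = string.ascii_lowercase + string.digits
    return name[0] in edge and name[-1] in edge
```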
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/module_utils/tagging.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/module_utils/tagging.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/module_utils/tagging.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/module_utils/tagging.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,175 @@
+# This code is part of Ansible, but is an independent component.
+# This particular file snippet, and this file snippet only, is BSD licensed.
+# Modules you write using this snippet, which is embedded dynamically by Ansible
+# still belong to the author of the module, and may assign their own license
+# to the complete work.
+#
+# Copyright (c), Michael DeHaan , 2012-2013
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without modification,
+# are permitted provided that the following conditions are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright notice,
+# this list of conditions and the following disclaimer in the documentation
+# and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+from __future__ import (absolute_import, division, print_function)
+__metaclass__ = type
+
+from ansible.module_utils._text import to_native
+from ansible.module_utils._text import to_text
+from ansible.module_utils.six import string_types
+
+
+def boto3_tag_list_to_ansible_dict(tags_list, tag_name_key_name=None, tag_value_key_name=None):
+
+ """ Convert a boto3 list of resource tags to a flat dict of key:value pairs
+ Args:
+ tags_list (list): List of dicts representing AWS tags.
+ tag_name_key_name (str): Value to use as the key for all tag keys (useful because boto3 doesn't always use "Key")
+ tag_value_key_name (str): Value to use as the key for all tag values (useful because boto3 doesn't always use "Value")
+ Basic Usage:
+ >>> tags_list = [{'Key': 'MyTagKey', 'Value': 'MyTagValue'}]
+ >>> boto3_tag_list_to_ansible_dict(tags_list)
+ {
+ 'MyTagKey': 'MyTagValue'
+ }
+ Returns:
+ Dict: Dict of key:value pairs representing AWS tags
+ {
+ 'MyTagKey': 'MyTagValue',
+ }
+ """
+
+ if tag_name_key_name and tag_value_key_name:
+ tag_candidates = {tag_name_key_name: tag_value_key_name}
+ else:
+ tag_candidates = {'key': 'value', 'Key': 'Value'}
+
+ # minio seems to return [{}] as an empty tags_list
+ if not tags_list or not any(tag for tag in tags_list):
+ return {}
+ for k, v in tag_candidates.items():
+ if k in tags_list[0] and v in tags_list[0]:
+ return dict((tag[k], tag[v]) for tag in tags_list)
+ raise ValueError("Couldn't find tag key (candidates %s) in tag list %s" % (str(tag_candidates), str(tags_list)))
+
+
+def ansible_dict_to_boto3_tag_list(tags_dict, tag_name_key_name='Key', tag_value_key_name='Value'):
+
+ """ Convert a flat dict of key:value pairs representing AWS resource tags to a boto3 list of dicts
+ Args:
+ tags_dict (dict): Dict representing AWS resource tags.
+ tag_name_key_name (str): Value to use as the key for all tag keys (useful because boto3 doesn't always use "Key")
+ tag_value_key_name (str): Value to use as the key for all tag values (useful because boto3 doesn't always use "Value")
+ Basic Usage:
+ >>> tags_dict = {'MyTagKey': 'MyTagValue'}
+ >>> ansible_dict_to_boto3_tag_list(tags_dict)
+            [
+                {
+                    'Key': 'MyTagKey',
+                    'Value': 'MyTagValue'
+                }
+            ]
+ Returns:
+ List: List of dicts containing tag keys and values
+ [
+ {
+ 'Key': 'MyTagKey',
+ 'Value': 'MyTagValue'
+ }
+ ]
+ """
+
+ if not tags_dict:
+ return []
+
+ tags_list = []
+ for k, v in tags_dict.items():
+ tags_list.append({tag_name_key_name: k, tag_value_key_name: to_native(v)})
+
+ return tags_list
+
+
+def boto3_tag_specifications(tags_dict, types=None):
+    """ Converts a list of resource types and a flat dictionary of key:value pairs representing AWS
+    resource tags to a boto3 TagSpecifications list.
+
+ https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_TagSpecification.html
+
+ Args:
+ tags_dict (dict): Dict representing AWS resource tags.
+        types (list): A list of resource types to be tagged.
+ Basic Usage:
+ >>> tags_dict = {'MyTagKey': 'MyTagValue'}
+ >>> boto3_tag_specifications(tags_dict, ['instance'])
+ [
+ {
+ 'ResourceType': 'instance',
+ 'Tags': [
+ {
+ 'Key': 'MyTagKey',
+ 'Value': 'MyTagValue'
+ }
+ ]
+ }
+ ]
+ Returns:
+ List: List of dictionaries representing an AWS Tag Specification
+ """
+ if not tags_dict:
+ return None
+ specifications = list()
+ tag_list = ansible_dict_to_boto3_tag_list(tags_dict)
+
+ if not types:
+ specifications.append(dict(Tags=tag_list))
+ return specifications
+
+ if isinstance(types, string_types):
+ types = [types]
+
+ for type_name in types:
+ specifications.append(dict(ResourceType=type_name, Tags=tag_list))
+
+ return specifications
+
+
+def compare_aws_tags(current_tags_dict, new_tags_dict, purge_tags=True):
+ """
+    Compare two dicts of AWS tags. Dicts are expected to have been created using the 'boto3_tag_list_to_ansible_dict' helper function.
+    Two values are returned - the first is a dict of tags to be set, the second is a list of tag keys to remove. Since the AWS APIs
+    differ, these may not be usable out of the box.
+
+ :param current_tags_dict:
+ :param new_tags_dict:
+ :param purge_tags:
+ :return: tag_key_value_pairs_to_set: a dict of key value pairs that need to be set in AWS. If all tags are identical this dict will be empty
+ :return: tag_keys_to_unset: a list of key names (type str) that need to be unset in AWS. If no tags need to be unset this list will be empty
+ """
+
+ tag_key_value_pairs_to_set = {}
+ tag_keys_to_unset = []
+
+ for key in current_tags_dict.keys():
+ if key not in new_tags_dict and purge_tags:
+ tag_keys_to_unset.append(key)
+
+ for key in set(new_tags_dict.keys()) - set(tag_keys_to_unset):
+ if to_text(new_tags_dict[key]) != current_tags_dict.get(key):
+ tag_key_value_pairs_to_set[key] = new_tags_dict[key]
+
+ return tag_key_value_pairs_to_set, tag_keys_to_unset
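The tagging helpers added above convert between boto3's tag-list format and flat dicts, and diff two tag dicts. A self-contained sketch of that logic for illustration (re-implemented locally rather than imported from the collection; the collection's versions additionally handle alternate key names and text coercion):

```python
# Standalone sketch of the tag helpers added above, simplified for clarity.

def tag_list_to_dict(tags_list):
    """Boto3 tag list -> flat dict, e.g. [{'Key': 'k', 'Value': 'v'}] -> {'k': 'v'}."""
    return {tag['Key']: tag['Value'] for tag in tags_list if tag}

def dict_to_tag_list(tags_dict):
    """Flat dict -> boto3 tag list."""
    return [{'Key': k, 'Value': str(v)} for k, v in tags_dict.items()]

def compare_tags(current, new, purge_tags=True):
    """Return (dict of tags to set, list of tag keys to unset)."""
    to_unset = [k for k in current if purge_tags and k not in new]
    to_set = {k: v for k, v in new.items() if str(v) != current.get(k)}
    return to_set, to_unset

current = {'Name': 'web-01', 'env': 'dev'}
desired = {'Name': 'web-01', 'env': 'prod', 'team': 'infra'}
to_set, to_unset = compare_tags(current, desired)
print(to_set)    # {'env': 'prod', 'team': 'infra'}
print(to_unset)  # []
```

Note that `compare_tags` returns only the delta, which is why callers can skip the tagging API calls entirely when both return values are empty.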
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/plugins/module_utils/waiters.py ansible-5.2.0/ansible_collections/amazon/aws/plugins/module_utils/waiters.py
--- ansible-4.10.0/ansible_collections/amazon/aws/plugins/module_utils/waiters.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/plugins/module_utils/waiters.py 2021-11-12 18:13:53.000000000 +0000
@@ -90,6 +90,30 @@
},
]
},
+ "NetworkInterfaceDeleted": {
+ "operation": "DescribeNetworkInterfaces",
+ "delay": 5,
+ "maxAttempts": 40,
+ "acceptors": [
+ {
+ "matcher": "path",
+ "expected": True,
+ "argument": "length(NetworkInterfaces[]) > `0`",
+ "state": "retry"
+ },
+ {
+ "matcher": "path",
+ "expected": True,
+ "argument": "length(NetworkInterfaces[]) == `0`",
+ "state": "success"
+ },
+ {
+ "expected": "InvalidNetworkInterfaceID.NotFound",
+ "matcher": "error",
+ "state": "success"
+ },
+ ]
+ },
"NetworkInterfaceDeleteOnTerminate": {
"operation": "DescribeNetworkInterfaces",
"delay": 5,
@@ -162,6 +186,19 @@
},
]
},
+ "SnapshotCompleted": {
+ "delay": 15,
+ "operation": "DescribeSnapshots",
+ "maxAttempts": 40,
+ "acceptors": [
+ {
+ "expected": "completed",
+ "matcher": "pathAll",
+ "state": "success",
+ "argument": "Snapshots[].State"
+ }
+ ]
+ },
"SubnetAvailable": {
"delay": 15,
"operation": "DescribeSubnets",
@@ -449,6 +486,98 @@
}
+elb_data = {
+ "version": 2,
+ "waiters": {
+ "AnyInstanceInService": {
+ "acceptors": [
+ {
+ "argument": "InstanceStates[].State",
+ "expected": "InService",
+ "matcher": "pathAny",
+ "state": "success"
+ }
+ ],
+ "delay": 15,
+ "maxAttempts": 40,
+ "operation": "DescribeInstanceHealth"
+ },
+ "InstanceDeregistered": {
+ "delay": 15,
+ "operation": "DescribeInstanceHealth",
+ "maxAttempts": 40,
+ "acceptors": [
+ {
+ "expected": "OutOfService",
+ "matcher": "pathAll",
+ "state": "success",
+ "argument": "InstanceStates[].State"
+ },
+ {
+ "matcher": "error",
+ "expected": "InvalidInstance",
+ "state": "success"
+ }
+ ]
+ },
+ "InstanceInService": {
+ "acceptors": [
+ {
+ "argument": "InstanceStates[].State",
+ "expected": "InService",
+ "matcher": "pathAll",
+ "state": "success"
+ },
+ {
+ "matcher": "error",
+ "expected": "InvalidInstance",
+ "state": "retry"
+ }
+ ],
+ "delay": 15,
+ "maxAttempts": 40,
+ "operation": "DescribeInstanceHealth"
+ },
+ "LoadBalancerCreated": {
+ "delay": 10,
+ "maxAttempts": 60,
+ "operation": "DescribeLoadBalancers",
+ "acceptors": [
+ {
+ "matcher": "path",
+ "expected": True,
+ "argument": "length(LoadBalancerDescriptions[]) > `0`",
+ "state": "success",
+ },
+ {
+ "matcher": "error",
+ "expected": "LoadBalancerNotFound",
+ "state": "retry",
+ },
+ ],
+ },
+ "LoadBalancerDeleted": {
+ "delay": 10,
+ "maxAttempts": 60,
+ "operation": "DescribeLoadBalancers",
+ "acceptors": [
+ {
+ "matcher": "path",
+ "expected": True,
+ "argument": "length(LoadBalancerDescriptions[]) > `0`",
+ "state": "retry",
+ },
+ {
+ "matcher": "error",
+ "expected": "LoadBalancerNotFound",
+ "state": "success",
+ },
+ ],
+ },
+ }
+}
+
+
rds_data = {
"version": 2,
"waiters": {
@@ -464,6 +593,62 @@
"expected": "stopped"
},
]
+ },
+ "DBClusterAvailable": {
+ "delay": 20,
+ "maxAttempts": 60,
+ "operation": "DescribeDBClusters",
+ "acceptors": [
+ {
+ "state": "success",
+ "matcher": "pathAll",
+ "argument": "DBClusters[].Status",
+ "expected": "available"
+ },
+ {
+ "state": "retry",
+ "matcher": "error",
+ "expected": "DBClusterNotFoundFault"
+ }
+ ]
+ },
+ "DBClusterDeleted": {
+ "delay": 20,
+ "maxAttempts": 60,
+ "operation": "DescribeDBClusters",
+ "acceptors": [
+ {
+ "state": "success",
+ "matcher": "pathAll",
+ "argument": "DBClusters[].Status",
+ "expected": "stopped"
+ },
+ {
+ "state": "success",
+ "matcher": "error",
+ "expected": "DBClusterNotFoundFault"
+ }
+ ]
+ },
+ }
+}
+
+
+route53_data = {
+ "version": 2,
+ "waiters": {
+ "ResourceRecordSetsChanged": {
+ "delay": 30,
+ "maxAttempts": 60,
+ "operation": "GetChange",
+ "acceptors": [
+ {
+ "matcher": "path",
+ "expected": "INSYNC",
+ "argument": "ChangeInfo.Status",
+ "state": "success"
+ }
+ ]
}
}
}
@@ -503,11 +688,21 @@
return eks_models.get_waiter(name)
+def elb_model(name):
+ elb_models = core_waiter.WaiterModel(waiter_config=_inject_limit_retries(elb_data))
+ return elb_models.get_waiter(name)
+
+
def rds_model(name):
rds_models = core_waiter.WaiterModel(waiter_config=_inject_limit_retries(rds_data))
return rds_models.get_waiter(name)
+def route53_model(name):
+ route53_models = core_waiter.WaiterModel(waiter_config=_inject_limit_retries(route53_data))
+ return route53_models.get_waiter(name)
+
+
waiters_by_name = {
('EC2', 'image_available'): lambda ec2: core_waiter.Waiter(
'image_available',
@@ -527,6 +722,12 @@
core_waiter.NormalizedOperationMethod(
ec2.describe_network_interfaces
)),
+ ('EC2', 'network_interface_deleted'): lambda ec2: core_waiter.Waiter(
+ 'network_interface_deleted',
+ ec2_model('NetworkInterfaceDeleted'),
+ core_waiter.NormalizedOperationMethod(
+ ec2.describe_network_interfaces
+ )),
('EC2', 'network_interface_available'): lambda ec2: core_waiter.Waiter(
'network_interface_available',
ec2_model('NetworkInterfaceAvailable'),
@@ -557,6 +758,12 @@
core_waiter.NormalizedOperationMethod(
ec2.describe_security_groups
)),
+ ('EC2', 'snapshot_completed'): lambda ec2: core_waiter.Waiter(
+ 'snapshot_completed',
+ ec2_model('SnapshotCompleted'),
+ core_waiter.NormalizedOperationMethod(
+ ec2.describe_snapshots
+ )),
('EC2', 'subnet_available'): lambda ec2: core_waiter.Waiter(
'subnet_available',
ec2_model('SubnetAvailable'),
@@ -665,12 +872,60 @@
core_waiter.NormalizedOperationMethod(
eks.describe_cluster
)),
+ ('ElasticLoadBalancing', 'any_instance_in_service'): lambda elb: core_waiter.Waiter(
+ 'any_instance_in_service',
+ elb_model('AnyInstanceInService'),
+ core_waiter.NormalizedOperationMethod(
+ elb.describe_instance_health
+ )),
+ ('ElasticLoadBalancing', 'instance_deregistered'): lambda elb: core_waiter.Waiter(
+ 'instance_deregistered',
+ elb_model('InstanceDeregistered'),
+ core_waiter.NormalizedOperationMethod(
+ elb.describe_instance_health
+ )),
+ ('ElasticLoadBalancing', 'instance_in_service'): lambda elb: core_waiter.Waiter(
+        'instance_in_service',
+ elb_model('InstanceInService'),
+ core_waiter.NormalizedOperationMethod(
+ elb.describe_instance_health
+ )),
+ ('ElasticLoadBalancing', 'load_balancer_created'): lambda elb: core_waiter.Waiter(
+ 'load_balancer_created',
+ elb_model('LoadBalancerCreated'),
+ core_waiter.NormalizedOperationMethod(
+ elb.describe_load_balancers
+ )),
+ ('ElasticLoadBalancing', 'load_balancer_deleted'): lambda elb: core_waiter.Waiter(
+ 'load_balancer_deleted',
+ elb_model('LoadBalancerDeleted'),
+ core_waiter.NormalizedOperationMethod(
+ elb.describe_load_balancers
+ )),
('RDS', 'db_instance_stopped'): lambda rds: core_waiter.Waiter(
'db_instance_stopped',
rds_model('DBInstanceStopped'),
core_waiter.NormalizedOperationMethod(
rds.describe_db_instances
)),
+ ('RDS', 'cluster_available'): lambda rds: core_waiter.Waiter(
+ 'cluster_available',
+ rds_model('DBClusterAvailable'),
+ core_waiter.NormalizedOperationMethod(
+ rds.describe_db_clusters
+ )),
+ ('RDS', 'cluster_deleted'): lambda rds: core_waiter.Waiter(
+ 'cluster_deleted',
+ rds_model('DBClusterDeleted'),
+ core_waiter.NormalizedOperationMethod(
+ rds.describe_db_clusters
+ )),
+ ('Route53', 'resource_record_sets_changed'): lambda route53: core_waiter.Waiter(
+ 'resource_record_sets_changed',
+ route53_model('ResourceRecordSetsChanged'),
+ core_waiter.NormalizedOperationMethod(
+ route53.get_change
+ )),
}
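The waiter definitions added above follow botocore's data-driven format: each waiter names an operation to poll plus a list of acceptors that map a matched response to a `success` or `retry` state. A simplified pure-Python model of that state machine (real waiters use botocore's `WaiterModel` and evaluate `argument` as a JMESPath expression; here each acceptor supplies a plain callable instead):

```python
import time

# Simplified model of a botocore-style waiter: poll an operation, evaluate
# acceptors in order, and keep polling until one reports success or the
# attempt budget runs out.

def wait(poll, acceptors, delay=0, max_attempts=40):
    for _ in range(max_attempts):
        response = poll()
        for acceptor in acceptors:
            if acceptor['matcher'](response) == acceptor['expected']:
                if acceptor['state'] == 'success':
                    return response
                break  # 'retry': stop checking acceptors, poll again
        time.sleep(delay)
    raise TimeoutError('waiter gave up after %d attempts' % max_attempts)

# Mimic the NetworkInterfaceDeleted waiter defined above: retry while the
# describe call still returns interfaces, succeed once the list is empty.
responses = iter([
    {'NetworkInterfaces': [{'Status': 'deleting'}]},
    {'NetworkInterfaces': []},
])
acceptors = [
    {'matcher': lambda r: len(r['NetworkInterfaces']) > 0,
     'expected': True, 'state': 'retry'},
    {'matcher': lambda r: len(r['NetworkInterfaces']) == 0,
     'expected': True, 'state': 'success'},
]
result = wait(lambda: next(responses), acceptors)
print(result)  # {'NetworkInterfaces': []}
```

The `error` matcher used by several acceptors above (e.g. treating `InvalidNetworkInterfaceID.NotFound` as success on delete) is the piece this sketch omits: botocore additionally inspects the error code of a failed API call, which is how "already gone" is distinguished from a genuine failure.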
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/README.md ansible-5.2.0/ansible_collections/amazon/aws/README.md
--- ansible-4.10.0/ansible_collections/amazon/aws/README.md 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/README.md 2021-11-12 18:13:53.000000000 +0000
@@ -1,7 +1,4 @@
# Amazon AWS Collection
-[![Shippable build status](https://api.shippable.com/projects/5e4451b6aa9a61000733064c/badge?branch=main)](https://api.shippable.com/projects/5e4451b6aa9a61000733064c/badge?branch=main)
-[![Codecov](https://img.shields.io/codecov/c/github/ansible-collections/amazon.aws)](https://codecov.io/gh/ansible-collections/amazon.aws)
-
The Ansible Amazon AWS collection includes a variety of Ansible content to help automate the management of AWS instances. This collection is maintained by the Ansible cloud team.
AWS related modules and plugins supported by the Ansible community are in the [community.aws](https://github.com/ansible-collections/community.aws/) collection.
@@ -18,7 +15,11 @@
## Python version compatibility
-This collection depends on the AWS SDK for Python (Boto3 and Botocore). As AWS has [ceased supporting Python 2.6](https://aws.amazon.com/blogs/developer/deprecation-of-python-2-6-and-python-3-3-in-botocore-boto3-and-the-aws-cli/), this collection requires Python 2.7 or greater.
+As the AWS SDK for Python (Boto3 and Botocore) has [ceased supporting Python 2.7](https://aws.amazon.com/blogs/developer/announcing-end-of-support-for-python-2-7-in-aws-sdk-for-python-and-aws-cli-v1/), this collection requires Python 3.6 or greater.
+
+Starting with the 2.0.0 releases of amazon.aws and community.aws, it is generally the collections' policy to support the versions of `botocore` and `boto3` released in the 12 months prior to the most recent major collection release, following semantic versioning (for example, 2.0.0, 3.0.0).
+
+Version 2.0.0 of this collection supports `boto3 >= 1.15.0` and `botocore >= 1.18.0`.
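A quick local check of whether the installed SDK meets these minimums can be done with a plain tuple comparison; this is an illustrative sketch, not part of the collection (it assumes simple dotted versions without pre-release suffixes):

```python
# Compare installed SDK versions against the collection's documented minimums
# by splitting the dotted version string into a tuple of ints.

MINIMUMS = {'boto3': '1.15.0', 'botocore': '1.18.0'}

def version_tuple(version):
    return tuple(int(part) for part in version.split('.')[:3])

def meets_minimum(installed, minimum):
    return version_tuple(installed) >= version_tuple(minimum)

print(meets_minimum('1.20.5', MINIMUMS['boto3']))     # True
print(meets_minimum('1.17.9', MINIMUMS['botocore']))  # False (1.17 < 1.18)
```

In practice the collection's modules perform an equivalent check at runtime and fail with a descriptive message when the SDK is too old.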
## Included content
@@ -26,48 +27,61 @@
### Inventory plugins
Name | Description
--- | ---
-[amazon.aws.aws_ec2](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.aws_ec2_inventory.rst)|EC2 inventory source
-[amazon.aws.aws_rds](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.aws_rds_inventory.rst)|rds instance source
+[amazon.aws.aws_ec2](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.aws_ec2_inventory.rst)|EC2 inventory source
+[amazon.aws.aws_rds](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.aws_rds_inventory.rst)|rds instance source
### Lookup plugins
Name | Description
--- | ---
-[amazon.aws.aws_account_attribute](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.aws_account_attribute_lookup.rst)|Look up AWS account attributes.
-[amazon.aws.aws_secret](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.aws_secret_lookup.rst)|Look up secrets stored in AWS Secrets Manager.
-[amazon.aws.aws_service_ip_ranges](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.aws_service_ip_ranges_lookup.rst)|Look up the IP ranges for services provided in AWS such as EC2 and S3.
-[amazon.aws.aws_ssm](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.aws_ssm_lookup.rst)|Get the value for a SSM parameter or all parameters under a path.
+[amazon.aws.aws_account_attribute](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.aws_account_attribute_lookup.rst)|Look up AWS account attributes.
+[amazon.aws.aws_secret](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.aws_secret_lookup.rst)|Look up secrets stored in AWS Secrets Manager.
+[amazon.aws.aws_service_ip_ranges](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.aws_service_ip_ranges_lookup.rst)|Look up the IP ranges for services provided in AWS such as EC2 and S3.
+[amazon.aws.aws_ssm](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.aws_ssm_lookup.rst)|Get the value for a SSM parameter or all parameters under a path.
### Modules
Name | Description
--- | ---
-[amazon.aws.aws_az_info](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.aws_az_info_module.rst)|Gather information about availability zones in AWS.
-[amazon.aws.aws_caller_info](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.aws_caller_info_module.rst)|Get information about the user and account being used to make AWS calls.
-[amazon.aws.aws_s3](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.aws_s3_module.rst)|manage objects in S3.
-[amazon.aws.cloudformation](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.cloudformation_module.rst)|Create or delete an AWS CloudFormation stack
-[amazon.aws.cloudformation_info](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.cloudformation_info_module.rst)|Obtain information about an AWS CloudFormation stack
-[amazon.aws.ec2](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.ec2_module.rst)|create, terminate, start or stop an instance in ec2
-[amazon.aws.ec2_ami](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.ec2_ami_module.rst)|Create or destroy an image (AMI) in ec2
-[amazon.aws.ec2_ami_info](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.ec2_ami_info_module.rst)|Gather information about ec2 AMIs
-[amazon.aws.ec2_elb_lb](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.ec2_elb_lb_module.rst)|Creates, updates or destroys an Amazon ELB.
-[amazon.aws.ec2_eni](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.ec2_eni_module.rst)|Create and optionally attach an Elastic Network Interface (ENI) to an instance
-[amazon.aws.ec2_eni_info](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.ec2_eni_info_module.rst)|Gather information about ec2 ENI interfaces in AWS
-[amazon.aws.ec2_group](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.ec2_group_module.rst)|maintain an ec2 VPC security group.
-[amazon.aws.ec2_group_info](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.ec2_group_info_module.rst)|Gather information about ec2 security groups in AWS.
-[amazon.aws.ec2_key](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.ec2_key_module.rst)|create or delete an ec2 key pair
-[amazon.aws.ec2_metadata_facts](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.ec2_metadata_facts_module.rst)|gathers facts (instance metadata) about remote hosts within EC2
-[amazon.aws.ec2_snapshot](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.ec2_snapshot_module.rst)|Creates a snapshot from an existing volume
-[amazon.aws.ec2_snapshot_info](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.ec2_snapshot_info_module.rst)|Gather information about ec2 volume snapshots in AWS
-[amazon.aws.ec2_tag](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.ec2_tag_module.rst)|create and remove tags on ec2 resources
-[amazon.aws.ec2_tag_info](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.ec2_tag_info_module.rst)|list tags on ec2 resources
-[amazon.aws.ec2_vol](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.ec2_vol_module.rst)|Create and attach a volume, return volume id and device map
-[amazon.aws.ec2_vol_info](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.ec2_vol_info_module.rst)|Gather information about ec2 volumes in AWS
-[amazon.aws.ec2_vpc_dhcp_option](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.ec2_vpc_dhcp_option_module.rst)|Manages DHCP Options, and can ensure the DHCP options for the given VPC match what's requested
-[amazon.aws.ec2_vpc_dhcp_option_info](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.ec2_vpc_dhcp_option_info_module.rst)|Gather information about dhcp options sets in AWS
-[amazon.aws.ec2_vpc_net](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.ec2_vpc_net_module.rst)|Configure AWS virtual private clouds
-[amazon.aws.ec2_vpc_net_info](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.ec2_vpc_net_info_module.rst)|Gather information about ec2 VPCs in AWS
-[amazon.aws.ec2_vpc_subnet](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.ec2_vpc_subnet_module.rst)|Manage subnets in AWS virtual private clouds
-[amazon.aws.ec2_vpc_subnet_info](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.ec2_vpc_subnet_info_module.rst)|Gather information about ec2 VPC subnets in AWS
-[amazon.aws.s3_bucket](https://github.com/ansible-collections/amazon.aws/blob/stable-1.5/docs/amazon.aws.s3_bucket_module.rst)|Manage S3 buckets in AWS, DigitalOcean, Ceph, Walrus, FakeS3 and StorageGRID
+[amazon.aws.aws_az_info](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.aws_az_info_module.rst)|Gather information about availability zones in AWS.
+[amazon.aws.aws_caller_info](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.aws_caller_info_module.rst)|Get information about the user and account being used to make AWS calls.
+[amazon.aws.aws_s3](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.aws_s3_module.rst)|manage objects in S3.
+[amazon.aws.cloudformation](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.cloudformation_module.rst)|Create or delete an AWS CloudFormation stack
+[amazon.aws.cloudformation_info](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.cloudformation_info_module.rst)|Obtain information about an AWS CloudFormation stack
+[amazon.aws.ec2](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_module.rst)|create, terminate, start or stop an instance in ec2
+[amazon.aws.ec2_ami](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_ami_module.rst)|Create or destroy an image (AMI) in ec2
+[amazon.aws.ec2_ami_info](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_ami_info_module.rst)|Gather information about ec2 AMIs
+[amazon.aws.ec2_eni](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_eni_module.rst)|Create and optionally attach an Elastic Network Interface (ENI) to an instance
+[amazon.aws.ec2_eni_info](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_eni_info_module.rst)|Gather information about ec2 ENI interfaces in AWS
+[amazon.aws.ec2_group](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_group_module.rst)|maintain an ec2 VPC security group.
+[amazon.aws.ec2_group_info](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_group_info_module.rst)|Gather information about ec2 security groups in AWS.
+[amazon.aws.ec2_instance](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_instance_module.rst)|Create & manage EC2 instances
+[amazon.aws.ec2_instance_info](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_instance_info_module.rst)|Gather information about ec2 instances in AWS
+[amazon.aws.ec2_key](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_key_module.rst)|create or delete an ec2 key pair
+[amazon.aws.ec2_metadata_facts](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_metadata_facts_module.rst)|gathers facts (instance metadata) about remote hosts within EC2
+[amazon.aws.ec2_snapshot](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_snapshot_module.rst)|Creates a snapshot from an existing volume
+[amazon.aws.ec2_snapshot_info](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_snapshot_info_module.rst)|Gather information about ec2 volume snapshots in AWS
+[amazon.aws.ec2_spot_instance](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_spot_instance_module.rst)|request, stop, reboot or cancel spot instance
+[amazon.aws.ec2_spot_instance_info](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_spot_instance_info_module.rst)|Gather information about ec2 spot instance requests
+[amazon.aws.ec2_tag](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_tag_module.rst)|create and remove tags on ec2 resources
+[amazon.aws.ec2_tag_info](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_tag_info_module.rst)|list tags on ec2 resources
+[amazon.aws.ec2_vol](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_vol_module.rst)|Create and attach a volume, return volume id and device map
+[amazon.aws.ec2_vol_info](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_vol_info_module.rst)|Gather information about ec2 volumes in AWS
+[amazon.aws.ec2_vpc_dhcp_option](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_vpc_dhcp_option_module.rst)|Manages DHCP Options, and can ensure the DHCP options for the given VPC match what's requested
+[amazon.aws.ec2_vpc_dhcp_option_info](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_vpc_dhcp_option_info_module.rst)|Gather information about dhcp options sets in AWS
+[amazon.aws.ec2_vpc_endpoint](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_vpc_endpoint_module.rst)|Create and delete AWS VPC Endpoints.
+[amazon.aws.ec2_vpc_endpoint_info](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_vpc_endpoint_info_module.rst)|Retrieves AWS VPC endpoints details using AWS methods.
+[amazon.aws.ec2_vpc_endpoint_service_info](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_vpc_endpoint_service_info_module.rst)|retrieves AWS VPC endpoint service details
+[amazon.aws.ec2_vpc_igw](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_vpc_igw_module.rst)|Manage an AWS VPC Internet gateway
+[amazon.aws.ec2_vpc_igw_info](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_vpc_igw_info_module.rst)|Gather information about internet gateways in AWS
+[amazon.aws.ec2_vpc_nat_gateway](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_vpc_nat_gateway_module.rst)|Manage AWS VPC NAT Gateways.
+[amazon.aws.ec2_vpc_nat_gateway_info](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_vpc_nat_gateway_info_module.rst)|Retrieves AWS VPC Managed Nat Gateway details using AWS methods.
+[amazon.aws.ec2_vpc_net](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_vpc_net_module.rst)|Configure AWS virtual private clouds
+[amazon.aws.ec2_vpc_net_info](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_vpc_net_info_module.rst)|Gather information about ec2 VPCs in AWS
+[amazon.aws.ec2_vpc_route_table](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_vpc_route_table_module.rst)|Manage route tables for AWS virtual private clouds
+[amazon.aws.ec2_vpc_route_table_info](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_vpc_route_table_info_module.rst)|Gather information about ec2 VPC route tables in AWS
+[amazon.aws.ec2_vpc_subnet](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_vpc_subnet_module.rst)|Manage subnets in AWS virtual private clouds
+[amazon.aws.ec2_vpc_subnet_info](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.ec2_vpc_subnet_info_module.rst)|Gather information about ec2 VPC subnets in AWS
+[amazon.aws.elb_classic_lb](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.elb_classic_lb_module.rst)|creates, updates or destroys an Amazon ELB.
+[amazon.aws.s3_bucket](https://github.com/ansible-collections/amazon.aws/blob/main/docs/amazon.aws.s3_bucket_module.rst)|Manage S3 buckets in AWS, DigitalOcean, Ceph, Walrus, FakeS3 and StorageGRID
@@ -129,7 +143,7 @@
You can also join us on:
-- Freenode IRC - ``#ansible-aws`` Freenode channel
+- IRC - the ``#ansible-aws`` [irc.libera.chat](https://libera.chat/) channel
### More information about contributing
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/requirements.txt ansible-5.2.0/ansible_collections/amazon/aws/requirements.txt
--- ansible-4.10.0/ansible_collections/amazon/aws/requirements.txt 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/requirements.txt 2021-11-12 18:13:53.000000000 +0000
@@ -1,3 +1,8 @@
+# When updating the minimal requirements please also update
+# - tests/unit/constraints.txt
+# - tests/integration/constraints.txt
+# - tests/integration/targets/setup_botocore_pip
+botocore>=1.18.0
+boto3>=1.15.0
+# Final released version
boto>=2.49.0
-botocore>=1.12.249
-boto3>=1.9.249
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/shippable.yml ansible-5.2.0/ansible_collections/amazon/aws/shippable.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/shippable.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/shippable.yml 1970-01-01 00:00:00.000000000 +0000
@@ -1,26 +0,0 @@
-language: python
-
-env:
- matrix:
- - T=none
-
-matrix:
- include:
- - env: T=none
-branches:
- except:
- - "*-patch-*"
- - "revert-*-*"
-
-build:
- ci:
- - tests/utils/shippable/timing.sh tests/utils/shippable/shippable.sh $T
-
-integrations:
- notifications:
- - integrationName: email
- type: email
- on_success: never
- on_failure: never
- on_start: never
- on_pull_request: never
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/test-requirements.txt ansible-5.2.0/ansible_collections/amazon/aws/test-requirements.txt
--- ansible-4.10.0/ansible_collections/amazon/aws/test-requirements.txt 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/test-requirements.txt 2021-11-12 18:13:53.000000000 +0000
@@ -1,6 +1,16 @@
+botocore
+boto3
+boto
+
coverage==4.5.4
placebo
mock
pytest-xdist
# We should avoid these two modules with py3
pytest-mock
+# Needed for ansible.netcommon.ipaddr in tests
+netaddr
+# Sometimes needed when the modules don't yet support a feature the tests require
+awscli
+# Used for comparing SSH Public keys to the Amazon fingerprints
+pycrypto
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/config.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/config.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/config.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/config.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,2 @@
+modules:
+ python_requires: '>=3.6'
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/constraints.txt ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/constraints.txt
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/constraints.txt 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/constraints.txt 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,7 @@
+# Specifically run tests against the oldest versions that we support
+boto3==1.15.0
+botocore==1.18.0
+
+# The AWS CLI pins an exact `botocore==` dependency; install the awscli release that
+# matches the botocore pinned above, to avoid downloading over a year's worth of awscli wheels.
+awscli==1.18.141
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/requirements.txt ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/requirements.txt
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/requirements.txt 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/requirements.txt 2021-11-12 18:13:53.000000000 +0000
@@ -1,3 +1,12 @@
+# Our code is based on the AWS SDKs
+boto
+boto3
+botocore
+
# netaddr is needed for ansible.netcommon.ipv6
netaddr
virtualenv
+# Sometimes needed when the modules don't yet support a feature the tests require
+awscli
+# Used for comparing SSH Public keys to the Amazon fingerprints
+pycrypto
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/aws_az_info/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/aws_az_info/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/aws_az_info/aliases 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/aws_az_info/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -1,2 +1 @@
cloud/aws
-shippable/aws/group3
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/aws_az_info/tasks/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/aws_az_info/tasks/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/aws_az_info/tasks/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/aws_az_info/tasks/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -1,26 +1,193 @@
-- set_fact:
- virtualenv: "{{ remote_tmp_dir }}/virtualenv"
- virtualenv_command: "{{ ansible_python_interpreter }} -m virtualenv"
-
-- set_fact:
- virtualenv_interpreter: "{{ virtualenv }}/bin/python"
-
-- pip:
- name: virtualenv
-
-- pip:
- name:
- - 'botocore>=1.13.0'
- - boto3
- - coverage<5
- virtualenv: "{{ virtualenv }}"
- virtualenv_command: "{{ virtualenv_command }}"
- virtualenv_site_packages: no
-
-- include_tasks: tests.yml
- vars:
- ansible_python_interpreter: "{{ virtualenv_interpreter }}"
-
-- file:
- path: "{{ virtualenv }}"
- state: absent
+---
+- module_defaults:
+ group/aws:
+ aws_access_key: '{{ aws_access_key | default(omit) }}'
+ aws_secret_key: '{{ aws_secret_key | default(omit) }}'
+ security_token: '{{ security_token | default(omit) }}'
+ region: '{{ aws_region | default(omit) }}'
+
+ block:
+ - name: 'List available AZs in current Region'
+ aws_az_info:
+ register: region_azs
+
+ - name: check task return attributes
+ vars:
+ first_az: '{{ region_azs.availability_zones[0] }}'
+ assert:
+ that:
+ - region_azs is successful
+ - '"availability_zones" in region_azs'
+ - '"group_name" in first_az'
+ - '"messages" in first_az'
+ - '"network_border_group" in first_az'
+ - '"opt_in_status" in first_az'
+ - '"region_name" in first_az'
+ - '"state" in first_az'
+ - '"zone_id" in first_az'
+ - '"zone_name" in first_az'
+ # botocore >= 1.17.18
+ #- '"zone_type" in first_az'
+
+ - name: 'List available AZs in current Region - check_mode'
+ aws_az_info:
+ check_mode: yes
+ register: check_azs
+
+ - name: check task return attributes
+ vars:
+ first_az: '{{ check_azs.availability_zones[0] }}'
+ assert:
+ that:
+ - check_azs is successful
+ - '"availability_zones" in check_azs'
+ - '"group_name" in first_az'
+ - '"messages" in first_az'
+ - '"network_border_group" in first_az'
+ - '"opt_in_status" in first_az'
+ - '"region_name" in first_az'
+ - '"state" in first_az'
+ - '"zone_id" in first_az'
+ - '"zone_name" in first_az'
+ # botocore >= 1.17.18
+ #- '"zone_type" in first_az'
+
+
+ # Be specific - aws_region isn't guaranteed to be any specific value
+ - name: 'List Available AZs in us-east-1'
+ aws_az_info:
+ region: 'us-east-1'
+ register: us_east_1
+
+ - name: 'Check that an AZ from us-east-1 has valid looking attributes'
+ vars:
+ first_az: '{{ us_east_1.availability_zones[0] }}'
+ assert:
+ that:
+ - us_east_1 is successful
+ - '"availability_zones" in us_east_1'
+ - '"group_name" in first_az'
+ - '"messages" in first_az'
+ - '"network_border_group" in first_az'
+ - '"opt_in_status" in first_az'
+ - '"region_name" in first_az'
+ - '"state" in first_az'
+ - '"zone_id" in first_az'
+ - '"zone_name" in first_az'
+ # botocore >= 1.17.18
+ #- '"zone_type" in first_az'
+ - first_az.group_name.startswith('us-east-1')
+ - first_az.network_border_group.startswith('us-east-1')
+ - first_az.region_name == 'us-east-1'
+ - first_az.zone_id.startswith('use1-az')
+ - not first_az.zone_id == "use1-az"
+ - first_az.zone_name.startswith('us-east-1')
+ - not first_az.zone_name == 'us-east-1'
+ # botocore >= 1.17.18
+ #- first_az.zone_type == 'availability-zone'
+
+ - name: 'Filter Available AZs in us-west-2 using - ("zone-name")'
+ aws_az_info:
+ region: 'us-west-2'
+ filters:
+ zone-name: 'us-west-2c'
+ register: us_west_2
+
+ - name: 'Check that an AZ from us-west-2 has attributes we expect'
+ vars:
+ first_az: '{{ us_west_2.availability_zones[0] }}'
+ assert:
+ that:
+ - us_west_2 is successful
+ - '"availability_zones" in us_west_2'
+ - us_west_2.availability_zones | length == 1
+ - '"group_name" in first_az'
+ - '"messages" in first_az'
+ - '"network_border_group" in first_az'
+ - '"opt_in_status" in first_az'
+ - '"region_name" in first_az'
+ - '"state" in first_az'
+ - '"zone_id" in first_az'
+ - '"zone_name" in first_az'
+ # botocore >= 1.17.18
+ #- '"zone_type" in first_az'
+ - first_az.group_name == 'us-west-2'
+ - first_az.network_border_group == 'us-west-2'
+ - first_az.region_name == 'us-west-2'
+ # AZs are mapped to the 'real' AZs on a per-account basis
+ - first_az.zone_id.startswith('usw2-az')
+ - not first_az.zone_id == 'usw2-az'
+ - first_az.zone_name == 'us-west-2c'
+ # botocore >= 1.17.18
+ #- first_az.zone_type == 'availability-zone'
+
+ - name: 'Filter Available AZs in eu-central-1 using _ ("zone_name")'
+ aws_az_info:
+ region: 'eu-central-1'
+ filters:
+ zone_name: 'eu-central-1b'
+ register: eu_central_1
+
+ - name: 'Check that eu-central-1b has the attributes we expect'
+ vars:
+ first_az: '{{ eu_central_1.availability_zones[0] }}'
+ assert:
+ that:
+ - eu_central_1 is successful
+ - '"availability_zones" in eu_central_1'
+ - eu_central_1.availability_zones | length == 1
+ - '"group_name" in first_az'
+ - '"messages" in first_az'
+ - '"network_border_group" in first_az'
+ - '"opt_in_status" in first_az'
+ - '"region_name" in first_az'
+ - '"state" in first_az'
+ - '"zone_id" in first_az'
+ - '"zone_name" in first_az'
+ # botocore >= 1.17.18
+ #- '"zone_type" in first_az'
+ - first_az.group_name == 'eu-central-1'
+ - first_az.network_border_group == 'eu-central-1'
+ - first_az.region_name == 'eu-central-1'
+ # AZs are mapped to the 'real' AZs on a per-account basis
+ - first_az.zone_id.startswith('euc1-az')
+ - not first_az.zone_id == "euc1-az"
+ - first_az.zone_name == 'eu-central-1b'
+ # botocore >= 1.17.18
+ #- first_az.zone_type == 'availability-zone'
+
+  - name: 'Filter Available AZs in eu-west-2 using _ and - ("zone_name" and "zone-name"): _ wins'
+ aws_az_info:
+ region: 'eu-west-2'
+ filters:
+ zone-name: 'eu-west-2a'
+ zone_name: 'eu-west-2c'
+ register: eu_west_2
+
+ - name: 'Check that we get the AZ specified by zone_name rather than zone-name'
+ vars:
+ first_az: '{{ eu_west_2.availability_zones[0] }}'
+ assert:
+ that:
+ - eu_west_2 is successful
+ - '"availability_zones" in eu_west_2'
+ - eu_west_2.availability_zones | length == 1
+ - '"group_name" in first_az'
+ - '"messages" in first_az'
+ - '"network_border_group" in first_az'
+ - '"opt_in_status" in first_az'
+ - '"region_name" in first_az'
+ - '"state" in first_az'
+ - '"zone_id" in first_az'
+ - '"zone_name" in first_az'
+ # botocore >= 1.17.18
+ #- '"zone_type" in first_az'
+ - first_az.group_name == 'eu-west-2'
+ - first_az.network_border_group == 'eu-west-2'
+ - first_az.region_name == 'eu-west-2'
+ # AZs are mapped to the 'real' AZs on a per-account basis
+ - first_az.zone_id.startswith('euw2-az')
+ - not first_az.zone_id == "euw2-az"
+ - first_az.zone_name == 'eu-west-2c'
+ # botocore >= 1.17.18
+ #- first_az.zone_type == 'availability-zone'
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/aws_az_info/tasks/tests.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/aws_az_info/tasks/tests.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/aws_az_info/tasks/tests.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/aws_az_info/tasks/tests.yml 1970-01-01 00:00:00.000000000 +0000
@@ -1,183 +0,0 @@
----
-- module_defaults:
- group/aws:
- aws_access_key: '{{ aws_access_key | default(omit) }}'
- aws_secret_key: '{{ aws_secret_key | default(omit) }}'
- security_token: '{{ security_token | default(omit) }}'
- region: '{{ aws_region | default(omit) }}'
-
- block:
- - name: 'List available AZs in current Region'
- aws_az_info:
- register: region_azs
-
- - name: check task return attributes
- vars:
- first_az: '{{ region_azs.availability_zones[0] }}'
- assert:
- that:
- - region_azs is successful
- - '"availability_zones" in region_azs'
- - '"group_name" in first_az'
- - '"messages" in first_az'
- - '"network_border_group" in first_az'
- - '"opt_in_status" in first_az'
- - '"region_name" in first_az'
- - '"state" in first_az'
- - '"zone_id" in first_az'
- - '"zone_name" in first_az'
- - '"zone_type" in first_az'
-
- - name: 'List available AZs in current Region - check_mode'
- aws_az_info:
- check_mode: yes
- register: check_azs
-
- - name: check task return attributes
- vars:
- first_az: '{{ check_azs.availability_zones[0] }}'
- assert:
- that:
- - check_azs is successful
- - '"availability_zones" in check_azs'
- - '"group_name" in first_az'
- - '"messages" in first_az'
- - '"network_border_group" in first_az'
- - '"opt_in_status" in first_az'
- - '"region_name" in first_az'
- - '"state" in first_az'
- - '"zone_id" in first_az'
- - '"zone_name" in first_az'
- - '"zone_type" in first_az'
-
-
- # Be specific - aws_region isn't guaranteed to be any specific value
- - name: 'List Available AZs in us-east-1'
- aws_az_info:
- region: 'us-east-1'
- register: us_east_1
-
- - name: 'Check that an AZ from us-east-1 has valid looking attributes'
- vars:
- first_az: '{{ us_east_1.availability_zones[0] }}'
- assert:
- that:
- - us_east_1 is successful
- - '"availability_zones" in us_east_1'
- - '"group_name" in first_az'
- - '"messages" in first_az'
- - '"network_border_group" in first_az'
- - '"opt_in_status" in first_az'
- - '"region_name" in first_az'
- - '"state" in first_az'
- - '"zone_id" in first_az'
- - '"zone_name" in first_az'
- - '"zone_type" in first_az'
- - first_az.group_name.startswith('us-east-1')
- - first_az.network_border_group.startswith('us-east-1')
- - first_az.region_name == 'us-east-1'
- - first_az.zone_id.startswith('use1-az')
- - not first_az.zone_id == "use1-az"
- - first_az.zone_name.startswith('us-east-1')
- - not first_az.zone_name == 'us-east-1'
- - first_az.zone_type == 'availability-zone'
-
- - name: 'Filter Available AZs in us-west-2 using - ("zone-name")'
- aws_az_info:
- region: 'us-west-2'
- filters:
- zone-name: 'us-west-2c'
- register: us_west_2
-
- - name: 'Check that an AZ from us-west-2 has attributes we expect'
- vars:
- first_az: '{{ us_west_2.availability_zones[0] }}'
- assert:
- that:
- - us_west_2 is successful
- - '"availability_zones" in us_west_2'
- - us_west_2.availability_zones | length == 1
- - '"group_name" in first_az'
- - '"messages" in first_az'
- - '"network_border_group" in first_az'
- - '"opt_in_status" in first_az'
- - '"region_name" in first_az'
- - '"state" in first_az'
- - '"zone_id" in first_az'
- - '"zone_name" in first_az'
- - '"zone_type" in first_az'
- - first_az.group_name == 'us-west-2'
- - first_az.network_border_group == 'us-west-2'
- - first_az.region_name == 'us-west-2'
- # AZs are mapped to the 'real' AZs on a per-account basis
- - first_az.zone_id.startswith('usw2-az')
- - not first_az.zone_id == 'usw2-az'
- - first_az.zone_name == 'us-west-2c'
- - first_az.zone_type == 'availability-zone'
-
- - name: 'Filter Available AZs in eu-central-1 using _ ("zone_name")'
- aws_az_info:
- region: 'eu-central-1'
- filters:
- zone_name: 'eu-central-1b'
- register: eu_central_1
-
- - name: 'Check that eu-central-1b has the attributes we expect'
- vars:
- first_az: '{{ eu_central_1.availability_zones[0] }}'
- assert:
- that:
- - eu_central_1 is successful
- - '"availability_zones" in eu_central_1'
- - eu_central_1.availability_zones | length == 1
- - '"group_name" in first_az'
- - '"messages" in first_az'
- - '"network_border_group" in first_az'
- - '"opt_in_status" in first_az'
- - '"region_name" in first_az'
- - '"state" in first_az'
- - '"zone_id" in first_az'
- - '"zone_name" in first_az'
- - '"zone_type" in first_az'
- - first_az.group_name == 'eu-central-1'
- - first_az.network_border_group == 'eu-central-1'
- - first_az.region_name == 'eu-central-1'
- # AZs are mapped to the 'real' AZs on a per-account basis
- - first_az.zone_id.startswith('euc1-az')
- - not first_az.zone_id == "euc1-az"
- - first_az.zone_name == 'eu-central-1b'
- - first_az.zone_type == 'availability-zone'
-
- - name: 'Filter Available AZs in eu-west-2 using _ and - ("zone_name" and "zone-name") : _ wins '
- aws_az_info:
- region: 'eu-west-2'
- filters:
- zone-name: 'eu-west-2a'
- zone_name: 'eu-west-2c'
- register: eu_west_2
-
- - name: 'Check that we get the AZ specified by zone_name rather than zone-name'
- vars:
- first_az: '{{ eu_west_2.availability_zones[0] }}'
- assert:
- that:
- - eu_west_2 is successful
- - '"availability_zones" in eu_west_2'
- - eu_west_2.availability_zones | length == 1
- - '"group_name" in first_az'
- - '"messages" in first_az'
- - '"network_border_group" in first_az'
- - '"opt_in_status" in first_az'
- - '"region_name" in first_az'
- - '"state" in first_az'
- - '"zone_id" in first_az'
- - '"zone_name" in first_az'
- - '"zone_type" in first_az'
- - first_az.group_name == 'eu-west-2'
- - first_az.network_border_group == 'eu-west-2'
- - first_az.region_name == 'eu-west-2'
- # AZs are mapped to the 'real' AZs on a per-account basis
- - first_az.zone_id.startswith('euw2-az')
- - not first_az.zone_id == "euw2-az"
- - first_az.zone_name == 'eu-west-2c'
- - first_az.zone_type == 'availability-zone'
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/aws_caller_info/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/aws_caller_info/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/aws_caller_info/aliases 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/aws_caller_info/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -1,2 +1 @@
cloud/aws
-shippable/aws/group2
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/aws_s3/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/aws_s3/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/aws_s3/aliases 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/aws_s3/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -1,2 +1 @@
cloud/aws
-shippable/aws/group4
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/aws_s3/defaults/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/aws_s3/defaults/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/aws_s3/defaults/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/aws_s3/defaults/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -1,3 +1,5 @@
---
# defaults file for s3
-bucket_name: '{{resource_prefix}}'
+bucket_name: '{{ resource_prefix | hash("md5") }}'
+bucket_name_acl: "{{ bucket_name + '-with-acl' }}"
+bucket_name_with_dot: "{{ bucket_name + '.bucket' }}"
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/aws_s3/tasks/copy_object.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/aws_s3/tasks/copy_object.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/aws_s3/tasks/copy_object.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/aws_s3/tasks/copy_object.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,119 @@
+- block:
+ - name: define bucket name used for tests
+ set_fact:
+ copy_bucket:
+ src: "{{ bucket_name }}-copysrc"
+ dst: "{{ bucket_name }}-copydst"
+
+ - name: create bucket source
+ aws_s3:
+ bucket: "{{ copy_bucket.src }}"
+ mode: create
+
+ - name: Create content
+ set_fact:
+ content: "{{ lookup('password', '/dev/null chars=ascii_letters,digits,hexdigits,punctuation') }}"
+
+  - name: Put content in the source bucket
+ aws_s3:
+ bucket: "{{ copy_bucket.src }}"
+ mode: put
+ content: "{{ content }}"
+ object: source.txt
+ tags:
+ ansible_release: '2.0.0'
+ ansible_team: cloud
+ retries: 3
+ delay: 3
+ register: put_result
+ until: "put_result.msg == 'PUT operation complete'"
+
+  - name: Copy the content of the source bucket into the dest bucket
+ aws_s3:
+ bucket: "{{ copy_bucket.dst }}"
+ mode: copy
+ object: destination.txt
+ copy_src:
+ bucket: "{{ copy_bucket.src }}"
+ object: source.txt
+
+ - name: Get the content copied into {{ copy_bucket.dst }}
+ aws_s3:
+ bucket: "{{ copy_bucket.dst }}"
+ mode: getstr
+ object: destination.txt
+ register: copy_content
+
+  - name: assert that the content matches the source
+ assert:
+ that:
+ - content == copy_content.contents
+
+ - name: Get the download url for object copied into {{ copy_bucket.dst }}
+ aws_s3:
+ bucket: "{{ copy_bucket.dst }}"
+ mode: geturl
+ object: destination.txt
+ register: copy_url
+
+ - name: assert that tags are the same in the destination bucket
+ assert:
+ that:
+ - put_result.tags == copy_url.tags
+
+  - name: Copy the same content from the source bucket into the dest bucket (idempotency)
+ aws_s3:
+ bucket: "{{ copy_bucket.dst }}"
+ mode: copy
+ object: destination.txt
+ copy_src:
+ bucket: "{{ copy_bucket.src }}"
+ object: source.txt
+ register: copy_idempotency
+
+ - name: assert that no change was made
+ assert:
+ that:
+ - copy_idempotency is not changed
+ - "copy_idempotency.msg == 'ETag from source and destination are the same'"
+
+ - name: Copy object with tags
+ aws_s3:
+ bucket: "{{ copy_bucket.dst }}"
+ mode: copy
+ object: destination.txt
+ tags:
+ ansible_release: "2.0.1"
+ copy_src:
+ bucket: "{{ copy_bucket.src }}"
+ object: source.txt
+ register: copy_result
+
+ - name: assert that tags were updated
+ assert:
+ that:
+ - copy_result is changed
+ - copy_result.tags['ansible_release'] == '2.0.1'
+
+ - name: Copy object with tags (idempotency)
+ aws_s3:
+ bucket: "{{ copy_bucket.dst }}"
+ mode: copy
+ object: destination.txt
+ tags:
+ ansible_release: "2.0.1"
+ copy_src:
+ bucket: "{{ copy_bucket.src }}"
+ object: source.txt
+ register: copy_result
+
+ - name: assert that no change was made
+ assert:
+ that:
+ - copy_result is not changed
+
+ always:
+ - include_tasks: delete_bucket.yml
+ with_items:
+ - "{{ copy_bucket.dst }}"
+ - "{{ copy_bucket.src }}"
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/aws_s3/tasks/delete_bucket.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/aws_s3/tasks/delete_bucket.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/aws_s3/tasks/delete_bucket.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/aws_s3/tasks/delete_bucket.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,24 @@
+- name: delete bucket at the end of the integration tests
+ block:
+    - name: list bucket objects
+ aws_s3:
+ bucket: "{{ item }}"
+ mode: list
+ register: objects
+ ignore_errors: true
+
+ - name: remove objects from bucket
+ aws_s3:
+ bucket: "{{ item }}"
+ mode: delobj
+ object: "{{ obj }}"
+ with_items: "{{ objects.s3_keys }}"
+ loop_control:
+ loop_var: obj
+ ignore_errors: true
+
+ - name: delete the bucket
+ aws_s3:
+ bucket: "{{ item }}"
+ mode: delete
+ ignore_errors: yes
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/aws_s3/tasks/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/aws_s3/tasks/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/aws_s3/tasks/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/aws_s3/tasks/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -8,6 +8,14 @@
region: "{{ aws_region }}"
block:
+ - name: get ARN of calling user
+ aws_caller_info:
+ register: aws_caller_info
+
+ - name: register account id
+ set_fact:
+ aws_account: "{{ aws_caller_info.account }}"
+
- name: Create temporary directory
tempfile:
state: directory
@@ -30,6 +38,17 @@
- result is failed
- "result.msg != 'MODULE FAILURE'"
+ - name: test create bucket with an invalid name
+ aws_s3:
+ bucket: "{{ bucket_name }}-"
+ mode: create
+ register: result
+ ignore_errors: yes
+
+ - assert:
+ that:
+ - result is failed
+
- name: test create bucket
aws_s3:
bucket: "{{ bucket_name }}"
@@ -444,7 +463,7 @@
- name: test create a bucket with a dot in the name
aws_s3:
- bucket: "{{ bucket_name | hash('md5') + '.bucket' }}"
+ bucket: "{{ bucket_name_with_dot }}"
mode: create
register: result
@@ -454,7 +473,7 @@
- name: test delete a bucket with a dot in the name
aws_s3:
- bucket: "{{ bucket_name | hash('md5') + '.bucket' }}"
+ bucket: "{{ bucket_name_with_dot }}"
mode: delete
register: result
@@ -464,7 +483,7 @@
- name: test delete a nonexistent bucket
aws_s3:
- bucket: "{{ bucket_name | hash('md5') + '.bucket' }}"
+ bucket: "{{ bucket_name_with_dot }}"
mode: delete
register: result
@@ -526,6 +545,47 @@
- result is not changed
when: ansible_system == 'Linux' or ansible_distribution == 'MacOSX'
+ - name: make a bucket with the bucket-owner-full-control ACL
+ s3_bucket:
+ name: "{{ bucket_name_acl }}"
+ state: present
+ policy: "{{ lookup('template', 'policy.json.j2') }}"
+ register: bucket_with_policy
+
+ - assert:
+ that:
+ - bucket_with_policy is changed
+
+ - name: fail to upload the file to the bucket with an ACL
+ aws_s3:
+ bucket: "{{ bucket_name_acl }}"
+ mode: put
+ src: "{{ tmpdir.path }}/upload.txt"
+ object: file-with-permissions.txt
+ permission: private
+ ignore_nonexistent_bucket: True
+ register: upload_private
+ ignore_errors: True
+
+ # XXX Doesn't fail...
+ # - assert:
+ # that:
+ # - upload_private is failed
+
+ - name: upload the file to the bucket with an ACL
+ aws_s3:
+ bucket: "{{ bucket_name_acl }}"
+ mode: put
+ src: "{{ tmpdir.path }}/upload.txt"
+ object: file-with-permissions.txt
+ permission: bucket-owner-full-control
+ ignore_nonexistent_bucket: True
+ register: upload_owner
+
+ - assert:
+ that:
+ - upload_owner is changed
+
- name: create an object from static content
aws_s3:
bucket: "{{ bucket_name }}"
@@ -628,21 +688,253 @@
that:
- binary_files.results[0].stat.checksum == binary_files.results[1].stat.checksum
+ - include_tasks: copy_object.yml
+
+ # ============================================================
+ - name: 'Run tagging tests'
+ block:
+ # ============================================================
+ - name: create an object from static content
+ aws_s3:
+ bucket: "{{ bucket_name }}"
+ object: put-content.txt
+ mode: put
+ content: >-
+ test content
+ tags:
+ tag_one: '{{ resource_prefix }} One'
+ "Tag Two": 'two {{ resource_prefix }}'
+ register: result
+
+ - assert:
+ that:
+ - result is changed
+ - "'tags' in result"
+ - (result.tags | length) == 2
+ - result.tags["tag_one"] == '{{ resource_prefix }} One'
+ - result.tags["Tag Two"] == 'two {{ resource_prefix }}'
+
+ - name: ensure idempotency on static content
+ aws_s3:
+ bucket: "{{ bucket_name }}"
+ object: put-content.txt
+ mode: put
+ overwrite: different
+ content: >-
+ test content
+ tags:
+ tag_one: '{{ resource_prefix }} One'
+ "Tag Two": 'two {{ resource_prefix }}'
+ register: result
+
+ - assert:
+ that:
+ - result is not changed
+ - "'tags' in result"
+ - (result.tags | length) == 2
+ - result.tags["tag_one"] == '{{ resource_prefix }} One'
+ - result.tags["Tag Two"] == 'two {{ resource_prefix }}'
+
+ - name: Remove a tag from an S3 object
+ aws_s3:
+ bucket: "{{ bucket_name }}"
+ object: put-content.txt
+ mode: put
+ content: >-
+ test content
+ tags:
+ tag_one: '{{ resource_prefix }} One'
+ register: result
+
+ - assert:
+ that:
+ - result is changed
+ - "'tags' in result"
+ - (result.tags | length) == 1
+ - result.tags["tag_one"] == "{{ resource_prefix }} One"
+ - "'Tag Two' not in result.tags"
+
+ - name: Remove the tag from an S3 object (idempotency)
+ aws_s3:
+ bucket: "{{ bucket_name }}"
+ object: put-content.txt
+ mode: put
+ content: >-
+ test content
+ overwrite: different
+ tags:
+ tag_one: '{{ resource_prefix }} One'
+ register: result
+
+ - assert:
+ that:
+ - result is not changed
+ - "'tags' in result"
+ - (result.tags | length) == 1
+ - result.tags["tag_one"] == "{{ resource_prefix }} One"
+ - "'Tag Two' not in result.tags"
+
+ - name: Add a tag for an S3 object with purge_tags False
+ aws_s3:
+ bucket: "{{ bucket_name }}"
+ object: put-content.txt
+ mode: put
+ overwrite: different
+ content: >-
+ test content
+ tags:
+ tag_three: '{{ resource_prefix }} Three'
+ purge_tags: no
+ register: result
+
+ - assert:
+ that:
+ - result is changed
+ - "'tags' in result"
+ - (result.tags | length) == 2
+ - result.tags["tag_three"] == '{{ resource_prefix }} Three'
+ - result.tags["tag_one"] == '{{ resource_prefix }} One'
+
+ - name: Add a tag for an S3 object with purge_tags False (idempotency)
+ aws_s3:
+ bucket: "{{ bucket_name }}"
+ object: put-content.txt
+ mode: put
+ overwrite: different
+ content: >-
+ test content
+ tags:
+ tag_three: '{{ resource_prefix }} Three'
+ purge_tags: no
+ register: result
+
+ - assert:
+ that:
+ - result is not changed
+ - "'tags' in result"
+ - (result.tags | length) == 2
+ - result.tags["tag_three"] == '{{ resource_prefix }} Three'
+ - result.tags["tag_one"] == '{{ resource_prefix }} One'
+
+ - name: Update tags for an S3 object with purge_tags False
+ aws_s3:
+ bucket: "{{ bucket_name }}"
+ object: put-content.txt
+ mode: put
+ overwrite: different
+ content: >-
+ test content
+ tags:
+ "TagFour": '{{ resource_prefix }} tag_four'
+ purge_tags: no
+ register: result
+
+ - assert:
+ that:
+ - result is changed
+ - "'tags' in result"
+ - (result.tags | length) == 3
+ - result.tags["tag_one"] == '{{ resource_prefix }} One'
+ - result.tags["tag_three"] == '{{ resource_prefix }} Three'
+ - result.tags["TagFour"] == '{{ resource_prefix }} tag_four'
+
+ - name: Update tags for an S3 object with purge_tags False (idempotency)
+ aws_s3:
+ bucket: "{{ bucket_name }}"
+ object: put-content.txt
+ mode: put
+ overwrite: different
+ content: >-
+ test content
+ tags:
+ "TagFour": '{{ resource_prefix }} tag_four'
+ purge_tags: no
+ register: result
+
+ - assert:
+ that:
+ - result is not changed
+ - "'tags' in result"
+ - (result.tags | length) == 3
+ - result.tags["tag_one"] == '{{ resource_prefix }} One'
+ - result.tags["tag_three"] == '{{ resource_prefix }} Three'
+ - result.tags["TagFour"] == '{{ resource_prefix }} tag_four'
+
+ - name: Specify empty tags for an S3 object with purge_tags False
+ aws_s3:
+ bucket: "{{ bucket_name }}"
+ object: put-content.txt
+ mode: put
+ overwrite: different
+ content: >-
+ test content
+ tags: {}
+ purge_tags: no
+ register: result
+
+ - assert:
+ that:
+ - result is not changed
+ - "'tags' in result"
+ - (result.tags | length) == 3
+ - result.tags["tag_one"] == '{{ resource_prefix }} One'
+ - result.tags["tag_three"] == '{{ resource_prefix }} Three'
+ - result.tags["TagFour"] == '{{ resource_prefix }} tag_four'
+
+ - name: Do not specify any tag to ensure previous tags are not removed
+ aws_s3:
+ bucket: "{{ bucket_name }}"
+ object: put-content.txt
+ mode: put
+ overwrite: different
+ content: >-
+ test content
+ register: result
+
+ - assert:
+ that:
+ - result is not changed
+ - "'tags' in result"
+ - (result.tags | length) == 3
+ - result.tags["tag_one"] == '{{ resource_prefix }} One'
+ - result.tags["tag_three"] == '{{ resource_prefix }} Three'
+ - result.tags["TagFour"] == '{{ resource_prefix }} tag_four'
+
+ - name: Remove all tags
+ aws_s3:
+ bucket: "{{ bucket_name }}"
+ object: put-content.txt
+ mode: put
+ overwrite: different
+ content: >-
+ test content
+ tags: {}
+ register: result
+
+ - assert:
+ that:
+ - result is changed
+ - "'tags' in result"
+ - (result.tags | length) == 0
+
+ - name: Remove all tags (idempotency)
+ aws_s3:
+ bucket: "{{ bucket_name }}"
+ object: put-content.txt
+ mode: put
+ overwrite: different
+ content: >-
+ test content
+ tags: {}
+ register: result
+
+ - assert:
+ that:
+ - result is not changed
+ - "'tags' in result"
+ - (result.tags | length) == 0
+
always:
- - name: remove uploaded files
- aws_s3:
- bucket: "{{ bucket_name }}"
- mode: delobj
- object: "{{ item }}"
- loop:
- - hello.txt
- - delete.txt
- - delete_encrypt.txt
- - delete_encrypt_kms.txt
- - put-content.txt
- - put-template.txt
- - put-binary.txt
- ignore_errors: yes
- name: delete temporary files
file:
@@ -650,14 +942,8 @@
path: "{{ tmpdir.path }}"
ignore_errors: yes
- - name: delete the bucket
- aws_s3:
- bucket: "{{ bucket_name }}"
- mode: delete
- ignore_errors: yes
-
- - name: delete the dot bucket
- aws_s3:
- bucket: "{{ bucket_name + '.bucket' }}"
- mode: delete
- ignore_errors: yes
+ - include_tasks: delete_bucket.yml
+ with_items:
+ - "{{ bucket_name }}"
+ - "{{ bucket_name_with_dot }}"
+ - "{{ bucket_name_acl }}"
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/aws_s3/templates/policy.json.j2 ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/aws_s3/templates/policy.json.j2
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/aws_s3/templates/policy.json.j2 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/aws_s3/templates/policy.json.j2 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,21 @@
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Sid": "Only allow writes to my bucket with bucket owner full control",
+ "Effect": "Allow",
+ "Principal": { "AWS":"{{ aws_account }}" },
+ "Action": [
+ "s3:PutObject"
+ ],
+ "Resource": [
+ "arn:aws:s3:::{{ bucket_name_acl }}/*"
+ ],
+ "Condition": {
+ "StringEquals": {
+ "s3:x-amz-acl": "bucket-owner-full-control"
+ }
+ }
+ }
+ ]
+}
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/cloudformation/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/cloudformation/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/cloudformation/aliases 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/cloudformation/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -1,3 +1,2 @@
cloud/aws
-shippable/aws/group2
cloudformation_info
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/cloudformation/defaults/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/cloudformation/defaults/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/cloudformation/defaults/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/cloudformation/defaults/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -1,8 +1,8 @@
stack_name: "{{ resource_prefix }}"
+availability_zone: '{{ ec2_availability_zone_names[0] }}'
+
vpc_name: '{{ resource_prefix }}-vpc'
vpc_seed: '{{ resource_prefix }}'
vpc_cidr: '10.{{ 256 | random(seed=vpc_seed) }}.0.0/16'
subnet_cidr: '10.{{ 256 | random(seed=vpc_seed) }}.32.0/24'
-
-ec2_ami_name: 'amzn2-ami-hvm-2.*-x86_64-gp2'
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/cloudformation/meta/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/cloudformation/meta/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/cloudformation/meta/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/cloudformation/meta/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,2 @@
+dependencies:
+- role: setup_ec2_facts
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/cloudformation/tasks/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/cloudformation/tasks/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/cloudformation/tasks/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/cloudformation/tasks/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -1,5 +1,4 @@
---
-
- module_defaults:
group/aws:
aws_access_key: '{{ aws_access_key | default(omit) }}'
@@ -10,13 +9,6 @@
block:
# ==== Env setup ==========================================================
- - name: list available AZs
- aws_az_info:
- register: region_azs
-
- - name: pick an AZ for testing
- set_fact:
- availability_zone: "{{ region_azs.availability_zones[0].zone_name }}"
- name: Create a test VPC
ec2_vpc_net:
@@ -33,19 +25,6 @@
az: "{{ availability_zone }}"
register: testing_subnet
- - name: Find AMI to use
- ec2_ami_info:
- owners: 'amazon'
- filters:
- name: '{{ ec2_ami_name }}'
- register: ec2_amis
-
- - name: Set fact with latest AMI
- vars:
- latest_ami: '{{ ec2_amis.images | sort(attribute="creation_date") | last }}'
- set_fact:
- ec2_ami_image: '{{ latest_ami.image_id }}'
-
# ==== Cloudformation tests ===============================================
# 1. Basic stack creation (check mode, actual run and idempotency)
@@ -68,7 +47,7 @@
template_body: "{{ lookup('file','cf_template.json') }}"
template_parameters:
InstanceType: "t3.nano"
- ImageId: "{{ ec2_ami_image }}"
+ ImageId: "{{ ec2_ami_id }}"
SubnetId: "{{ testing_subnet.subnet.id }}"
tags:
Stack: "{{ stack_name }}"
@@ -88,7 +67,7 @@
template_body: "{{ lookup('file','cf_template.json') }}"
template_parameters:
InstanceType: "t3.nano"
- ImageId: "{{ ec2_ami_image }}"
+ ImageId: "{{ ec2_ami_id }}"
SubnetId: "{{ testing_subnet.subnet.id }}"
tags:
Stack: "{{ stack_name }}"
@@ -110,7 +89,7 @@
template_body: "{{ lookup('file','cf_template.json') }}"
template_parameters:
InstanceType: "t3.nano"
- ImageId: "{{ ec2_ami_image }}"
+ ImageId: "{{ ec2_ami_id }}"
SubnetId: "{{ testing_subnet.subnet.id }}"
tags:
Stack: "{{ stack_name }}"
@@ -129,7 +108,7 @@
template_body: "{{ lookup('file','cf_template.json') }}"
template_parameters:
InstanceType: "t3.nano"
- ImageId: "{{ ec2_ami_image }}"
+ ImageId: "{{ ec2_ami_id }}"
SubnetId: "{{ testing_subnet.subnet.id }}"
tags:
Stack: "{{ stack_name }}"
@@ -221,7 +200,7 @@
template_body: "{{ lookup('file','cf_template.json') }}"
template_parameters:
InstanceType: "t3.micro"
- ImageId: "{{ ec2_ami_image }}"
+ ImageId: "{{ ec2_ami_id }}"
SubnetId: "{{ testing_subnet.subnet.id }}"
tags:
Stack: "{{ stack_name }}"
@@ -266,7 +245,7 @@
template_body: "{{ lookup('file','cf_template.json') }}"
template_parameters:
InstanceType: "t3.nano"
- ImageId: "{{ ec2_ami_image }}"
+ ImageId: "{{ ec2_ami_id }}"
SubnetId: "{{ testing_subnet.subnet.id }}"
tags:
Stack: "{{ stack_name }}"
@@ -288,7 +267,7 @@
template_body: "{{ lookup('file','cf_template.json') }}"
template_parameters:
InstanceType: "t3.nano"
- ImageId: "{{ ec2_ami_image }}"
+ ImageId: "{{ ec2_ami_id }}"
SubnetId: "{{ testing_subnet.subnet.id }}"
tags:
Stack: "{{ stack_name }}"
@@ -329,7 +308,7 @@
template_body: "{{ lookup('file','cf_template.json') }}"
template_parameters:
InstanceType: "t3.nano"
- ImageId: "{{ ec2_ami_image }}"
+ ImageId: "{{ ec2_ami_id }}"
SubnetId: "{{ testing_subnet.subnet.id }}"
tags:
Stack: "{{ stack_name }}"
@@ -372,7 +351,7 @@
template_body: "{{ lookup('file','cf_template.json') }}"
template_parameters:
InstanceType: "t3.nano"
- ImageId: "{{ ec2_ami_image }}"
+ ImageId: "{{ ec2_ami_id }}"
SubnetId: "{{ testing_subnet.subnet.id }}"
tags:
Stack: "{{ stack_name }}"
@@ -391,7 +370,7 @@
template_body: "{{ lookup('file','cf_template.json') }}"
template_parameters:
InstanceType: "t3.nano"
- ImageId: "{{ ec2_ami_image }}"
+ ImageId: "{{ ec2_ami_id }}"
SubnetId: "{{ testing_subnet.subnet.id }}"
tags:
Stack: "{{ stack_name }}"
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2/aliases 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -1,2 +1 @@
cloud/aws
-shippable/aws/group4
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2/defaults/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2/defaults/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2/defaults/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2/defaults/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -1,4 +1,5 @@
---
+availability_zone: '{{ ec2_availability_zone_names[0] }}'
+
vpc_cidr: '10.{{ 256 | random(seed=resource_prefix) }}.0.0/16'
subnet_cidr: '10.{{ 256 | random(seed=resource_prefix) }}.1.0/24'
-ec2_ami_name: 'amzn2-ami-hvm-2.*-x86_64-gp2'
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2/meta/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2/meta/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2/meta/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2/meta/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -1,3 +1,4 @@
dependencies:
- prepare_tests
- setup_ec2
+ - setup_ec2_facts
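The `setup_ec2_facts` dependency added across these targets centralizes the AZ and AMI discovery that each target previously performed inline with `aws_az_info` and `ec2_ami_info`. A hypothetical sketch of the facts such a role would publish, inferred purely from how `ec2_availability_zone_names`, `ec2_ami_id`, and `ec2_ami_root_disk` are consumed in the updated tasks (the role's real internals are not part of this diff):

```yaml
# Hypothetical shape of the facts set by the shared setup_ec2_facts role.
- name: Publish common EC2 facts for integration tests
  set_fact:
    ec2_availability_zone_names: "{{ region_azs.availability_zones | map(attribute='zone_name') | list }}"
    ec2_ami_id: "{{ (ec2_amis.images | sort(attribute='creation_date') | last).image_id }}"
    ec2_ami_root_disk: /dev/xvda
```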
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2/tasks/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2/tasks/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2/tasks/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2/tasks/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -11,13 +11,6 @@
block:
# SETUP: vpc, ec2 key pair, subnet, security group, ec2 instance
- - name: list available AZs
- aws_az_info:
- register: region_azs
-
- - name: pick an AZ for testing
- set_fact:
- availability_zone: "{{ region_azs.availability_zones[0].zone_name }}"
- name: create a VPC to work in
ec2_vpc_net:
@@ -53,19 +46,6 @@
vpc_id: '{{ setup_vpc.vpc.id }}'
register: setup_sg
- - name: Find AMI to use
- ec2_ami_info:
- owners: 'amazon'
- filters:
- name: '{{ ec2_ami_name }}'
- register: ec2_amis
-
- - name: Set fact with latest AMI
- vars:
- latest_ami: '{{ ec2_amis.images | sort(attribute="creation_date") | last }}'
- set_fact:
- ec2_ami_image: '{{ latest_ami.image_id }}'
-
# ============================================================
- name: test first instance is started
@@ -73,7 +53,7 @@
instance_type: t2.micro
key_name: '{{ setup_key.key.name }}'
state: present
- image: '{{ ec2_ami_image }}'
+ image: '{{ ec2_ami_id }}'
wait: yes
instance_tags:
ResourcePrefix: '{{ resource_prefix }}-integration_tests'
@@ -86,7 +66,7 @@
instance_type: t2.micro
key_name: '{{ setup_key.key.name }}'
state: present
- image: '{{ ec2_ami_image }}'
+ image: '{{ ec2_ami_id }}'
wait: yes
instance_tags:
ResourcePrefix: '{{ resource_prefix }}-another_tag'
@@ -189,7 +169,7 @@
- name: remove setup subnet
ec2_vpc_subnet:
- az: '{{ ec2_region }}a'
+ az: '{{ availability_zone }}'
tags: '{{resource_prefix}}_setup'
vpc_id: '{{ setup_vpc.vpc.id }}'
cidr: '{{ subnet_cidr }}'
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_ami/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_ami/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_ami/aliases 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_ami/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -1,3 +1,5 @@
+# duration: 15
+slow
+
cloud/aws
-shippable/aws/group3
ec2_ami_info
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_ami/defaults/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_ami/defaults/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_ami/defaults/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_ami/defaults/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -1,11 +1,11 @@
---
+availability_zone: '{{ ec2_availability_zone_names[0] }}'
+
# defaults file for test_ec2_ami
ec2_ami_name: '{{resource_prefix}}'
ec2_ami_description: 'Created by ansible integration tests'
-# image for Amazon Linux AMI 2017.03.1 (HVM), SSD Volume Type
-ec2_ami_image:
- us-east-1: ami-4fffc834
- us-east-2: ami-ea87a78f
+
+ec2_ami_image: '{{ ec2_ami_id }}'
vpc_cidr: '10.{{ 256 | random(seed=resource_prefix) }}.0.0/16'
subnet_cidr: '10.{{ 256 | random(seed=resource_prefix) }}.1.0/24'
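The hard-coded per-region AMI map removed above gives way to `ec2_ami_id` from the shared facts role. The selection logic the old tasks implemented inline (`ec2_amis.images | sort(attribute="creation_date") | last`) amounts to the following sketch; the image records are made-up sample data shaped like `ec2_ami_info` output:

```python
def latest_image_id(images):
    """Return the image_id of the most recently created AMI.

    Equivalent to the Jinja expression the removed tasks used:
    ec2_amis.images | sort(attribute="creation_date") | last
    ISO 8601 timestamps sort correctly as plain strings.
    """
    return max(images, key=lambda img: img["creation_date"])["image_id"]

# Made-up sample data for illustration.
images = [
    {"image_id": "ami-aaa", "creation_date": "2020-01-01T00:00:00.000Z"},
    {"image_id": "ami-bbb", "creation_date": "2021-06-15T00:00:00.000Z"},
    {"image_id": "ami-ccc", "creation_date": "2019-12-31T00:00:00.000Z"},
]
```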
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_ami/meta/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_ami/meta/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_ami/meta/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_ami/meta/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -1,3 +1,4 @@
dependencies:
- prepare_tests
- setup_ec2
+ - setup_ec2_facts
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_ami/tasks/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_ami/tasks/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_ami/tasks/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_ami/tasks/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -28,7 +28,7 @@
- name: create a subnet to use for creating an ec2 instance
ec2_vpc_subnet:
- az: '{{ ec2_region }}a'
+ az: '{{ availability_zone }}'
tags: '{{ ec2_ami_name }}_setup'
vpc_id: '{{ setup_vpc.vpc.id }}'
cidr: '{{ subnet_cidr }}'
@@ -46,22 +46,29 @@
register: setup_sg
- name: provision ec2 instance to create an image
- ec2:
+ ec2_instance:
+ state: running
key_name: '{{ setup_key.key.name }}'
instance_type: t2.micro
- state: present
- image: '{{ ec2_region_images[ec2_region] }}'
- wait: yes
- instance_tags:
+ image_id: '{{ ec2_ami_id }}'
+ tags:
'{{ec2_ami_name}}_instance_setup': 'integration_tests'
- group_id: '{{ setup_sg.group_id }}'
+ security_group: '{{ setup_sg.group_id }}'
vpc_subnet_id: '{{ setup_subnet.subnet.id }}'
+ volumes:
+ - device_name: /dev/sdc
+ virtual_name: ephemeral1
+ wait: yes
register: setup_instance
+ - name: Store EC2 Instance ID
+ set_fact:
+ ec2_instance_id: '{{ setup_instance.instances[0].instance_id }}'
+
- name: take a snapshot of the instance to create an image
ec2_snapshot:
- instance_id: '{{ setup_instance.instance_ids[0] }}'
- device_name: /dev/xvda
+ instance_id: '{{ ec2_instance_id }}'
+ device_name: '{{ ec2_ami_root_disk }}'
state: present
register: setup_snapshot
@@ -69,13 +76,13 @@
- name: test clean failure if not providing image_id or name with state=present
ec2_ami:
- instance_id: '{{ setup_instance.instance_ids[0] }}'
+ instance_id: '{{ ec2_instance_id }}'
state: present
description: '{{ ec2_ami_description }}'
tags:
Name: '{{ ec2_ami_name }}_ami'
wait: yes
- root_device_name: /dev/xvda
+ root_device_name: '{{ ec2_ami_root_disk }}'
register: result
ignore_errors: yes
@@ -87,16 +94,34 @@
# ============================================================
+ - name: create an image from the instance (check mode)
+ ec2_ami:
+ instance_id: '{{ ec2_instance_id }}'
+ state: present
+ name: '{{ ec2_ami_name }}_ami'
+ description: '{{ ec2_ami_description }}'
+ tags:
+ Name: '{{ ec2_ami_name }}_ami'
+ wait: yes
+ root_device_name: '{{ ec2_ami_root_disk }}'
+ check_mode: true
+ register: check_mode_result
+
+ - name: assert that check_mode result is changed
+ assert:
+ that:
+ - check_mode_result is changed
+
- name: create an image from the instance
ec2_ami:
- instance_id: '{{ setup_instance.instance_ids[0] }}'
+ instance_id: '{{ ec2_instance_id }}'
state: present
name: '{{ ec2_ami_name }}_ami'
description: '{{ ec2_ami_description }}'
tags:
Name: '{{ ec2_ami_name }}_ami'
wait: yes
- root_device_name: /dev/xvda
+ root_device_name: '{{ ec2_ami_root_disk }}'
register: result
- name: set image id fact for deletion later
@@ -110,6 +135,93 @@
- "result.image_id.startswith('ami-')"
- "'Name' in result.tags and result.tags.Name == ec2_ami_name + '_ami'"
+ - name: get related snapshot info and ensure the tags have been propagated
+ ec2_snapshot_info:
+ snapshot_ids:
+ - "{{ result.block_device_mapping[ec2_ami_root_disk].snapshot_id }}"
+ register: snapshot_result
+
+ - name: ensure the tags have been propagated to the snapshot
+ assert:
+ that:
+ - "'tags' in snapshot_result.snapshots[0]"
+ - "'Name' in snapshot_result.snapshots[0].tags and snapshot_result.snapshots[0].tags.Name == ec2_ami_name + '_ami'"
+
+ # ============================================================
+
+ - name: create an image from the instance with attached devices with no_device true (check mode)
+ ec2_ami:
+ name: '{{ ec2_ami_name }}_no_device_true_ami'
+ instance_id: '{{ ec2_instance_id }}'
+ device_mapping:
+ - device_name: /dev/sda1
+ volume_size: 10
+ delete_on_termination: true
+ volume_type: gp2
+ - device_name: /dev/sdf
+ no_device: yes
+ state: present
+ wait: yes
+ root_device_name: '{{ ec2_ami_root_disk }}'
+ check_mode: true
+ register: check_mode_result
+
+ - name: assert that check_mode result is changed
+ assert:
+ that:
+ - check_mode_result is changed
+
+ - name: create an image from the instance with attached devices with no_device true
+ ec2_ami:
+ name: '{{ ec2_ami_name }}_no_device_true_ami'
+ instance_id: '{{ ec2_instance_id }}'
+ device_mapping:
+ - device_name: /dev/sda1
+ volume_size: 10
+ delete_on_termination: true
+ volume_type: gp2
+ - device_name: /dev/sdf
+ no_device: yes
+ state: present
+ wait: yes
+ root_device_name: '{{ ec2_ami_root_disk }}'
+ register: result_no_device_true
+
+ - name: set image id fact for deletion later
+ set_fact:
+ ec2_ami_no_device_true_image_id: "{{ result_no_device_true.image_id }}"
+
+ - name: assert that image with no_device option yes has been created
+ assert:
+ that:
+ - "result_no_device_true.changed"
+ - "'/dev/sdf' not in result_no_device_true.block_device_mapping"
+
+ - name: create an image from the instance with attached devices with no_device false
+ ec2_ami:
+ name: '{{ ec2_ami_name }}_no_device_false_ami'
+ instance_id: '{{ ec2_instance_id }}'
+ device_mapping:
+ - device_name: /dev/sda1
+ volume_size: 10
+ delete_on_termination: true
+ volume_type: gp2
+ no_device: no
+ state: present
+ wait: yes
+ root_device_name: '{{ ec2_ami_root_disk }}'
+ register: result_no_device_false
+
+ - name: set image id fact for deletion later
+ set_fact:
+ ec2_ami_no_device_false_image_id: "{{ result_no_device_false.image_id }}"
+
+ - name: assert that image with no_device option no has been created
+ assert:
+ that:
+ - "result_no_device_false.changed"
+ - "'/dev/sda1' in result_no_device_false.block_device_mapping"
+
# ============================================================
- name: gather facts about the image created
@@ -185,9 +297,29 @@
      # ec2_ami_info filtering tests end
# ============================================================
+ - name: delete the image (check mode)
+ ec2_ami:
+ instance_id: '{{ ec2_instance_id }}'
+ state: absent
+ delete_snapshot: yes
+ name: '{{ ec2_ami_name }}_ami'
+ description: '{{ ec2_ami_description }}'
+ image_id: '{{ result.image_id }}'
+ tags:
+ Name: '{{ ec2_ami_name }}_ami'
+ wait: yes
+ ignore_errors: true
+ check_mode: true
+ register: check_mode_result
+
+ - name: assert that check_mode result is changed
+ assert:
+ that:
+ - check_mode_result is changed
+
- name: delete the image
ec2_ami:
- instance_id: '{{ setup_instance.instance_ids[0] }}'
+ instance_id: '{{ ec2_instance_id }}'
state: absent
delete_snapshot: yes
name: '{{ ec2_ami_name }}_ami'
@@ -231,9 +363,9 @@
user_ids: []
tags:
Name: '{{ ec2_ami_name }}_ami'
- root_device_name: /dev/xvda
+ root_device_name: '{{ ec2_ami_root_disk }}'
device_mapping:
- - device_name: /dev/xvda
+ - device_name: '{{ ec2_ami_root_disk }}'
volume_type: gp2
size: 8
delete_on_termination: true
@@ -244,7 +376,7 @@
- name: set image id fact for deletion later
set_fact:
ec2_ami_image_id: "{{ result.image_id }}"
- ec2_ami_snapshot: "{{ result.block_device_mapping['/dev/xvda'].snapshot_id }}"
+ ec2_ami_snapshot: "{{ result.block_device_mapping[ec2_ami_root_disk].snapshot_id }}"
- name: assert a new ami has been created
assert:
@@ -254,6 +386,31 @@
# ============================================================
+ - name: test default launch permissions idempotence (check mode)
+ ec2_ami:
+ description: '{{ ec2_ami_description }}'
+ state: present
+ name: '{{ ec2_ami_name }}_ami'
+ tags:
+ Name: '{{ ec2_ami_name }}_ami'
+ root_device_name: '{{ ec2_ami_root_disk }}'
+ image_id: '{{ result.image_id }}'
+ launch_permissions:
+ user_ids: []
+ device_mapping:
+ - device_name: '{{ ec2_ami_root_disk }}'
+ volume_type: gp2
+ size: 8
+ delete_on_termination: true
+ snapshot_id: '{{ setup_snapshot.snapshot_id }}'
+ check_mode: true
+ register: check_mode_result
+
+ - name: assert that check_mode result is not changed
+ assert:
+ that:
+ - check_mode_result is not changed
+
- name: test default launch permissions idempotence
ec2_ami:
description: '{{ ec2_ami_description }}'
@@ -261,12 +418,12 @@
name: '{{ ec2_ami_name }}_ami'
tags:
Name: '{{ ec2_ami_name }}_ami'
- root_device_name: /dev/xvda
+ root_device_name: '{{ ec2_ami_root_disk }}'
image_id: '{{ result.image_id }}'
launch_permissions:
user_ids: []
device_mapping:
- - device_name: /dev/xvda
+ - device_name: '{{ ec2_ami_root_disk }}'
volume_type: gp2
size: 8
delete_on_termination: true
@@ -316,6 +473,23 @@
# ============================================================
+ - name: update AMI launch permissions (check mode)
+ ec2_ami:
+ state: present
+ image_id: '{{ result.image_id }}'
+ description: '{{ ec2_ami_description }}'
+ tags:
+ Name: '{{ ec2_ami_name }}_ami'
+ launch_permissions:
+ group_names: ['all']
+ check_mode: true
+ register: check_mode_result
+
+ - name: assert that check_mode result is changed
+ assert:
+ that:
+ - check_mode_result is changed
+
- name: update AMI launch permissions
ec2_ami:
state: present
@@ -334,6 +508,24 @@
# ============================================================
+ - name: modify the AMI description (check mode)
+ ec2_ami:
+ state: present
+ image_id: '{{ result.image_id }}'
+ name: '{{ ec2_ami_name }}_ami'
+ description: '{{ ec2_ami_description }}CHANGED'
+ tags:
+ Name: '{{ ec2_ami_name }}_ami'
+ launch_permissions:
+ group_names: ['all']
+ check_mode: true
+ register: check_mode_result
+
+ - name: assert that check_mode result is changed
+ assert:
+ that:
+ - check_mode_result is changed
+
- name: modify the AMI description
ec2_ami:
state: present
@@ -373,7 +565,7 @@
- name: delete ami without deleting the snapshot (default is not to delete)
ec2_ami:
- instance_id: '{{ setup_instance.instance_ids[0] }}'
+ instance_id: '{{ ec2_instance_id }}'
state: absent
name: '{{ ec2_ami_name }}_ami'
image_id: '{{ ec2_ami_image_id }}'
@@ -400,9 +592,26 @@
that:
- "snapshot_result.snapshots[0].snapshot_id == ec2_ami_snapshot"
+ - name: delete ami for a second time (check mode)
+ ec2_ami:
+ instance_id: '{{ ec2_instance_id }}'
+ state: absent
+ name: '{{ ec2_ami_name }}_ami'
+ image_id: '{{ ec2_ami_image_id }}'
+ tags:
+ Name: '{{ ec2_ami_name }}_ami'
+ wait: yes
+ check_mode: true
+ register: check_mode_result
+
+ - name: assert that check_mode result is not changed
+ assert:
+ that:
+ - check_mode_result is not changed
+
- name: delete ami for a second time
ec2_ami:
- instance_id: '{{ setup_instance.instance_ids[0] }}'
+ instance_id: '{{ ec2_instance_id }}'
state: absent
name: '{{ ec2_ami_name }}_ami'
image_id: '{{ ec2_ami_image_id }}'
@@ -437,6 +646,20 @@
wait: yes
ignore_errors: yes
+ - name: delete ami
+ ec2_ami:
+ state: absent
+ image_id: "{{ ec2_ami_no_device_true_image_id }}"
+ wait: yes
+ ignore_errors: yes
+
+ - name: delete ami
+ ec2_ami:
+ state: absent
+ image_id: "{{ ec2_ami_no_device_false_image_id }}"
+ wait: yes
+ ignore_errors: yes
+
- name: remove setup snapshot of ec2 instance
ec2_snapshot:
state: absent
@@ -444,15 +667,11 @@
ignore_errors: yes
- name: remove setup ec2 instance
- ec2:
- instance_type: t2.micro
- instance_ids: '{{ setup_instance.instance_ids }}'
+ ec2_instance:
state: absent
- wait: yes
- instance_tags:
- '{{ec2_ami_name}}_instance_setup': 'integration_tests'
- group_id: '{{ setup_sg.group_id }}'
- vpc_subnet_id: '{{ setup_subnet.subnet.id }}'
+ instance_ids:
+ - '{{ ec2_instance_id }}'
+ wait: true
ignore_errors: yes
- name: remove setup keypair
@@ -471,7 +690,7 @@
- name: remove setup subnet
ec2_vpc_subnet:
- az: '{{ ec2_region }}a'
+ az: '{{ availability_zone }}'
tags: '{{ec2_ami_name}}_setup'
vpc_id: '{{ setup_vpc.vpc.id }}'
cidr: '{{ subnet_cidr }}'
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_elb_lb/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_elb_lb/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_elb_lb/aliases 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_elb_lb/aliases 1970-01-01 00:00:00.000000000 +0000
@@ -1,2 +0,0 @@
-cloud/aws
-shippable/aws/group2
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_elb_lb/defaults/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_elb_lb/defaults/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_elb_lb/defaults/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_elb_lb/defaults/main.yml 1970-01-01 00:00:00.000000000 +0000
@@ -1,3 +0,0 @@
----
-# defaults file for test_ec2_eip
-tag_prefix: '{{resource_prefix}}'
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_elb_lb/meta/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_elb_lb/meta/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_elb_lb/meta/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_elb_lb/meta/main.yml 1970-01-01 00:00:00.000000000 +0000
@@ -1,3 +0,0 @@
-dependencies:
- - prepare_tests
- - setup_ec2
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_elb_lb/tasks/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_elb_lb/tasks/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_elb_lb/tasks/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_elb_lb/tasks/main.yml 1970-01-01 00:00:00.000000000 +0000
@@ -1,334 +0,0 @@
----
-# __Test Info__
-# Create a self signed cert and upload it to AWS
-# http://www.akadia.com/services/ssh_test_certificate.html
-# http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/ssl-server-cert.html
-
-# __Test Outline__
-#
-# __ec2_elb_lb__
-# create test elb with listeners and certificate
-# change AZ's
-# change listeners
-# remove listeners
-# remove elb
-
-# __ec2-common__
-# test environment variable EC2_REGION
-# test with no parameters
-# test with only instance_id
-# test invalid region parameter
-# test valid region parameter
-# test invalid ec2_url parameter
-# test valid ec2_url parameter
-# test credentials from environment
-# test credential parameters
-
-- module_defaults:
- group/aws:
- region: "{{ aws_region }}"
- aws_access_key: "{{ aws_access_key }}"
- aws_secret_key: "{{ aws_secret_key }}"
- security_token: "{{ security_token | default(omit) }}"
- block:
-
- # ============================================================
- # create test elb with listeners, certificate, and health check
-
- - name: Create ELB
- ec2_elb_lb:
- name: "{{ tag_prefix }}"
- state: present
- zones:
- - "{{ aws_region }}a"
- - "{{ aws_region }}b"
- listeners:
- - protocol: http
- load_balancer_port: 80
- instance_port: 80
- - protocol: http
- load_balancer_port: 8080
- instance_port: 8080
- health_check:
- ping_protocol: http
- ping_port: 80
- ping_path: "/index.html"
- response_timeout: 5
- interval: 30
- unhealthy_threshold: 2
- healthy_threshold: 10
- register: info
-
- - assert:
- that:
- - 'info.changed'
- - 'info.elb.status == "created"'
- - '"{{ aws_region }}a" in info.elb.zones'
- - '"{{ aws_region }}b" in info.elb.zones'
- - 'info.elb.health_check.healthy_threshold == 10'
- - 'info.elb.health_check.interval == 30'
- - 'info.elb.health_check.target == "HTTP:80/index.html"'
- - 'info.elb.health_check.timeout == 5'
- - 'info.elb.health_check.unhealthy_threshold == 2'
- - '[80, 80, "HTTP", "HTTP"] in info.elb.listeners'
- - '[8080, 8080, "HTTP", "HTTP"] in info.elb.listeners'
-
- # ============================================================
-
- # check ports, would be cool, but we are at the mercy of AWS
- # to start things in a timely manner
-
- #- name: check to make sure 80 is listening
- # wait_for: host={{ info.elb.dns_name }} port=80 timeout=600
- # register: result
-
- #- name: assert can connect to port#
- # assert: 'result.state == "started"'
-
- #- name: check to make sure 443 is listening
- # wait_for: host={{ info.elb.dns_name }} port=443 timeout=600
- # register: result
-
- #- name: assert can connect to port#
- # assert: 'result.state == "started"'
-
- # ============================================================
-
- # Change AZ's
-
- - name: Change AZ's
- ec2_elb_lb:
- name: "{{ tag_prefix }}"
- state: present
- zones:
- - "{{ aws_region }}c"
- listeners:
- - protocol: http
- load_balancer_port: 80
- instance_port: 80
- purge_zones: yes
- health_check:
- ping_protocol: http
- ping_port: 80
- ping_path: "/index.html"
- response_timeout: 5
- interval: 30
- unhealthy_threshold: 2
- healthy_threshold: 10
- register: info
-
-
-
- - assert:
- that:
- - 'info.elb.status == "ok"'
- - 'info.changed'
- - 'info.elb.zones[0] == "{{ aws_region }}c"'
-
- # ============================================================
-
- # Update AZ's
-
- - name: Update AZ's
- ec2_elb_lb:
- name: "{{ tag_prefix }}"
- state: present
- zones:
- - "{{ aws_region }}a"
- - "{{ aws_region }}b"
- - "{{ aws_region }}c"
- listeners:
- - protocol: http
- load_balancer_port: 80
- instance_port: 80
- purge_zones: yes
- register: info
-
- - assert:
- that:
- - 'info.changed'
- - 'info.elb.status == "ok"'
- - '"{{ aws_region }}a" in info.elb.zones'
- - '"{{ aws_region }}b" in info.elb.zones'
- - '"{{ aws_region }}c" in info.elb.zones'
-
-
- # ============================================================
-
- # Purge Listeners
-
- - name: Purge Listeners
- ec2_elb_lb:
- name: "{{ tag_prefix }}"
- state: present
- zones:
- - "{{ aws_region }}a"
- - "{{ aws_region }}b"
- - "{{ aws_region }}c"
- listeners:
- - protocol: http
- load_balancer_port: 80
- instance_port: 81
- purge_listeners: yes
- register: info
-
- - assert:
- that:
- - 'info.elb.status == "ok"'
- - 'info.changed'
- - '[80, 81, "HTTP", "HTTP"] in info.elb.listeners'
- - 'info.elb.listeners|length == 1'
-
-
-
- # ============================================================
-
- # add Listeners
-
- - name: Add Listeners
- ec2_elb_lb:
- name: "{{ tag_prefix }}"
- state: present
- zones:
- - "{{ aws_region }}a"
- - "{{ aws_region }}b"
- - "{{ aws_region }}c"
- listeners:
- - protocol: http
- load_balancer_port: 8081
- instance_port: 8081
- purge_listeners: no
- register: info
-
- - assert:
- that:
- - 'info.elb.status == "ok"'
- - 'info.changed'
- - '[80, 81, "HTTP", "HTTP"] in info.elb.listeners'
- - '[8081, 8081, "HTTP", "HTTP"] in info.elb.listeners'
- - 'info.elb.listeners|length == 2'
-
-
- # ============================================================
-
- - name: test with no name
- ec2_elb_lb:
- state: present
- register: result
- ignore_errors: true
-
- - name: assert failure when called with no parameters
- assert:
- that:
- - 'result.failed'
- - 'result.msg == "missing required arguments: name"'
-
-
- # ============================================================
- - name: test with only name (state missing)
- ec2_elb_lb:
- name: "{{ tag_prefix }}"
- register: result
- ignore_errors: true
-
- - name: assert failure when called with only name
- assert:
- that:
- - 'result.failed'
- - 'result.msg == "missing required arguments: state"'
-
-
- # ============================================================
- - name: test invalid region parameter
- ec2_elb_lb:
- name: "{{ tag_prefix }}"
- state: present
- region: 'asdf querty 1234'
- zones:
- - "{{ aws_region }}a"
- - "{{ aws_region }}b"
- - "{{ aws_region }}c"
- listeners:
- - protocol: http
- load_balancer_port: 80
- instance_port: 80
- register: result
- ignore_errors: true
-
- - name: assert invalid region parameter
- assert:
- that:
- - 'result.failed'
- - '"Region asdf querty 1234 does not seem to be available" in result.msg'
-
-
- # ============================================================
- - name: test no authentication parameters
- ec2_elb_lb:
- name: "{{ tag_prefix }}"
- state: present
- aws_access_key: '{{ omit }}'
- aws_secret_key: '{{ omit }}'
- security_token: '{{ omit }}'
- zones:
- - "{{ aws_region }}a"
- - "{{ aws_region }}b"
- - "{{ aws_region }}c"
- listeners:
- - protocol: http
- load_balancer_port: 80
- instance_port: 80
- register: result
- ignore_errors: true
-
- - name: assert valid region parameter
- assert:
- that:
- - 'result.failed'
- - '"No handler was ready to authenticate" in result.msg'
-
-
- # ============================================================
- - name: test credentials from environment
- ec2_elb_lb:
- name: "{{ tag_prefix }}"
- state: present
- aws_access_key: "{{ omit }}"
- aws_secret_key: "{{ omit }}"
- security_token: "{{ omit }}"
- zones:
- - "{{ aws_region }}a"
- - "{{ aws_region }}b"
- - "{{ aws_region }}c"
- listeners:
- - protocol: http
- load_balancer_port: 80
- instance_port: 81
- environment:
- EC2_ACCESS_KEY: bogus_access_key
- EC2_SECRET_KEY: bogus_secret_key
- register: result
- ignore_errors: true
-
- - name: assert credentials from environment
- assert:
- that:
- - 'result.failed'
- - '"InvalidClientTokenId" in result.exception'
-
-
- always:
-
- # ============================================================
- - name: remove the test load balancer completely
- ec2_elb_lb:
- name: "{{ tag_prefix }}"
- state: absent
- register: result
-
- - name: assert the load balancer was removed
- assert:
- that:
- - 'result.changed'
- - 'result.elb.name == "{{tag_prefix}}"'
- - 'result.elb.status == "deleted"'
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_elb_lb/vars/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_elb_lb/vars/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_elb_lb/vars/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_elb_lb/vars/main.yml 1970-01-01 00:00:00.000000000 +0000
@@ -1,2 +0,0 @@
----
-# vars file for test_ec2_elb_lb
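The `ec2_elb_lb` target is deleted here because the module was renamed to `elb_classic_lb` (per the changelog entry in this release). Playbooks migrate by swapping the module name; a minimal hedged sketch, with options mirrored from the deleted test above and a made-up name (not an exhaustive option list):

```yaml
# Minimal sketch of the renamed module invocation.
- name: Create a classic load balancer
  amazon.aws.elb_classic_lb:
    name: "my-test-elb"   # hypothetical name
    state: present
    zones:
      - "us-east-1a"
    listeners:
      - protocol: http
        load_balancer_port: 80
        instance_port: 80
```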
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/aliases 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -1,3 +1,3 @@
cloud/aws
-shippable/aws/group1
+
ec2_eni_info
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/defaults/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/defaults/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/defaults/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/defaults/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -1,4 +1,6 @@
---
+availability_zone: '{{ ec2_availability_zone_names[0] }}'
+
vpc_seed_a: '{{ resource_prefix }}'
vpc_seed_b: '{{ resource_prefix }}-ec2_eni'
vpc_prefix: '10.{{ 256 | random(seed=vpc_seed_a) }}.{{ 256 | random(seed=vpc_seed_b ) }}'
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/meta/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/meta/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/meta/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/meta/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,2 @@
+dependencies:
+- role: setup_ec2_facts
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/main.yaml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/main.yaml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/main.yaml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/main.yaml 2021-11-12 18:13:53.000000000 +0000
@@ -7,18 +7,10 @@
region: "{{ aws_region }}"
collections:
+ - ansible.netcommon
- community.aws
block:
- - name: Get available AZs
- aws_az_info:
- filters:
- region-name: "{{ aws_region }}"
- register: az_info
-
- - name: Pick an AZ
- set_fact:
- availability_zone: "{{ az_info['availability_zones'][0]['zone_name'] }}"
# ============================================================
- name: create a VPC
@@ -51,27 +43,18 @@
vpc_id: "{{ vpc_result.vpc.id }}"
register: vpc_sg_result
- - name: Get a list of images
- ec2_ami_info:
- filters:
- owner-alias: amazon
- name: "amzn2-ami-minimal-hvm-*"
- description: "Amazon Linux 2 AMI *"
- register: images_info
-
- name: Set facts to simplify use of extra resources
set_fact:
vpc_id: "{{ vpc_result.vpc.id }}"
vpc_subnet_id: "{{ vpc_subnet_result.subnet.id }}"
vpc_sg_id: "{{ vpc_sg_result.group_id }}"
- image_id: "{{ images_info.images | sort(attribute='creation_date') | reverse | first | json_query('image_id') }}"
# ============================================================
- name: Create 2 instances to test attaching and detaching network interfaces
ec2_instance:
name: "{{ resource_prefix }}-eni-instance-{{ item }}"
- image_id: "{{ image_id }}"
+ image_id: "{{ ec2_ami_id }}"
vpc_subnet_id: "{{ vpc_subnet_id }}"
instance_type: t2.micro
wait: false
@@ -113,6 +96,11 @@
include_tasks: ./test_deletion.yaml
always:
+ # ============================================================
+    # Some test problems are caused by "eventual consistency".
+    # Describe the ENIs in the account so we can see what's happening.
+ - name: Describe ENIs in account
+ ec2_eni_info: {}
# ============================================================
- name: remove the network interfaces
@@ -125,6 +113,7 @@
loop:
- "{{ eni_id_1 | default(omit) }}"
- "{{ eni_id_2 | default(omit) }}"
+ - "{{ eni_id_3 | default(omit) }}"
- name: terminate the instances
ec2_instance:
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_attachment.yaml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_attachment.yaml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_attachment.yaml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_attachment.yaml 2021-11-12 18:13:53.000000000 +0000
@@ -1,4 +1,28 @@
# ============================================================
+# If we don't stop the instances they can get stuck "detaching"
+- name: Ensure test instances are stopped
+ ec2_instance:
+ state: stopped
+ instance_ids:
+ - "{{ instance_id_1 }}"
+ - "{{ instance_id_2 }}"
+ wait: True
+
+- name: attach the network interface to instance 1 (check mode)
+ ec2_eni:
+ instance_id: "{{ instance_id_1 }}"
+ device_index: 1
+ private_ip_address: "{{ ip_1 }}"
+ subnet_id: "{{ vpc_subnet_result.subnet.id }}"
+ state: present
+ attached: True
+ check_mode: true
+ register: result_check_mode
+
+- assert:
+ that:
+ - result_check_mode.changed
+
- name: attach the network interface to instance 1
ec2_eni:
instance_id: "{{ instance_id_1 }}"
@@ -75,6 +99,21 @@
_interface_0: '{{ eni_info.network_interfaces[0] }}'
# ============================================================
+- name: test attaching the network interface to a different instance (check mode)
+ ec2_eni:
+ instance_id: "{{ instance_id_2 }}"
+ device_index: 1
+ private_ip_address: "{{ ip_1 }}"
+ subnet_id: "{{ vpc_subnet_result.subnet.id }}"
+ state: present
+ attached: True
+ check_mode: true
+ register: result_check_mode
+
+- assert:
+ that:
+ - result_check_mode.changed
+
- name: test attaching the network interface to a different instance
ec2_eni:
instance_id: "{{ instance_id_2 }}"
@@ -100,6 +139,21 @@
_interface_0: '{{ eni_info.network_interfaces[0] }}'
# ============================================================
+- name: detach the network interface (check mode)
+ ec2_eni:
+ instance_id: "{{ instance_id_2 }}"
+ device_index: 1
+ private_ip_address: "{{ ip_1 }}"
+ subnet_id: "{{ vpc_subnet_result.subnet.id }}"
+ state: present
+ attached: False
+ check_mode: true
+ register: result_check_mode
+
+- assert:
+ that:
+ - result_check_mode.changed
+
- name: detach the network interface
ec2_eni:
instance_id: "{{ instance_id_2 }}"
@@ -166,6 +220,26 @@
- '"currently in use" in result.msg'
# ============================================================
+- name: Ensure test instance is running (will block non-forced detachment)
+ ec2_instance:
+ state: running
+ instance_ids:
+ - "{{ instance_id_2 }}"
+ wait: True
+
+- name: delete an attached network interface with force_detach (check mode)
+ ec2_eni:
+ force_detach: True
+ eni_id: "{{ eni_id_1 }}"
+ state: absent
+ check_mode: true
+ register: result_check_mode
+ ignore_errors: True
+
+- assert:
+ that:
+ - result_check_mode.changed
+
- name: delete an attached network interface with force_detach
ec2_eni:
force_detach: True
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_deletion.yaml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_deletion.yaml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_deletion.yaml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_deletion.yaml 2021-11-12 18:13:53.000000000 +0000
@@ -1,5 +1,18 @@
---
# ============================================================
+- name: test deleting the unattached network interface by using the ID (check mode)
+ ec2_eni:
+ eni_id: "{{ eni_id_1 }}"
+ name: "{{ resource_prefix }}"
+ subnet_id: "{{ vpc_subnet_id }}"
+ state: absent
+ check_mode: True
+ register: result_check_mode
+
+- assert:
+ that:
+ - result_check_mode.changed
+
- name: test deleting the unattached network interface by using the ID
ec2_eni:
eni_id: "{{ eni_id_1 }}"
@@ -16,7 +29,20 @@
- result.changed
- result.interface is undefined
- '"network_interfaces" in eni_info'
- - eni_id_1 not in ( eni_info | community.general.json_query("network_interfaces[].id") | list )
+ - eni_id_1 not in ( eni_info.network_interfaces | selectattr('id') | map(attribute='id') | list )
+
+- name: test removing the network interface by ID is idempotent (check mode)
+ ec2_eni:
+ eni_id: "{{ eni_id_1 }}"
+ name: "{{ resource_prefix }}"
+ subnet_id: "{{ vpc_subnet_id }}"
+ state: absent
+ check_mode: True
+ register: result_check_mode
+
+- assert:
+ that:
+ - not result_check_mode.changed
- name: test removing the network interface by ID is idempotent
ec2_eni:
@@ -53,7 +79,7 @@
- result.changed
- result.interface is undefined
- '"network_interfaces" in eni_info'
- - eni_id_2 not in ( eni_info | community.general.json_query("network_interfaces[].id") | list )
+ - eni_id_2 not in ( eni_info.network_interfaces | selectattr('id') | map(attribute='id') | list )
- name: test removing the network interface by name is idempotent
ec2_eni:
@@ -88,5 +114,5 @@
assert:
that:
- '"network_interfaces" in eni_info'
- - eni_id_1 not in ( eni_info | community.general.json_query("network_interfaces[].id") | list )
- - eni_id_2 not in ( eni_info | community.general.json_query("network_interfaces[].id") | list )
+ - eni_id_1 not in ( eni_info.network_interfaces | selectattr('id') | map(attribute='id') | list )
+ - eni_id_2 not in ( eni_info.network_interfaces | selectattr('id') | map(attribute='id') | list )
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_eni_basic_creation.yaml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_eni_basic_creation.yaml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_eni_basic_creation.yaml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_eni_basic_creation.yaml 2021-11-12 18:13:53.000000000 +0000
@@ -1,5 +1,18 @@
---
# ============================================================
+- name: create a network interface (check mode)
+ ec2_eni:
+ device_index: 1
+ private_ip_address: "{{ ip_1 }}"
+ subnet_id: "{{ vpc_subnet_id }}"
+ state: present
+ check_mode: true
+ register: result_check_mode
+
+- assert:
+ that:
+ - result_check_mode.changed
+
- name: create a network interface
ec2_eni:
device_index: 1
@@ -57,11 +70,11 @@
- _interface_0.private_dns_name is string
- _interface_0.private_dns_name.endswith("ec2.internal")
- '"private_ip_address" in _interface_0'
- - _interface_0.private_ip_address | ipaddr()
+ - _interface_0.private_ip_address | ansible.netcommon.ipaddr
- _interface_0.private_ip_address == ip_1
- '"private_ip_addresses" in _interface_0'
- _interface_0.private_ip_addresses | length == 1
- - ip_1 in ( eni_info | community.general.json_query("network_interfaces[].private_ip_addresses[].private_ip_address") | list )
+ - ip_1 in ( eni_info.network_interfaces | map(attribute='private_ip_addresses') | flatten | map(attribute='private_ip_address') | list )
- '"requester_id" in _interface_0'
- _interface_0.requester_id is string
- '"requester_managed" in _interface_0'
@@ -77,6 +90,19 @@
- '"vpc_id" in _interface_0'
- _interface_0.vpc_id == vpc_id
+- name: test idempotence by using the same private_ip_address (check mode)
+ ec2_eni:
+ device_index: 1
+ private_ip_address: "{{ ip_1 }}"
+ subnet_id: "{{ vpc_subnet_id }}"
+ state: present
+ check_mode: true
+ register: result_check_mode
+
+- assert:
+ that:
+ - not result_check_mode.changed
+
- name: test idempotence by using the same private_ip_address
ec2_eni:
device_index: 1
@@ -152,11 +178,11 @@
- _interface_0.private_dns_name is string
- _interface_0.private_dns_name.endswith("ec2.internal")
- '"private_ip_address" in _interface_0'
- - _interface_0.private_ip_address | ipaddr()
+ - _interface_0.private_ip_address | ansible.netcommon.ipaddr
- _interface_0.private_ip_address == ip_5
- '"private_ip_addresses" in _interface_0'
- _interface_0.private_ip_addresses | length == 1
- - ip_5 in ( eni_info | community.general.json_query("network_interfaces[].private_ip_addresses[].private_ip_address") | list )
+ - ip_5 in ( eni_info.network_interfaces | map(attribute='private_ip_addresses') | flatten | map(attribute='private_ip_address') | list )
- '"requester_id" in _interface_0'
- _interface_0.requester_id is string
- '"requester_managed" in _interface_0'
@@ -181,8 +207,8 @@
that:
- '"network_interfaces" in eni_info'
- eni_info.network_interfaces | length >= 2
- - eni_id_1 in ( eni_info | community.general.json_query("network_interfaces[].id") | list )
- - eni_id_2 in ( eni_info | community.general.json_query("network_interfaces[].id") | list )
+ - eni_id_1 in ( eni_info.network_interfaces | selectattr('id') | map(attribute='id') | list )
+ - eni_id_2 in ( eni_info.network_interfaces | selectattr('id') | map(attribute='id') | list )
# ============================================================
# Run some VPC filter based tests of ec2_eni_info
@@ -199,8 +225,8 @@
that:
- '"network_interfaces" in eni_info'
- eni_info.network_interfaces | length == 2
- - eni_id_1 in ( eni_info | community.general.json_query("network_interfaces[].id") | list )
- - eni_id_2 in ( eni_info | community.general.json_query("network_interfaces[].id") | list )
+ - eni_id_1 in ( eni_info.network_interfaces | selectattr('id') | map(attribute='id') | list )
+ - eni_id_2 in ( eni_info.network_interfaces | selectattr('id') | map(attribute='id') | list )
- name: Fetch ENI info with VPC filters - VPC
ec2_eni_info:
@@ -213,7 +239,25 @@
that:
- '"network_interfaces" in eni_info'
- eni_info.network_interfaces | length == 4
- - eni_id_1 in ( eni_info | community.general.json_query("network_interfaces[].id") | list )
- - eni_id_2 in ( eni_info | community.general.json_query("network_interfaces[].id") | list )
- - ec2_ips[0] in ( eni_info | community.general.json_query("network_interfaces[].private_ip_addresses[].private_ip_address") | list )
- - ec2_ips[1] in ( eni_info | community.general.json_query("network_interfaces[].private_ip_addresses[].private_ip_address") | list )
+ - eni_id_1 in ( eni_info.network_interfaces | selectattr('id') | map(attribute='id') | list )
+ - eni_id_2 in ( eni_info.network_interfaces | selectattr('id') | map(attribute='id') | list )
+ - ec2_ips[0] in ( eni_info.network_interfaces | map(attribute='private_ip_addresses') | flatten | map(attribute='private_ip_address') | list )
+ - ec2_ips[1] in ( eni_info.network_interfaces | map(attribute='private_ip_addresses') | flatten | map(attribute='private_ip_address') | list )
+
+
+# =========================================================
+
+- name: create another network interface without private_ip_address
+ ec2_eni:
+ device_index: 1
+ subnet_id: "{{ vpc_subnet_id }}"
+ state: present
+ register: result_no_private_ip
+
+- assert:
+ that:
+ - result_no_private_ip.changed
+
+- name: save the third network interface ID for cleanup
+ set_fact:
+ eni_id_3: "{{ result_no_private_ip.interface.id }}"
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_ipaddress_assign.yaml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_ipaddress_assign.yaml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_ipaddress_assign.yaml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_ipaddress_assign.yaml 2021-11-12 18:13:53.000000000 +0000
@@ -1,5 +1,19 @@
---
# ============================================================
+- name: add two implicit secondary IPs (check mode)
+ ec2_eni:
+ device_index: 1
+ private_ip_address: "{{ ip_1 }}"
+ subnet_id: "{{ vpc_subnet_id }}"
+ state: present
+ secondary_private_ip_address_count: 2
+ check_mode: true
+ register: result_check_mode
+
+- assert:
+ that:
+ - result_check_mode.changed
+
- name: add two implicit secondary IPs
ec2_eni:
device_index: 1
@@ -18,10 +32,24 @@
- result.interface.id == eni_id_1
- result.interface.private_ip_addresses | length == 3
- _interface_0.private_ip_addresses | length == 3
- - ip_1 in ( eni_info | community.general.json_query("network_interfaces[].private_ip_addresses[].private_ip_address") | list )
+ - ip_1 in ( eni_info.network_interfaces | map(attribute='private_ip_addresses') | flatten | map(attribute='private_ip_address') | list )
vars:
_interface_0: '{{ eni_info.network_interfaces[0] }}'
+- name: test idempotence with two implicit secondary IPs (check mode)
+ ec2_eni:
+ device_index: 1
+ private_ip_address: "{{ ip_1 }}"
+ subnet_id: "{{ vpc_subnet_id }}"
+ state: present
+ secondary_private_ip_address_count: 2
+ check_mode: true
+ register: result_check_mode
+
+- assert:
+ that:
+ - not result_check_mode.changed
+
- name: test idempotence with two implicit secondary IPs
ec2_eni:
device_index: 1
@@ -40,7 +68,7 @@
- result.interface.id == eni_id_1
- result.interface.private_ip_addresses | length == 3
- _interface_0.private_ip_addresses | length == 3
- - ip_1 in ( eni_info | community.general.json_query("network_interfaces[].private_ip_addresses[].private_ip_address") | list )
+ - ip_1 in ( eni_info.network_interfaces | map(attribute='private_ip_addresses') | flatten | map(attribute='private_ip_address') | list )
vars:
_interface_0: '{{ eni_info.network_interfaces[0] }}'
@@ -64,7 +92,7 @@
- result.interface.id == eni_id_1
- result.interface.private_ip_addresses | length == 3
- _interface_0.private_ip_addresses | length == 3
- - ip_1 in ( eni_info | community.general.json_query("network_interfaces[].private_ip_addresses[].private_ip_address") | list )
+ - ip_1 in ( eni_info.network_interfaces | map(attribute='private_ip_addresses') | flatten | map(attribute='private_ip_address') | list )
vars:
_interface_0: '{{ eni_info.network_interfaces[0] }}'
@@ -106,9 +134,24 @@
- new_secondary_ip in _private_ips
vars:
_interface_0: '{{ eni_info.network_interfaces[0] }}'
- _private_ips: '{{ eni_info | community.general.json_query("network_interfaces[].private_ip_addresses[].private_ip_address") | list }}'
+ _private_ips: "{{ eni_info.network_interfaces | map(attribute='private_ip_addresses') | flatten | map(attribute='private_ip_address') | list }}"
# ============================================================
+- name: remove secondary address (check mode)
+ ec2_eni:
+ purge_secondary_private_ip_addresses: true
+ device_index: 1
+ private_ip_address: "{{ ip_1 }}"
+ subnet_id: "{{ vpc_subnet_id }}"
+ state: present
+ secondary_private_ip_addresses: []
+ check_mode: true
+ register: result_check_mode
+
+- assert:
+ that:
+ - result_check_mode.changed
+
- name: remove secondary address
ec2_eni:
purge_secondary_private_ip_addresses: true
@@ -128,10 +171,25 @@
- result.interface.id == eni_id_1
- result.interface.private_ip_addresses | length == 1
- _interface_0.private_ip_addresses | length == 1
- - ip_1 in ( eni_info | community.general.json_query("network_interfaces[].private_ip_addresses[].private_ip_address") | list )
+ - ip_1 in ( eni_info.network_interfaces | map(attribute='private_ip_addresses') | flatten | map(attribute='private_ip_address') | list )
vars:
_interface_0: '{{ eni_info.network_interfaces[0] }}'
+- name: test idempotent behavior purging secondary addresses (check mode)
+ ec2_eni:
+ purge_secondary_private_ip_addresses: true
+ device_index: 1
+ private_ip_address: "{{ ip_1 }}"
+ subnet_id: "{{ vpc_subnet_id }}"
+ state: present
+ secondary_private_ip_addresses: []
+ check_mode: true
+ register: result_check_mode
+
+- assert:
+ that:
+ - not result_check_mode.changed
+
- name: test idempotent behavior purging secondary addresses
ec2_eni:
purge_secondary_private_ip_addresses: true
@@ -152,7 +210,7 @@
- result.interface.private_ip_addresses | length == 1
- result.interface.private_ip_addresses | length == 1
- _interface_0.private_ip_addresses | length == 1
- - ip_1 in ( eni_info | community.general.json_query("network_interfaces[].private_ip_addresses[].private_ip_address") | list )
+ - ip_1 in ( eni_info.network_interfaces | map(attribute='private_ip_addresses') | flatten | map(attribute='private_ip_address') | list )
vars:
_interface_0: '{{ eni_info.network_interfaces[0] }}'
@@ -177,8 +235,8 @@
- result.interface.id == eni_id_2
- result.interface.private_ip_addresses | length == 2
- _interface_0.private_ip_addresses | length == 2
- - ip_5 in ( eni_info | community.general.json_query("network_interfaces[].private_ip_addresses[].private_ip_address") | list )
- - ip_4 in ( eni_info | community.general.json_query("network_interfaces[].private_ip_addresses[].private_ip_address") | list )
+ - ip_5 in ( eni_info.network_interfaces | map(attribute='private_ip_addresses') | flatten | map(attribute='private_ip_address') | list )
+ - ip_4 in ( eni_info.network_interfaces | map(attribute='private_ip_addresses') | flatten | map(attribute='private_ip_address') | list )
vars:
_interface_0: '{{ eni_info.network_interfaces[0] }}'
@@ -262,6 +320,6 @@
that:
- result.changed
- _interface_0.private_ip_addresses | length == 1
- - ip_1 in ( eni_info | community.general.json_query("network_interfaces[].private_ip_addresses[].private_ip_address") | list )
+ - ip_1 in ( eni_info.network_interfaces | map(attribute='private_ip_addresses') | flatten | map(attribute='private_ip_address') | list )
vars:
_interface_0: '{{ eni_info.network_interfaces[0] }}'
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_modifying_delete_on_termination.yaml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_modifying_delete_on_termination.yaml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_modifying_delete_on_termination.yaml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_modifying_delete_on_termination.yaml 2021-11-12 18:13:53.000000000 +0000
@@ -23,6 +23,22 @@
# ============================================================
+- name: enable delete_on_termination (check mode)
+ ec2_eni:
+ instance_id: "{{ instance_id_2 }}"
+ device_index: 1
+ private_ip_address: "{{ ip_1 }}"
+ subnet_id: "{{ vpc_subnet_result.subnet.id }}"
+ state: present
+ attached: True
+ delete_on_termination: True
+ check_mode: true
+ register: result_check_mode
+
+- assert:
+ that:
+ - result_check_mode.changed
+
- name: enable delete_on_termination
ec2_eni:
instance_id: "{{ instance_id_2 }}"
@@ -45,6 +61,22 @@
vars:
_interface_0: '{{ eni_info.network_interfaces[0] }}'
+- name: test idempotent behavior enabling delete_on_termination (check mode)
+ ec2_eni:
+ instance_id: "{{ instance_id_2 }}"
+ device_index: 1
+ private_ip_address: "{{ ip_1 }}"
+ subnet_id: "{{ vpc_subnet_result.subnet.id }}"
+ state: present
+ attached: True
+ delete_on_termination: True
+ check_mode: true
+ register: result_check_mode
+
+- assert:
+ that:
+ - not result_check_mode.changed
+
- name: test idempotent behavior enabling delete_on_termination
ec2_eni:
instance_id: "{{ instance_id_2 }}"
@@ -63,6 +95,22 @@
# ============================================================
+- name: disable delete_on_termination (check mode)
+ ec2_eni:
+ instance_id: "{{ instance_id_2 }}"
+ device_index: 1
+ private_ip_address: "{{ ip_1 }}"
+ subnet_id: "{{ vpc_subnet_result.subnet.id }}"
+ state: present
+ attached: True
+ delete_on_termination: False
+ check_mode: true
+ register: result_check_mode
+
+- assert:
+ that:
+ - result_check_mode.changed
+
- name: disable delete_on_termination
ec2_eni:
instance_id: "{{ instance_id_2 }}"
@@ -149,8 +197,8 @@
- not result.changed
- '"network_interfaces" in eni_info'
- eni_info.network_interfaces | length >= 1
- - eni_id_1 not in ( eni_info | community.general.json_query("network_interfaces[].id") | list )
- - eni_id_2 in ( eni_info | community.general.json_query("network_interfaces[].id") | list )
+ - eni_id_1 not in ( eni_info.network_interfaces | selectattr('id') | map(attribute='id') | list )
+ - eni_id_2 in ( eni_info.network_interfaces | selectattr('id') | map(attribute='id') | list )
# ============================================================
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_modifying_source_dest_check.yaml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_modifying_source_dest_check.yaml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_modifying_source_dest_check.yaml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_modifying_source_dest_check.yaml 2021-11-12 18:13:53.000000000 +0000
@@ -1,4 +1,16 @@
# ============================================================
+- name: test source_dest_check defaults to true (check mode)
+ ec2_eni:
+ eni_id: "{{ eni_id_1 }}"
+ source_dest_check: true
+ state: present
+ check_mode: true
+ register: result_check_mode
+
+- assert:
+ that:
+ - not result_check_mode.changed
+
- name: test source_dest_check defaults to true
ec2_eni:
eni_id: "{{ eni_id_1 }}"
@@ -36,6 +48,18 @@
vars:
_interface_0: '{{ eni_info.network_interfaces[0] }}'
+- name: test idempotence disabling source_dest_check (check mode)
+ ec2_eni:
+ eni_id: "{{ eni_id_1 }}"
+ source_dest_check: false
+ state: present
+ check_mode: true
+ register: result_check_mode
+
+- assert:
+ that:
+ - not result_check_mode.changed
+
- name: test idempotence disabling source_dest_check
ec2_eni:
eni_id: "{{ eni_id_1 }}"
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_modifying_tags.yaml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_modifying_tags.yaml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_modifying_tags.yaml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_eni/tasks/test_modifying_tags.yaml 2021-11-12 18:13:53.000000000 +0000
@@ -13,6 +13,20 @@
- result.interface.name is undefined
# ============================================================
+- name: add tags to the network interface (check mode)
+ ec2_eni:
+ eni_id: "{{ eni_id_1 }}"
+ state: present
+ name: "{{ resource_prefix }}"
+ tags:
+ CreatedBy: "{{ resource_prefix }}"
+ check_mode: true
+ register: result_check_mode
+
+- assert:
+ that:
+ - result_check_mode.changed
+
- name: add tags to the network interface
ec2_eni:
eni_id: "{{ eni_id_1 }}"
@@ -44,6 +58,18 @@
_interface_0: '{{ eni_info.network_interfaces[0] }}'
# ============================================================
+- name: test idempotence by using the Name tag and the subnet (check mode)
+ ec2_eni:
+ name: "{{ resource_prefix }}"
+ state: present
+ subnet_id: "{{ vpc_subnet_result.subnet.id }}"
+ check_mode: true
+ register: result_check_mode
+
+- assert:
+ that:
+ - not result_check_mode.changed
+
- name: test idempotence by using the Name tag and the subnet
ec2_eni:
name: "{{ resource_prefix }}"
@@ -57,6 +83,18 @@
- result.interface.id == eni_id_1
# ============================================================
+- name: test tags are not purged if tags are null even if name is provided (check mode)
+ ec2_eni:
+ name: "{{ resource_prefix }}"
+ state: present
+ subnet_id: "{{ vpc_subnet_result.subnet.id }}"
+ check_mode: true
+ register: result_check_mode
+
+- assert:
+ that:
+ - not result_check_mode.changed
+
- name: test tags are not purged if tags are null even if name is provided
ec2_eni:
name: "{{ resource_prefix }}"
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_group/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_group/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_group/aliases 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_group/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -1,3 +1,9 @@
+# reason: broken
+# Tests frequently failing
+# https://github.com/ansible-collections/amazon.aws/issues/440
+disabled
+
+slow
+
cloud/aws
-shippable/aws/group2
ec2_group_info
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_group/tasks/credential_tests.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_group/tasks/credential_tests.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_group/tasks/credential_tests.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_group/tasks/credential_tests.yml 1970-01-01 00:00:00.000000000 +0000
@@ -1,161 +0,0 @@
----
-# A Note about ec2 environment variable name preference:
-# - EC2_URL -> AWS_URL
-# - EC2_ACCESS_KEY -> AWS_ACCESS_KEY_ID -> AWS_ACCESS_KEY
-# - EC2_SECRET_KEY -> AWS_SECRET_ACCESS_KEY -> AWX_SECRET_KEY
-# - EC2_REGION -> AWS_REGION
-#
-
-# - include: ../../setup_ec2/tasks/common.yml module_name: ec2_group
-
-- block:
- # ============================================================
- - name: test failure with no parameters
- ec2_group:
- register: result
- ignore_errors: true
-
- - name: assert failure with no parameters
- assert:
- that:
- - 'result.failed'
- - 'result.msg == "one of the following is required: name, group_id"'
-
- # ============================================================
- - name: test failure with only name
- ec2_group:
- name: '{{ec2_group_name}}'
- register: result
- ignore_errors: true
-
- - name: assert failure with only name
- assert:
- that:
- - 'result.failed'
- - 'result.msg == "Must provide description when state is present."'
-
- # ============================================================
- - name: test failure with only description
- ec2_group:
- description: '{{ec2_group_description}}'
- register: result
- ignore_errors: true
-
- - name: assert failure with only description
- assert:
- that:
- - 'result.failed'
- - 'result.msg == "one of the following is required: name, group_id"'
-
- # ============================================================
- - name: test failure with empty description (AWS API requires non-empty string desc)
- ec2_group:
- name: '{{ec2_group_name}}'
- description: ''
- region: '{{ec2_region}}'
- register: result
- ignore_errors: true
-
- - name: assert failure with empty description
- assert:
- that:
- - 'result.failed'
- - 'result.msg == "Must provide description when state is present."'
-
- # ============================================================
- - name: test valid region parameter
- ec2_group:
- name: '{{ec2_group_name}}'
- description: '{{ec2_group_description}}'
- region: '{{ec2_region}}'
- register: result
- ignore_errors: true
-
- - name: assert valid region parameter
- assert:
- that:
- - 'result.failed'
- - '"Unable to locate credentials" in result.msg'
-
- # ============================================================
- - name: test environment variable EC2_REGION
- ec2_group:
- name: '{{ec2_group_name}}'
- description: '{{ec2_group_description}}'
- environment:
- EC2_REGION: '{{ec2_region}}'
- register: result
- ignore_errors: true
-
- - name: assert environment variable EC2_REGION
- assert:
- that:
- - 'result.failed'
- - '"Unable to locate credentials" in result.msg'
-
- # ============================================================
- - name: test invalid ec2_url parameter
- ec2_group:
- name: '{{ec2_group_name}}'
- description: '{{ec2_group_description}}'
- environment:
- EC2_URL: bogus.example.com
- register: result
- ignore_errors: true
-
- - name: assert invalid ec2_url parameter
- assert:
- that:
- - 'result.failed'
- - 'result.msg.startswith("The ec2_group module requires a region")'
-
- # ============================================================
- - name: test valid ec2_url parameter
- ec2_group:
- name: '{{ec2_group_name}}'
- description: '{{ec2_group_description}}'
- environment:
- EC2_URL: '{{ec2_url}}'
- register: result
- ignore_errors: true
-
- - name: assert valid ec2_url parameter
- assert:
- that:
- - 'result.failed'
- - 'result.msg.startswith("The ec2_group module requires a region")'
-
- # ============================================================
- - name: test credentials from environment
- ec2_group:
- name: '{{ec2_group_name}}'
- description: '{{ec2_group_description}}'
- environment:
- EC2_REGION: '{{ec2_region}}'
- EC2_ACCESS_KEY: bogus_access_key
- EC2_SECRET_KEY: bogus_secret_key
- register: result
- ignore_errors: true
-
- - name: assert ec2_group with valid ec2_url
- assert:
- that:
- - 'result.failed'
- - '"validate the provided access credentials" in result.msg'
-
- # ============================================================
- - name: test credential parameters
- ec2_group:
- name: '{{ec2_group_name}}'
- description: '{{ec2_group_description}}'
- ec2_region: '{{ec2_region}}'
- ec2_access_key: 'bogus_access_key'
- ec2_secret_key: 'bogus_secret_key'
- register: result
- ignore_errors: true
-
- - name: assert credential parameters
- assert:
- that:
- - 'result.failed'
- - '"validate the provided access credentials" in result.msg'
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_group/tasks/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_group/tasks/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_group/tasks/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_group/tasks/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -1,6 +1,4 @@
---
-# Runs a set of tets without the AWS connection credentials configured
-- include: ./credential_tests.yml
- set_fact:
aws_security_token: '{{ security_token | default("") }}'
no_log: True
@@ -1065,19 +1063,9 @@
register: result
- name: assert that rule descriptions are created (expected changed=true)
- # Only assert this if rule description is defined as the botocore version may < 1.7.2.
- # It's still helpful to have these tests run on older versions since it verifies backwards
- # compatibility with this feature.
assert:
that:
- 'result.changed'
- when: result.ip_permissions_egress[0].ip_ranges[0].description is defined
-
- - name: if an older version of botocore is installed changes should still have changed due to purged rules (expected changed=true)
- assert:
- that:
- - 'result.changed'
- when: result.ip_permissions_egress[0].ip_ranges[0].description is undefined
# =========================================================================================
- name: add rules without descriptions ready for adding descriptions to existing rules
@@ -1126,21 +1114,11 @@
register: result
- name: assert that rule descriptions are created (expected changed=true)
- # Only assert this if rule description is defined as the botocore version may < 1.7.2.
- # It's still helpful to have these tests run on older versions since it verifies backwards
- # compatibility with this feature.
assert:
that:
- 'result.changed'
- 'result.ip_permissions[0].ipv6_ranges[0].description == "ipv6 rule desc 1"'
- 'result.ip_permissions_egress[0].ip_ranges[0].description == "egress rule desc 1"'
- when: result.ip_permissions_egress[0].ip_ranges[0].description is defined
-
- - name: if an older version of botocore is installed changes should still have changed due to purged rules (expected changed=true)
- assert:
- that:
- - 'result.changed'
- when: result.ip_permissions_egress[0].ip_ranges[0].description is undefined
# ============================================================
- name: test modifying rule and egress rule descriptions (expected changed=true) (CHECK MODE)
@@ -1167,20 +1145,10 @@
register: result
- name: assert that rule descriptions were modified (expected changed=true)
- # Only assert this if rule description is defined as the botocore version may < 1.7.2.
- # It's still helpful to have these tests run on older versions since it verifies backwards
- # compatibility with this feature.
assert:
that:
- 'result.ip_permissions | length > 0'
- 'result.changed'
- when: result.ip_permissions_egress[0].ip_ranges[0].description is defined
-
- - name: if an older version of botocore is installed everything should stay the same (expected changed=false)
- assert:
- that:
- - 'not result.changed'
- when: result.ip_permissions_egress[0].ip_ranges[0].description is undefined and result.ip_permissions_egress[1].ip_ranges[0].description is undefined
# ============================================================
- name: test modifying rule and egress rule descriptions (expected changed=true)
@@ -1206,21 +1174,11 @@
register: result
- name: assert that rule descriptions were modified (expected changed=true)
- # Only assert this if rule description is defined as the botocore version may < 1.7.2.
- # It's still helpful to have these tests run on older versions since it verifies backwards
- # compatibility with this feature.
assert:
that:
- 'result.changed'
- 'result.ip_permissions[0].ipv6_ranges[0].description == "ipv6 rule desc 2"'
- 'result.ip_permissions_egress[0].ip_ranges[0].description == "egress rule desc 2"'
- when: result.ip_permissions_egress[0].ip_ranges[0].description is defined
-
- - name: if an older version of botocore is installed everything should stay the same (expected changed=false)
- assert:
- that:
- - 'not result.changed'
- when: result.ip_permissions_egress[0].ip_ranges[0].description is undefined
# ============================================================
@@ -1245,9 +1203,6 @@
register: result
- name: assert that rule descriptions were modified (expected changed=true)
- # Only assert this if rule description is defined as the botocore version may < 1.7.2.
- # It's still helpful to have these tests run on older versions since it verifies backwards
- # compatibility with this feature.
assert:
that:
- 'result.changed'
@@ -1278,19 +1233,9 @@
register: result
- name: assert that rule descriptions stayed the same (expected changed=false)
- # Only assert this if rule description is defined as the botocore version may < 1.7.2.
- # It's still helpful to have these tests run on older versions since it verifies backwards
- # compatibility with this feature.
assert:
that:
- 'not result.changed'
- when: result.ip_permissions_egress[0].ip_ranges[0].description is defined
-
- - name: if an older version of botocore is installed everything should stay the same (expected changed=false)
- assert:
- that:
- - 'not result.changed'
- when: result.ip_permissions_egress[0].ip_ranges[0].description is undefined
# ============================================================
- name: test that keeping the same rule descriptions (expected changed=false)
@@ -1316,21 +1261,11 @@
register: result
- name: assert that rule descriptions stayed the same (expected changed=false)
- # Only assert this if rule description is defined as the botocore version may < 1.7.2.
- # It's still helpful to have these tests run on older versions since it verifies backwards
- # compatibility with this feature.
assert:
that:
- 'not result.changed'
- 'result.ip_permissions[0].ipv6_ranges[0].description == "ipv6 rule desc 2"'
- 'result.ip_permissions_egress[0].ip_ranges[0].description == "egress rule desc 2"'
- when: result.ip_permissions_egress[0].ip_ranges[0].description is defined
-
- - name: if an older version of botocore is installed everything should stay the same (expected changed=false)
- assert:
- that:
- - 'not result.changed'
- when: result.ip_permissions_egress[0].ip_ranges[0].description is undefined
# ============================================================
- name: test removing rule descriptions (expected changed=true) (CHECK MODE)
@@ -1357,19 +1292,9 @@
register: result
- name: assert that rule descriptions were removed (expected changed=true)
- # Only assert this if rule description is defined as the botocore version may < 1.7.2.
- # It's still helpful to have these tests run on older versions since it verifies backwards
- # compatibility with this feature.
assert:
that:
- 'result.changed'
- when: result.ip_permissions_egress[0].ip_ranges[0].description is defined
-
- - name: if an older version of botocore is installed everything should stay the same (expected changed=false)
- assert:
- that:
- - 'not result.changed'
- when: result.ip_permissions_egress[0].ip_ranges[0].description is undefined
# ============================================================
- name: test removing rule descriptions (expected changed=true)
@@ -1395,21 +1320,11 @@
register: result
ignore_errors: true
- - name: assert that rule descriptions were removed (expected changed=true with newer botocore)
- # Only assert this if rule description is defined as the botocore version may < 1.7.2.
- # It's still helpful to have these tests run on older versions since it verifies backwards
- # compatibility with this feature.
+ - name: assert that rule descriptions were removed
assert:
that:
- 'result.ip_permissions[0].ipv6_ranges[0].description is undefined'
- 'result.ip_permissions_egress[0].ip_ranges[0].description is undefined'
- when: result is changed
-
- - name: if an older version of botocore is installed everything should stay the same (expected changed=false)
- assert:
- that:
- - 'not result.changed'
- when: result.failed
# ============================================================
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/aliases 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,5 @@
+# duration: 25
+slow
+
+cloud/aws
+ec2_instance_info
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/inventory ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/inventory
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/inventory 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/inventory 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,19 @@
+[tests]
+instance_minimal
+checkmode_tests
+termination_protection
+ebs_optimized
+block_devices
+cpu_options
+metadata_options
+default_vpc_tests
+external_resource_attach
+instance_no_wait
+iam_instance_role
+tags_and_vpc_settings
+security_group
+state_config_updates
+
+[all:vars]
+ansible_connection=local
+ansible_python_interpreter="{{ ansible_playbook_python }}"
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,40 @@
+---
+# Beware: most of our tests here are run in parallel.
+# To add new tests you'll need to add a new host to the inventory and a matching
+# '{{ inventory_hostname }}'.yml file in roles/ec2_instance/tasks/
+
+
+# Prepare the VPC and figure out which AMI to use
+- hosts: all
+ gather_facts: no
+ tasks:
+ - module_defaults:
+ group/aws:
+ aws_access_key: "{{ aws_access_key }}"
+ aws_secret_key: "{{ aws_secret_key }}"
+ security_token: "{{ security_token | default(omit) }}"
+ region: "{{ aws_region }}"
+ vars:
+ # We can't just use "run_once" because the facts don't propagate when
+ # running an 'include' that was run_once
+ setup_run_once: yes
+ block:
+ - include_role:
+ name: 'ec2_instance'
+ tasks_from: env_setup.yml
+ rescue:
+ - include_role:
+ name: 'ec2_instance'
+ tasks_from: env_cleanup.yml
+ run_once: yes
+ - fail:
+ msg: 'Environment preparation failed'
+ run_once: yes
+
+# VPC should get cleaned up once all hosts have run
+- hosts: all
+ gather_facts: no
+ strategy: free
+ serial: 5
+ roles:
+ - ec2_instance
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/meta/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/meta/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/meta/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/meta/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,5 @@
+# this just makes sure they're in the right place
+dependencies:
+- role: setup_botocore_pip
+- role: prepare_tests
+- role: setup_ec2_facts
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/defaults/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/defaults/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/defaults/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/defaults/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,21 @@
+---
+# defaults file for ec2_instance
+ec2_instance_owner: 'integration-run-{{ resource_prefix }}'
+ec2_instance_type: 't3.micro'
+ec2_instance_tag_TestId: '{{ resource_prefix }}-{{ inventory_hostname }}'
+
+vpc_name: '{{ resource_prefix }}-vpc'
+vpc_seed: '{{ resource_prefix }}'
+vpc_cidr: '10.{{ 256 | random(seed=vpc_seed) }}.0.0/16'
+
+subnet_a_az: '{{ ec2_availability_zone_names[0] }}'
+subnet_a_cidr: '10.{{ 256 | random(seed=vpc_seed) }}.32.0/24'
+subnet_a_startswith: '10.{{ 256 | random(seed=vpc_seed) }}.32.'
+subnet_b_az: '{{ ec2_availability_zone_names[1] }}'
+subnet_b_cidr: '10.{{ 256 | random(seed=vpc_seed) }}.33.0/24'
+subnet_b_startswith: '10.{{ 256 | random(seed=vpc_seed) }}.33.'
+
+first_iam_role: "ansible-test-sts-{{ resource_prefix | hash('md5') }}-test-policy"
+second_iam_role: "ansible-test-sts-{{ resource_prefix | hash('md5') }}-test-policy-2"
+# Zuul resource prefixes are very long, and IAM roles can only be 64 characters
+unique_id: "{{ resource_prefix | hash('md5') }}"
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/files/assume-role-policy.json ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/files/assume-role-policy.json
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/files/assume-role-policy.json 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/files/assume-role-policy.json 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,13 @@
+{
+ "Version": "2008-10-17",
+ "Statement": [
+ {
+ "Sid": "",
+ "Effect": "Allow",
+ "Principal": {
+ "Service": "ec2.amazonaws.com"
+ },
+ "Action": "sts:AssumeRole"
+ }
+ ]
+}
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/meta/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/meta/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/meta/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/meta/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,5 @@
+dependencies:
+- role: prepare_tests
+- role: setup_ec2_facts
+collections:
+- amazon.aws
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/block_devices.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/block_devices.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/block_devices.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/block_devices.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,121 @@
+- block:
+ - include_role:
+ name: setup_botocore_pip
+ vars:
+ botocore_version: '1.19.27'
+
+ - name: "New instance with an extra block device"
+ ec2_instance:
+ state: running
+ name: "{{ resource_prefix }}-test-ebs-vols"
+ image_id: "{{ ec2_ami_id }}"
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ volumes:
+ - device_name: /dev/sdb
+ ebs:
+ volume_size: 20
+ delete_on_termination: true
+ volume_type: standard
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ instance_type: "{{ ec2_instance_type }}"
+ wait: true
+ register: block_device_instances
+
+ - name: "Gather instance info"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-test-ebs-vols"
+ register: block_device_instances_info
+
+ - assert:
+ that:
+ - block_device_instances is not failed
+ - block_device_instances is changed
+ - block_device_instances_info.instances[0].block_device_mappings[0]
+ - block_device_instances_info.instances[0].block_device_mappings[1]
+ - block_device_instances_info.instances[0].block_device_mappings[1].device_name == '/dev/sdb'
+
+ - name: "New instance with an extra block device (check mode)"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-ebs-vols-checkmode"
+ image_id: "{{ ec2_ami_id }}"
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ volumes:
+ - device_name: /dev/sdb
+ ebs:
+ volume_size: 20
+ delete_on_termination: true
+ volume_type: standard
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ instance_type: "{{ ec2_instance_type }}"
+ check_mode: yes
+
+ - name: "fact presented ec2 instance"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-test-ebs-vols"
+ register: presented_instance_fact
+
+ - name: "fact checkmode ec2 instance"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-test-ebs-vols-checkmode"
+ register: checkmode_instance_fact
+
+ - name: "Confirm instance was created without check mode"
+ assert:
+ that:
+ - "{{ presented_instance_fact.instances | length }} > 0"
+
+ - name: "Confirm instance was not created with check mode"
+ assert:
+ that:
+ - "{{ checkmode_instance_fact.instances | length }} == 0"
+
+ - name: "Terminate instances"
+ ec2_instance:
+ state: absent
+ instance_ids: "{{ block_device_instances.instance_ids }}"
+
+ - name: "New instance with an extra block device - gp3 volume_type and throughput"
+ ec2_instance:
+ state: running
+ name: "{{ resource_prefix }}-test-ebs-vols-gp3"
+ image_id: "{{ ec2_ami_id }}"
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ volumes:
+ - device_name: /dev/sdb
+ ebs:
+ volume_size: 20
+ delete_on_termination: true
+ volume_type: gp3
+ throughput: 500
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ instance_type: "{{ ec2_instance_type }}"
+ wait: true
+ register: block_device_instances_gp3
+ # Managing throughput requires botocore >= 1.19.27
+ vars:
+ ansible_python_interpreter: "{{ botocore_virtualenv_interpreter }}"
+
+ - assert:
+ that:
+ - block_device_instances_gp3 is not failed
+ - block_device_instances_gp3 is changed
+ - block_device_instances_gp3.spec.BlockDeviceMappings[0].DeviceName == '/dev/sdb'
+ - block_device_instances_gp3.spec.BlockDeviceMappings[0].Ebs.VolumeType == 'gp3'
+ - block_device_instances_gp3.spec.BlockDeviceMappings[0].Ebs.VolumeSize == 20
+ - block_device_instances_gp3.spec.BlockDeviceMappings[0].Ebs.Throughput == 500
+
+ always:
+ - name: "Terminate block_devices instances"
+ ec2_instance:
+ state: absent
+ filters:
+ "tag:TestId": "{{ ec2_instance_tag_TestId }}"
+ wait: yes
+ ignore_errors: yes
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/checkmode_tests.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/checkmode_tests.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/checkmode_tests.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/checkmode_tests.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,202 @@
+- block:
+ - name: "Make basic instance"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-checkmode-comparison"
+ image_id: "{{ ec2_ami_id }}"
+ security_groups: "{{ sg.group_id }}"
+ instance_type: "{{ ec2_instance_type }}"
+ vpc_subnet_id: "{{ testing_subnet_a.subnet.id }}"
+ wait: false
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ TestTag: "Some Value"
+ register: basic_instance
+
+ - name: "Make basic instance (check mode)"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-checkmode-comparison-checkmode"
+ image_id: "{{ ec2_ami_id }}"
+ security_groups: "{{ sg.group_id }}"
+ instance_type: "{{ ec2_instance_type }}"
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ TestTag: "Some Value"
+ check_mode: yes
+
+ - name: "fact presented ec2 instance"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-checkmode-comparison"
+ register: presented_instance_fact
+
+ - name: "fact checkmode ec2 instance"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-checkmode-comparison-checkmode"
+ register: checkmode_instance_fact
+
+ - name: "Confirm that check mode did not create an instance."
+ assert:
+ that:
+ - "{{ presented_instance_fact.instances | length }} > 0"
+ - "{{ checkmode_instance_fact.instances | length }} == 0"
+
+ - name: "Stop instance (check mode)"
+ ec2_instance:
+ state: stopped
+ name: "{{ resource_prefix }}-checkmode-comparison"
+ vpc_subnet_id: "{{ testing_subnet_a.subnet.id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ TestTag: "Some Value"
+ check_mode: yes
+
+ - name: "fact ec2 instance"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-checkmode-comparison"
+ register: confirm_checkmode_stopinstance_fact
+
+ - name: "Verify that it was not stopped."
+ assert:
+ that:
+ - confirm_checkmode_stopinstance_fact.instances[0].state.name not in ["stopped", "stopping"]
+
+ - name: "Stop instance."
+ ec2_instance:
+ state: stopped
+ name: "{{ resource_prefix }}-checkmode-comparison"
+ vpc_subnet_id: "{{ testing_subnet_a.subnet.id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ TestTag: "Some Value"
+ wait: true
+ register: instance_stop
+
+ - name: "fact stopped ec2 instance"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-checkmode-comparison"
+ register: confirm_stopinstance_fact
+
+ - name: "Verify that it was stopped."
+ assert:
+ that:
+ - confirm_stopinstance_fact.instances[0].state.name in ["stopped", "stopping"]
+
+ - name: "Running instance in check mode."
+ ec2_instance:
+ state: running
+ name: "{{ resource_prefix }}-checkmode-comparison"
+ vpc_subnet_id: "{{ testing_subnet_a.subnet.id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ TestTag: "Some Value"
+ check_mode: yes
+
+ - name: "fact ec2 instance"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-checkmode-comparison"
+ register: confirm_checkmode_runninginstance_fact
+
+ - name: "Verify that it was not running."
+ assert:
+ that:
+ - '"{{ confirm_checkmode_runninginstance_fact.instances[0].state.name }}" != "running"'
+
+ - name: "Running instance."
+ ec2_instance:
+ state: running
+ name: "{{ resource_prefix }}-checkmode-comparison"
+ vpc_subnet_id: "{{ testing_subnet_a.subnet.id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ TestTag: "Some Value"
+
+ - name: "fact ec2 instance."
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-checkmode-comparison"
+ register: confirm_runninginstance_fact
+
+ - name: "Verify that it was running."
+ assert:
+ that:
+ - '"{{ confirm_runninginstance_fact.instances[0].state.name }}" == "running"'
+
+ - name: "Tag instance."
+ ec2_instance:
+ state: running
+ name: "{{ resource_prefix }}-checkmode-comparison"
+ vpc_subnet_id: "{{ testing_subnet_a.subnet.id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ TestTag: "Some Other Value"
+ check_mode: yes
+
+ - name: "fact ec2 instance."
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-checkmode-comparison"
+ register: confirm_not_tagged
+
+ - name: "Verify that it hasn't been re-tagged."
+ assert:
+ that:
+ - '"{{ confirm_not_tagged.instances[0].tags.TestTag }}" == "Some Value"'
+
+ - name: "Terminate instance in check mode."
+ ec2_instance:
+ state: absent
+ name: "{{ resource_prefix }}-checkmode-comparison"
+ vpc_subnet_id: "{{ testing_subnet_a.subnet.id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ TestTag: "Some Value"
+ wait: True
+ check_mode: yes
+
+ - name: "fact ec2 instance"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-checkmode-comparison"
+ register: confirm_checkmode_terminatedinstance_fact
+
+ - name: "Verify that it was not terminated."
+ assert:
+ that:
+ - '"{{ confirm_checkmode_terminatedinstance_fact.instances[0].state.name }}" != "terminated"'
+
+ - name: "Terminate instance."
+ ec2_instance:
+ state: absent
+ name: "{{ resource_prefix }}-checkmode-comparison"
+ vpc_subnet_id: "{{ testing_subnet_a.subnet.id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ TestTag: "Some Value"
+ wait: True
+
+ - name: "fact ec2 instance"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-checkmode-comparison"
+ register: confirm_terminatedinstance_fact
+
+ - name: "Verify that it was terminated."
+ assert:
+ that:
+ - '"{{ confirm_terminatedinstance_fact.instances[0].state.name }}" == "terminated"'
+
+ always:
+ - name: "Terminate checkmode instances"
+ ec2_instance:
+ state: absent
+ filters:
+ "tag:TestId": "{{ ec2_instance_tag_TestId }}"
+ wait: yes
+ ignore_errors: yes
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/cpu_options.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/cpu_options.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/cpu_options.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/cpu_options.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,88 @@
+- block:
+ - name: "create t3.nano instance with cpu_options"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-t3nano-1-threads-per-core"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ vpc_subnet_id: "{{ testing_subnet_a.subnet.id }}"
+ instance_type: t3.nano
+ cpu_options:
+ core_count: 1
+ threads_per_core: 1
+ wait: true
+ register: instance_creation
+
+ - name: "instance with cpu_options created with the right options"
+ assert:
+ that:
+ - instance_creation is success
+ - instance_creation is changed
+
+ - name: "modify cpu_options on existing instance (warning displayed)"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-t3nano-1-threads-per-core"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ vpc_subnet_id: "{{ testing_subnet_a.subnet.id }}"
+ instance_type: t3.nano
+ cpu_options:
+ core_count: 1
+ threads_per_core: 2
+ wait: true
+ register: cpu_options_update
+ ignore_errors: yes
+
+ - name: "fact presented ec2 instance"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-test-t3nano-1-threads-per-core"
+ register: presented_instance_fact
+
+ - name: "modify cpu_options has no effect on existing instance"
+ assert:
+ that:
+ - cpu_options_update is success
+ - cpu_options_update is not changed
+ - "{{ presented_instance_fact.instances | length }} > 0"
+ - "'{{ presented_instance_fact.instances.0.state.name }}' in ['running','pending']"
+ - "{{ presented_instance_fact.instances.0.cpu_options.core_count }} == 1"
+ - "{{ presented_instance_fact.instances.0.cpu_options.threads_per_core }} == 1"
+
+ - name: "create t3.nano instance with cpu_options (check mode)"
+ ec2_instance:
+ state: running
+ name: "{{ resource_prefix }}-test-t3nano-1-threads-per-core-checkmode"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ vpc_subnet_id: "{{ testing_subnet_a.subnet.id }}"
+ instance_type: t3.nano
+ cpu_options:
+ core_count: 1
+ threads_per_core: 1
+ wait: true
+ check_mode: yes
+
+ - name: "fact checkmode ec2 instance"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-test-t3nano-1-threads-per-core-checkmode"
+ register: checkmode_instance_fact
+
+ - name: "Confirm the instance was not created in check mode."
+ assert:
+ that:
+ - "{{ checkmode_instance_fact.instances | length }} == 0"
+
+ always:
+ - name: "Terminate cpu_options instances"
+ ec2_instance:
+ state: absent
+ filters:
+ "tag:TestId": "{{ ec2_instance_tag_TestId }}"
+ wait: yes
+ ignore_errors: yes
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/default_vpc_tests.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/default_vpc_tests.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/default_vpc_tests.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/default_vpc_tests.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,57 @@
+- block:
+ - name: "Make instance in a default subnet of the VPC"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-default-vpc"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ security_group: "default"
+ instance_type: "{{ ec2_instance_type }}"
+ wait: false
+ register: in_default_vpc
+
+ - name: "Make instance in a default subnet of the VPC (check mode)"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-default-vpc-checkmode"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ security_group: "default"
+ instance_type: "{{ ec2_instance_type }}"
+ check_mode: yes
+
+ - name: "fact presented ec2 instance"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-test-default-vpc"
+ register: presented_instance_fact
+
+ - name: "fact checkmode ec2 instance"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-test-default-vpc-checkmode"
+ register: checkmode_instance_fact
+
+ - name: "Confirm that check mode did not create an instance."
+ assert:
+ that:
+ - "{{ presented_instance_fact.instances | length }} > 0"
+ - "{{ checkmode_instance_fact.instances | length }} == 0"
+
+ - name: "Terminate instances"
+ ec2_instance:
+ state: absent
+ instance_ids: "{{ in_default_vpc.instance_ids }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+
+ always:
+ - name: "Terminate vpc_tests instances"
+ ec2_instance:
+ state: absent
+ filters:
+ "tag:TestId": "{{ ec2_instance_tag_TestId }}"
+ wait: yes
+ ignore_errors: yes
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/ebs_optimized.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/ebs_optimized.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/ebs_optimized.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/ebs_optimized.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,41 @@
+- block:
+ - name: "Make EBS optimized instance in the testing subnet of the test VPC"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-ebs-optimized-instance-in-vpc"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ security_groups: "{{ sg.group_id }}"
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ ebs_optimized: true
+ instance_type: t3.nano
+ wait: false
+ register: ebs_opt_in_vpc
+
+ - name: "Get ec2 instance info"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-test-ebs-optimized-instance-in-vpc"
+ register: ebs_opt_instance_info
+
+ - name: "Assert instance is ebs_optimized"
+ assert:
+ that:
+ - ebs_opt_instance_info.instances.0.ebs_optimized
+
+ - name: "Terminate instances"
+ ec2_instance:
+ state: absent
+ instance_ids: "{{ ebs_opt_in_vpc.instance_ids }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+
+ always:
+ - name: "Terminate ebs_optimized instances"
+ ec2_instance:
+ state: absent
+ filters:
+ "tag:TestId": "{{ ec2_instance_tag_TestId }}"
+ wait: yes
+ ignore_errors: yes
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/env_cleanup.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/env_cleanup.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/env_cleanup.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/env_cleanup.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,79 @@
+- name: "remove Instances"
+ ec2_instance:
+ state: absent
+ filters:
+ vpc-id: "{{ testing_vpc.vpc.id }}"
+ wait: yes
+ ignore_errors: yes
+ retries: 10
+
+- name: "remove ENIs"
+ ec2_eni_info:
+ filters:
+ vpc-id: "{{ testing_vpc.vpc.id }}"
+ register: enis
+
+- name: "delete all ENIs"
+ ec2_eni:
+ state: absent
+ eni_id: "{{ item.id }}"
+ register: removed
+ until: removed is not failed
+ with_items: "{{ enis.network_interfaces }}"
+ ignore_errors: yes
+ retries: 10
+
+- name: "remove the security group"
+ ec2_group:
+ state: absent
+ name: "{{ resource_prefix }}-sg"
+ description: a security group for ansible tests
+ vpc_id: "{{ testing_vpc.vpc.id }}"
+ register: removed
+ until: removed is not failed
+ ignore_errors: yes
+ retries: 10
+
+- name: "remove the second security group"
+ ec2_group:
+ name: "{{ resource_prefix }}-sg-2"
+ description: a security group for ansible tests
+ vpc_id: "{{ testing_vpc.vpc.id }}"
+ state: absent
+ register: removed
+ until: removed is not failed
+ ignore_errors: yes
+ retries: 10
+
+- name: "remove subnet A"
+ ec2_vpc_subnet:
+ state: absent
+ vpc_id: "{{ testing_vpc.vpc.id }}"
+ cidr: "{{ subnet_a_cidr }}"
+ register: removed
+ until: removed is not failed
+ ignore_errors: yes
+ retries: 10
+
+- name: "remove subnet B"
+ ec2_vpc_subnet:
+ state: absent
+ vpc_id: "{{ testing_vpc.vpc.id }}"
+ cidr: "{{ subnet_b_cidr }}"
+ register: removed
+ until: removed is not failed
+ ignore_errors: yes
+ retries: 10
+
+- name: "remove the VPC"
+ ec2_vpc_net:
+ state: absent
+ name: "{{ vpc_name }}"
+ cidr_block: "{{ vpc_cidr }}"
+ tags:
+ Name: Ansible Testing VPC
+ tenancy: default
+ register: removed
+ until: removed is not failed
+ ignore_errors: yes
+ retries: 10
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/env_setup.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/env_setup.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/env_setup.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/env_setup.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,64 @@
+- run_once: '{{ setup_run_once | default("no") | bool }}'
+ block:
+ - name: "Create VPC for use in testing"
+ ec2_vpc_net:
+ state: present
+ name: "{{ vpc_name }}"
+ cidr_block: "{{ vpc_cidr }}"
+ tags:
+ Name: Ansible ec2_instance Testing VPC
+ tenancy: default
+ register: testing_vpc
+
+ - name: "Create default subnet in zone A"
+ ec2_vpc_subnet:
+ state: present
+ vpc_id: "{{ testing_vpc.vpc.id }}"
+ cidr: "{{ subnet_a_cidr }}"
+ az: "{{ subnet_a_az }}"
+ resource_tags:
+ Name: "{{ resource_prefix }}-subnet-a"
+ register: testing_subnet_a
+
+ - name: "Create secondary subnet in zone B"
+ ec2_vpc_subnet:
+ state: present
+ vpc_id: "{{ testing_vpc.vpc.id }}"
+ cidr: "{{ subnet_b_cidr }}"
+ az: "{{ subnet_b_az }}"
+ resource_tags:
+ Name: "{{ resource_prefix }}-subnet-b"
+ register: testing_subnet_b
+
+ - name: "create a security group with the vpc"
+ ec2_group:
+ state: present
+ name: "{{ resource_prefix }}-sg"
+ description: a security group for ansible tests
+ vpc_id: "{{ testing_vpc.vpc.id }}"
+ rules:
+ - proto: tcp
+ from_port: 22
+ to_port: 22
+ cidr_ip: 0.0.0.0/0
+ - proto: tcp
+ from_port: 80
+ to_port: 80
+ cidr_ip: 0.0.0.0/0
+ register: sg
+
+ - name: "create secondary security group with the vpc"
+ ec2_group:
+ name: "{{ resource_prefix }}-sg-2"
+ description: a secondary security group for ansible tests
+ vpc_id: "{{ testing_vpc.vpc.id }}"
+ rules:
+ - proto: tcp
+ from_port: 22
+ to_port: 22
+ cidr_ip: 0.0.0.0/0
+ - proto: tcp
+ from_port: 80
+ to_port: 80
+ cidr_ip: 0.0.0.0/0
+ register: sg2
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/external_resource_attach.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/external_resource_attach.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/external_resource_attach.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/external_resource_attach.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,129 @@
+- block:
+ # Make custom ENIs and attach via the `network` parameter
+ - ec2_eni:
+ state: present
+ delete_on_termination: true
+ subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ security_groups:
+ - "{{ sg.group_id }}"
+ register: eni_a
+
+ - ec2_eni:
+ state: present
+ delete_on_termination: true
+ subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ security_groups:
+ - "{{ sg.group_id }}"
+ register: eni_b
+
+ - ec2_eni:
+ state: present
+ delete_on_termination: true
+ subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ security_groups:
+ - "{{ sg.group_id }}"
+ register: eni_c
+
+ - ec2_key:
+ name: "{{ resource_prefix }}_test_key"
+
+ - name: "Make instance in the testing subnet created in the test VPC"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-eni-vpc"
+ key_name: "{{ resource_prefix }}_test_key"
+ network:
+ interfaces:
+ - id: "{{ eni_a.interface.id }}"
+ image_id: "{{ ec2_ami_id }}"
+ availability_zone: '{{ subnet_b_az }}'
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ instance_type: "{{ ec2_instance_type }}"
+ wait: false
+ register: in_test_vpc
+
+ - name: "Gather {{ resource_prefix }}-test-eni-vpc info"
+ ec2_instance_info:
+ filters:
+ "tag:Name": '{{ resource_prefix }}-test-eni-vpc'
+ register: in_test_vpc_instance
+
+ - assert:
+ that:
+ - in_test_vpc_instance.instances.0.key_name == resource_prefix + '_test_key'
+ - '(in_test_vpc_instance.instances.0.network_interfaces | length) == 1'
+
+ - name: "Add a second interface"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-eni-vpc"
+ network:
+ interfaces:
+ - id: "{{ eni_a.interface.id }}"
+ - id: "{{ eni_b.interface.id }}"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ instance_type: "{{ ec2_instance_type }}"
+ wait: false
+ register: add_interface
+ until: add_interface is not failed
+ ignore_errors: yes
+ retries: 10
+
+ - name: "Make instance in the testing subnet created in the test VPC(check mode)"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-eni-vpc-checkmode"
+ key_name: "{{ resource_prefix }}_test_key"
+ network:
+ interfaces:
+ - id: "{{ eni_c.interface.id }}"
+ image_id: "{{ ec2_ami_id }}"
+ availability_zone: '{{ subnet_b_az }}'
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ instance_type: "{{ ec2_instance_type }}"
+ check_mode: yes
+
+ - name: "fact presented ec2 instance"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-test-eni-vpc"
+ register: presented_instance_fact
+
+ - name: "fact checkmode ec2 instance"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-test-eni-vpc-checkmode"
+ register: checkmode_instance_fact
+
+ - name: "Confirm existence of the instance id"
+ assert:
+ that:
+ - presented_instance_fact.instances | length > 0
+ - checkmode_instance_fact.instances | length == 0
+
+ always:
+ - name: "Terminate external_resource_attach instances"
+ ec2_instance:
+ state: absent
+ filters:
+ "tag:TestId": "{{ ec2_instance_tag_TestId }}"
+ wait: yes
+ ignore_errors: yes
+
+ - ec2_key:
+ state: absent
+ name: "{{ resource_prefix }}_test_key"
+ ignore_errors: yes
+
+ - ec2_eni:
+ state: absent
+ eni_id: '{{ item.interface.id }}'
+ ignore_errors: yes
+ with_items:
+ - '{{ eni_a }}'
+ - '{{ eni_b }}'
+ - '{{ eni_c }}'
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/find_ami.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/find_ami.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/find_ami.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/find_ami.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,15 @@
+- run_once: '{{ setup_run_once | default("no") | bool }}'
+ block:
+ - name: "Find AMI to use"
+ run_once: yes
+ ec2_ami_info:
+ owners: 'amazon'
+ filters:
+ name: '{{ ec2_ami_name }}'
+ register: ec2_amis
+ - name: "Set fact with latest AMI"
+ run_once: yes
+ vars:
+ latest_ami: '{{ ec2_amis.images | sort(attribute="creation_date") | last }}'
+ set_fact:
+ ec2_ami_image: '{{ latest_ami.image_id }}'
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/iam_instance_role.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/iam_instance_role.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/iam_instance_role.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/iam_instance_role.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,127 @@
+- block:
+ - name: "Create IAM role for test"
+ iam_role:
+ state: present
+ name: '{{ first_iam_role }}'
+ assume_role_policy_document: "{{ lookup('file','assume-role-policy.json') }}"
+ create_instance_profile: yes
+ managed_policy:
+ - AmazonEC2ContainerServiceRole
+ register: iam_role
+
+ - name: "Create second IAM role for test"
+ iam_role:
+ state: present
+ name: '{{ second_iam_role }}'
+ assume_role_policy_document: "{{ lookup('file','assume-role-policy.json') }}"
+ create_instance_profile: yes
+ managed_policy:
+ - AmazonEC2ContainerServiceRole
+ register: iam_role_2
+
+ - name: "wait 10 seconds for roles to become available"
+ wait_for:
+ timeout: 10
+ delegate_to: localhost
+
+ - name: "Make instance with an instance_role"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-instance-role"
+ image_id: "{{ ec2_ami_id }}"
+ security_groups: "{{ sg.group_id }}"
+ instance_type: "{{ ec2_instance_type }}"
+ instance_role: "ansible-test-sts-{{ unique_id }}-test-policy"
+ vpc_subnet_id: "{{ testing_subnet_a.subnet.id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ register: instance_with_role
+
+ - assert:
+ that:
+ - 'instance_with_role.instances[0].iam_instance_profile.arn == iam_role.arn.replace(":role/", ":instance-profile/")'
+
+ - name: "Make instance with an instance_role(check mode)"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-instance-role-checkmode"
+ image_id: "{{ ec2_ami_id }}"
+ security_groups: "{{ sg.group_id }}"
+ instance_type: "{{ ec2_instance_type }}"
+ instance_role: "{{ iam_role.arn.replace(':role/', ':instance-profile/') }}"
+ vpc_subnet_id: "{{ testing_subnet_a.subnet.id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ check_mode: yes
+
+ - name: "fact presented ec2 instance"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-test-instance-role"
+ register: presented_instance_fact
+
+ - name: "fact checkmode ec2 instance"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-test-instance-role-checkmode"
+ register: checkmode_instance_fact
+
+ - name: "Confirm that check mode is working normally"
+ assert:
+ that:
+ - presented_instance_fact.instances | length > 0
+ - checkmode_instance_fact.instances | length == 0
+
+ - name: "Update instance with new instance_role"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-instance-role"
+ image_id: "{{ ec2_ami_id }}"
+ security_groups: "{{ sg.group_id }}"
+ instance_type: "{{ ec2_instance_type }}"
+ instance_role: "{{ iam_role_2.arn.replace(':role/', ':instance-profile/') }}"
+ vpc_subnet_id: "{{ testing_subnet_a.subnet.id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ register: instance_with_updated_role
+
+ - name: "wait 10 seconds for role update to complete"
+ wait_for:
+ timeout: 10
+ delegate_to: localhost
+
+ - name: "fact updated ec2 instance"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-test-instance-role"
+ register: updates_instance_info
+
+ - assert:
+ that:
+ - 'updates_instance_info.instances[0].iam_instance_profile.arn == iam_role_2.arn.replace(":role/", ":instance-profile/")'
+ - 'updates_instance_info.instances[0].instance_id == instance_with_role.instances[0].instance_id'
+
+ always:
+ - name: "Terminate iam_instance_role instances"
+ ec2_instance:
+ state: absent
+ filters:
+ "tag:TestId": "{{ ec2_instance_tag_TestId }}"
+ wait: yes
+ ignore_errors: yes
+
+ - name: "Delete IAM role for test"
+ iam_role:
+ state: absent
+ name: "{{ item }}"
+ assume_role_policy_document: "{{ lookup('file','assume-role-policy.json') }}"
+ create_instance_profile: yes
+ managed_policy:
+ - AmazonEC2ContainerServiceRole
+ loop:
+ - '{{ first_iam_role }}'
+ - '{{ second_iam_role }}'
+ register: removed
+ until: removed is not failed
+ ignore_errors: yes
+ retries: 10
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/instance_minimal.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/instance_minimal.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/instance_minimal.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/instance_minimal.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,489 @@
+- block:
+
+ - name: "Create a new instance (check_mode)"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-basic"
+ instance_type: "{{ ec2_instance_type }}"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ wait: true
+ register: create_instance
+ check_mode: true
+
+ - assert:
+ that:
+ - create_instance is not failed
+ - create_instance is changed
+ - '"instance_ids" not in create_instance'
+ - '"ec2:RunInstances" not in create_instance.resource_actions'
+
+ - name: "Create a new instance"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-basic"
+ instance_type: "{{ ec2_instance_type }}"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ wait: true
+ register: create_instance
+
+ - assert:
+ that:
+ - create_instance is not failed
+ - create_instance is changed
+ - '"ec2:RunInstances" in create_instance.resource_actions'
+ - '"instance_ids" in create_instance'
+ - create_instance.instance_ids | length == 1
+ - create_instance.instance_ids[0].startswith("i-")
+
+ - name: "Save instance ID"
+ set_fact:
+ create_instance_id_1: "{{ create_instance.instance_ids[0] }}"
+
+ - name: "Create a new instance - idempotency (check_mode)"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-basic"
+ instance_type: "{{ ec2_instance_type }}"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ wait: true
+ register: create_instance
+ check_mode: true
+
+ - assert:
+ that:
+ - create_instance is not failed
+ - create_instance is not changed
+ - '"ec2:RunInstances" not in create_instance.resource_actions'
+ - '"instance_ids" in create_instance'
+ - create_instance.instance_ids | length == 1
+ - create_instance.instance_ids[0] == create_instance_id_1
+
+ - name: "Create a new instance - idempotency"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-basic"
+ instance_type: "{{ ec2_instance_type }}"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ wait: true
+ register: create_instance
+
+ - assert:
+ that:
+ - create_instance is not failed
+ - create_instance is not changed
+ - '"ec2:RunInstances" not in create_instance.resource_actions'
+ - '"instance_ids" in create_instance'
+ - create_instance.instance_ids | length == 1
+ - create_instance.instance_ids[0] == create_instance_id_1
+
+################################################################
+
+ - name: "Create a new instance with a different name (check_mode)"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-basic-2"
+ instance_type: "{{ ec2_instance_type }}"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ wait: true
+ register: create_instance_2
+ check_mode: true
+
+ - assert:
+ that:
+ - create_instance_2 is not failed
+ - create_instance_2 is changed
+ - '"instance_ids" not in create_instance_2'
+ - '"ec2:RunInstances" not in create_instance_2.resource_actions'
+
+ - name: "Create a new instance with a different name"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-basic-2"
+ instance_type: "{{ ec2_instance_type }}"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ wait: true
+ register: create_instance_2
+
+ - assert:
+ that:
+ - create_instance_2 is not failed
+ - create_instance_2 is changed
+ - '"ec2:RunInstances" in create_instance_2.resource_actions'
+ - '"instance_ids" in create_instance_2'
+ - create_instance_2.instance_ids | length == 1
+ - create_instance_2.instance_ids[0].startswith("i-")
+ - create_instance_2.instance_ids[0] != create_instance_id_1
+
+ - name: "Save instance ID"
+ set_fact:
+ create_instance_id_2: "{{ create_instance_2.instance_ids[0] }}"
+
+ - name: "Create a new instance with a different name - idempotency (check_mode)"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-basic-2"
+ instance_type: "{{ ec2_instance_type }}"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ wait: true
+ register: create_instance_2
+ check_mode: true
+
+ - assert:
+ that:
+ - create_instance_2 is not failed
+ - create_instance_2 is not changed
+ - '"ec2:RunInstances" not in create_instance_2.resource_actions'
+ - '"instance_ids" in create_instance_2'
+ - create_instance_2.instance_ids | length == 1
+ - create_instance_2.instance_ids[0] == create_instance_id_2
+
+ - name: "Create a new instance with a different name - idempotency"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-basic-2"
+ instance_type: "{{ ec2_instance_type }}"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ wait: true
+ register: create_instance_2
+
+ - assert:
+ that:
+ - create_instance_2 is not failed
+ - create_instance_2 is not changed
+ - '"ec2:RunInstances" not in create_instance_2.resource_actions'
+ - '"instance_ids" in create_instance_2'
+ - create_instance_2.instance_ids | length == 1
+ - create_instance_2.instance_ids[0] == create_instance_id_2
+
+################################################################
+
+ - name: "Create a new instance with a different name in tags (check_mode)"
+ ec2_instance:
+ state: present
+ instance_type: "{{ ec2_instance_type }}"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ Name: "{{ resource_prefix }}-test-basic-tag"
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ wait: true
+ register: create_instance_tag
+ check_mode: true
+
+ - assert:
+ that:
+ - create_instance_tag is not failed
+ - create_instance_tag is changed
+ - '"instance_ids" not in create_instance_tag'
+ - '"ec2:RunInstances" not in create_instance_tag.resource_actions'
+
+ - name: "Create a new instance with a different name in tags"
+ ec2_instance:
+ state: present
+ instance_type: "{{ ec2_instance_type }}"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ Name: "{{ resource_prefix }}-test-basic-tag"
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ wait: true
+ register: create_instance_tag
+
+ - assert:
+ that:
+ - create_instance_tag is not failed
+ - create_instance_tag is changed
+ - '"ec2:RunInstances" in create_instance_tag.resource_actions'
+ - '"instance_ids" in create_instance_tag'
+ - create_instance_tag.instance_ids | length == 1
+ - create_instance_tag.instance_ids[0].startswith("i-")
+ - create_instance_tag.instance_ids[0] != create_instance_id_1
+ - create_instance_tag.instance_ids[0] != create_instance_id_2
+
+ - name: "Save instance ID"
+ set_fact:
+ create_instance_id_tag: "{{ create_instance_tag.instance_ids[0] }}"
+
+ - name: "Create a new instance with a different name in tags - idempotency (check_mode)"
+ ec2_instance:
+ state: present
+ instance_type: "{{ ec2_instance_type }}"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ Name: "{{ resource_prefix }}-test-basic-tag"
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ wait: true
+ register: create_instance_tag
+ check_mode: true
+
+ - assert:
+ that:
+ - create_instance_tag is not failed
+ - create_instance_tag is not changed
+ - '"ec2:RunInstances" not in create_instance_tag.resource_actions'
+ - '"instance_ids" in create_instance_tag'
+ - create_instance_tag.instance_ids | length == 1
+ - create_instance_tag.instance_ids[0] == create_instance_id_tag
+
+ - name: "Create a new instance with a different name in tags - idempotency"
+ ec2_instance:
+ state: present
+ instance_type: "{{ ec2_instance_type }}"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ Name: "{{ resource_prefix }}-test-basic-tag"
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ wait: true
+ register: create_instance_tag
+
+ - assert:
+ that:
+ - create_instance_tag is not failed
+ - create_instance_tag is not changed
+ - '"ec2:RunInstances" not in create_instance_tag.resource_actions'
+ - '"instance_ids" in create_instance_tag'
+ - create_instance_tag.instance_ids | length == 1
+ - create_instance_tag.instance_ids[0] == create_instance_id_tag
+
+################################################################
+
+ - name: "Terminate instance based on name parameter (check_mode)"
+ ec2_instance:
+ state: absent
+ name: "{{ resource_prefix }}-test-basic"
+ wait: true
+ register: terminate_name
+ check_mode: true
+
+ - assert:
+ that:
+ - terminate_name is not failed
+ - terminate_name is changed
+ - '"ec2:TerminateInstances" not in terminate_name.resource_actions'
+ - '"terminate_failed" in terminate_name'
+ - '"terminate_success" in terminate_name'
+ - terminate_name.terminate_failed | length == 0
+ - terminate_name.terminate_success | length == 1
+ - terminate_name.terminate_success[0] == create_instance_id_1
+
+ - name: "Terminate instance based on name parameter"
+ ec2_instance:
+ state: absent
+ name: "{{ resource_prefix }}-test-basic"
+ wait: true
+ register: terminate_name
+
+ - assert:
+ that:
+ - terminate_name is not failed
+ - terminate_name is changed
+ - '"ec2:TerminateInstances" in terminate_name.resource_actions'
+ - '"terminate_failed" in terminate_name'
+ - '"terminate_success" in terminate_name'
+ - terminate_name.terminate_failed | length == 0
+ - terminate_name.terminate_success | length == 1
+ - terminate_name.terminate_success[0] == create_instance_id_1
+
+ - name: "Terminate instance based on name parameter - idempotency (check_mode)"
+ ec2_instance:
+ state: absent
+ name: "{{ resource_prefix }}-test-basic"
+ wait: true
+ register: terminate_name
+ check_mode: true
+
+ - assert:
+ that:
+ - terminate_name is not failed
+ - terminate_name is not changed
+ - '"ec2:TerminateInstances" not in terminate_name.resource_actions'
+ - '"terminate_failed" not in terminate_name'
+ - '"terminate_success" not in terminate_name'
+
+ - name: "Terminate instance based on name parameter - idempotency"
+ ec2_instance:
+ state: absent
+ name: "{{ resource_prefix }}-test-basic"
+ wait: true
+ register: terminate_name
+
+ - assert:
+ that:
+ - terminate_name is not failed
+ - terminate_name is not changed
+ - '"ec2:TerminateInstances" not in terminate_name.resource_actions'
+ - '"terminate_failed" not in terminate_name'
+ - '"terminate_success" not in terminate_name'
+
+################################################################
+
+ - name: "Terminate instance based on name tag (check_mode)"
+ ec2_instance:
+ state: absent
+ tags:
+ Name: "{{ resource_prefix }}-test-basic-tag"
+ wait: true
+ register: terminate_tag
+ check_mode: true
+
+ - assert:
+ that:
+ - terminate_tag is not failed
+ - terminate_tag is changed
+ - '"ec2:TerminateInstances" not in terminate_tag.resource_actions'
+ - '"terminate_failed" in terminate_tag'
+ - '"terminate_success" in terminate_tag'
+ - terminate_tag.terminate_failed | length == 0
+ - terminate_tag.terminate_success | length == 1
+ - terminate_tag.terminate_success[0] == create_instance_id_tag
+
+ - name: "Terminate instance based on name tag"
+ ec2_instance:
+ state: absent
+ tags:
+ Name: "{{ resource_prefix }}-test-basic-tag"
+ wait: true
+ register: terminate_tag
+
+ - assert:
+ that:
+ - terminate_tag is not failed
+ - terminate_tag is changed
+ - '"ec2:TerminateInstances" in terminate_tag.resource_actions'
+ - '"terminate_failed" in terminate_tag'
+ - '"terminate_success" in terminate_tag'
+ - terminate_tag.terminate_failed | length == 0
+ - terminate_tag.terminate_success | length == 1
+ - terminate_tag.terminate_success[0] == create_instance_id_tag
+
+ - name: "Terminate instance based on name tag - idempotency (check_mode)"
+ ec2_instance:
+ state: absent
+ tags:
+ Name: "{{ resource_prefix }}-test-basic-tag"
+ wait: true
+ register: terminate_tag
+ check_mode: true
+
+ - assert:
+ that:
+ - terminate_tag is not failed
+ - terminate_tag is not changed
+ - '"ec2:TerminateInstances" not in terminate_tag.resource_actions'
+ - '"terminate_failed" not in terminate_tag'
+ - '"terminate_success" not in terminate_tag'
+
+ - name: "Terminate instance based on name tag - idempotency"
+ ec2_instance:
+ state: absent
+ tags:
+ Name: "{{ resource_prefix }}-test-basic-tag"
+ wait: true
+ register: terminate_tag
+
+ - assert:
+ that:
+ - terminate_tag is not failed
+ - terminate_tag is not changed
+ - '"ec2:TerminateInstances" not in terminate_tag.resource_actions'
+ - '"terminate_failed" not in terminate_tag'
+ - '"terminate_success" not in terminate_tag'
+
+################################################################
+
+ - name: "Terminate instance based on id (check_mode)"
+ ec2_instance:
+ state: absent
+ instance_ids:
+ - "{{ create_instance_id_2 }}"
+ wait: true
+ register: terminate_id
+ check_mode: true
+
+ - assert:
+ that:
+ - terminate_id is not failed
+ - terminate_id is changed
+ - '"ec2:TerminateInstances" not in terminate_id.resource_actions'
+ - '"terminate_failed" in terminate_id'
+ - '"terminate_success" in terminate_id'
+ - terminate_id.terminate_failed | length == 0
+ - terminate_id.terminate_success | length == 1
+ - terminate_id.terminate_success[0] == create_instance_id_2
+
+ - name: "Terminate instance based on id"
+ ec2_instance:
+ state: absent
+ instance_ids:
+ - "{{ create_instance_id_2 }}"
+ wait: true
+ register: terminate_id
+
+ - assert:
+ that:
+ - terminate_id is not failed
+ - terminate_id is changed
+ - '"ec2:TerminateInstances" in terminate_id.resource_actions'
+ - '"terminate_failed" in terminate_id'
+ - '"terminate_success" in terminate_id'
+ - terminate_id.terminate_failed | length == 0
+ - terminate_id.terminate_success | length == 1
+ - terminate_id.terminate_success[0] == create_instance_id_2
+
+ - name: "Terminate instance based on id - idempotency (check_mode)"
+ ec2_instance:
+ state: absent
+ instance_ids:
+ - "{{ create_instance_id_2 }}"
+ wait: true
+ register: terminate_id
+ check_mode: true
+
+ - assert:
+ that:
+ - terminate_id is not failed
+ - terminate_id is not changed
+ - '"ec2:TerminateInstances" not in terminate_id.resource_actions'
+ - '"terminate_failed" not in terminate_id'
+ - '"terminate_success" not in terminate_id'
+
+ - name: "Terminate instance based on id - idempotency"
+ ec2_instance:
+ state: absent
+ instance_ids:
+ - "{{ create_instance_id_2 }}"
+ wait: true
+ register: terminate_id
+
+ - assert:
+ that:
+ - terminate_id is not failed
+ - terminate_id is not changed
+ - '"ec2:TerminateInstances" not in terminate_id.resource_actions'
+ - '"terminate_failed" not in terminate_id'
+ - '"terminate_success" not in terminate_id'
+
+################################################################
+
+ always:
+ - name: "Terminate instance_minimal instances"
+ ec2_instance:
+ state: absent
+ filters:
+ "tag:TestId": "{{ ec2_instance_tag_TestId }}"
+ wait: yes
+ ignore_errors: yes
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/instance_no_wait.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/instance_no_wait.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/instance_no_wait.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/instance_no_wait.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,68 @@
+- block:
+ - name: "New instance and don't wait for it to complete"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-no-wait"
+ image_id: "{{ ec2_ami_id }}"
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ wait: false
+ instance_type: "{{ ec2_instance_type }}"
+ register: in_test_vpc
+
+ - assert:
+ that:
+ - in_test_vpc is not failed
+ - in_test_vpc is changed
+ - in_test_vpc.instances is not defined
+ - in_test_vpc.instance_ids is defined
+ - in_test_vpc.instance_ids | length > 0
+
+ - name: "New instance and don't wait for it to complete ( check mode )"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-no-wait-checkmode"
+ image_id: "{{ ec2_ami_id }}"
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ wait: false
+ instance_type: "{{ ec2_instance_type }}"
+ check_mode: yes
+
+ - name: "Facts for ec2 test instance"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-test-no-wait"
+ register: real_instance_fact
+ until: real_instance_fact.instances | length > 0
+ retries: 10
+
+ - name: "Facts for checkmode ec2 test instance"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-test-no-wait-checkmode"
+ register: checkmode_instance_fact
+
+ - name: "Confirm whether the check mode is working normally."
+ assert:
+ that:
+ - "{{ real_instance_fact.instances | length }} > 0"
+ - "{{ checkmode_instance_fact.instances | length }} == 0"
+
+ - name: "Terminate instances"
+ ec2_instance:
+ state: absent
+ instance_ids: "{{ in_test_vpc.instance_ids }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+
+ always:
+ - name: "Terminate instance_no_wait instances"
+ ec2_instance:
+ state: absent
+ filters:
+ "tag:TestId": "{{ ec2_instance_tag_TestId }}"
+ wait: yes
+ ignore_errors: yes
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,55 @@
+---
+# Beware: most of our tests here are run in parallel.
+# To add new tests you'll need to add a new host to the inventory and a matching
+# '{{ inventory_hostname }}'.yml file in roles/ec2_instance/tasks/
+#
+# Please make sure you tag your instances with
+# tags:
+# "tag:TestId": "{{ ec2_instance_tag_TestId }}"
+# And delete them based off that tag at the end of your specific set of tests
+#
+# ###############################################################################
+#
+# A Note about ec2 environment variable name preference:
+# - EC2_URL -> AWS_URL
+# - EC2_ACCESS_KEY -> AWS_ACCESS_KEY_ID -> AWS_ACCESS_KEY
+# - EC2_SECRET_KEY -> AWS_SECRET_ACCESS_KEY -> AWS_SECRET_KEY
+# - EC2_REGION -> AWS_REGION
+#
+
+- name: "Wrap up all tests and setup AWS credentials"
+ module_defaults:
+ group/aws:
+ aws_access_key: "{{ aws_access_key }}"
+ aws_secret_key: "{{ aws_secret_key }}"
+ security_token: "{{ security_token | default(omit) }}"
+ region: "{{ aws_region }}"
+ aws_config:
+ retries:
+ # Unfortunately AWSRetry doesn't support paginators and boto3's paginators
+ # don't support any configuration of the delay between retries.
+ max_attempts: 20
+ collections:
+ - community.aws
+ block:
+ - debug:
+ msg: "{{ inventory_hostname }} start: {{ lookup('pipe','date') }}"
+ - include_tasks: '{{ inventory_hostname }}.yml'
+ - debug:
+ msg: "{{ inventory_hostname }} finish: {{ lookup('pipe','date') }}"
+
+ always:
+ - set_fact:
+ _role_complete: True
+ - vars:
+ completed_hosts: '{{ ansible_play_hosts_all | map("extract", hostvars, "_role_complete") | list | select("defined") | list | length }}'
+ hosts_in_play: '{{ ansible_play_hosts_all | length }}'
+ debug:
+ msg: "{{ completed_hosts }} of {{ hosts_in_play }} complete"
+ - include_tasks: env_cleanup.yml
+ vars:
+ completed_hosts: '{{ ansible_play_hosts_all | map("extract", hostvars, "_role_complete") | list | select("defined") | list | length }}'
+ hosts_in_play: '{{ ansible_play_hosts_all | length }}'
+ when:
+ - aws_cleanup
+ - completed_hosts == hosts_in_play
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/metadata_options.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/metadata_options.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/metadata_options.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/metadata_options.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,64 @@
+- block:
+ - name: "create t3.nano instance with metadata_options"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-t3nano-enabled-required"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ vpc_subnet_id: "{{ testing_subnet_a.subnet.id }}"
+ instance_type: t3.nano
+ metadata_options:
+ http_endpoint: enabled
+ http_tokens: required
+ wait: false
+ register: instance_creation
+
+ - name: "instance with metadata_options created with the right options"
+ assert:
+ that:
+ - instance_creation is success
+ - instance_creation is changed
+ - "'{{ instance_creation.spec.MetadataOptions.HttpEndpoint }}' == 'enabled'"
+ - "'{{ instance_creation.spec.MetadataOptions.HttpTokens }}' == 'required'"
+
+ - name: "modify metadata_options on existing instance"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-t3nano-enabled-required"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ vpc_subnet_id: "{{ testing_subnet_a.subnet.id }}"
+ instance_type: t3.nano
+ metadata_options:
+ http_endpoint: enabled
+ http_tokens: optional
+ wait: false
+ register: metadata_options_update
+ ignore_errors: yes
+
+ - name: "fact presented ec2 instance"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-test-t3nano-enabled-required"
+ register: presented_instance_fact
+
+ - name: "modify metadata_options has no effect on existing instance"
+ assert:
+ that:
+ - metadata_options_update is success
+ - metadata_options_update is not changed
+ - "{{ presented_instance_fact.instances | length }} > 0"
+ - "'{{ presented_instance_fact.instances.0.state.name }}' in ['running','pending']"
+ - "'{{ presented_instance_fact.instances.0.metadata_options.http_endpoint }}' == 'enabled'"
+ - "'{{ presented_instance_fact.instances.0.metadata_options.http_tokens }}' == 'required'"
+
+ always:
+ - name: "Terminate metadata_options instances"
+ ec2_instance:
+ state: absent
+ filters:
+ "tag:TestId": "{{ ec2_instance_tag_TestId }}"
+ wait: yes
+ ignore_errors: yes
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/security_group.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/security_group.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/security_group.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/security_group.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,81 @@
+- block:
+ - name: "New instance with 2 security groups"
+ ec2_instance:
+ name: "{{ resource_prefix }}-test-security-groups"
+ image_id: "{{ ec2_ami_id }}"
+ vpc_subnet_id: "{{ testing_subnet_a.subnet.id }}"
+ tags:
+ TestId: "{{ resource_prefix }}"
+ instance_type: t2.micro
+ wait: false
+ security_groups:
+ - "{{ sg.group_id }}"
+ - "{{ sg2.group_id }}"
+ register: security_groups_test
+
+ - name: "Recreate same instance with 2 security groups ( Idempotency )"
+ ec2_instance:
+ name: "{{ resource_prefix }}-test-security-groups"
+ image_id: "{{ ec2_ami_id }}"
+ vpc_subnet_id: "{{ testing_subnet_a.subnet.id }}"
+ tags:
+ TestId: "{{ resource_prefix }}"
+ instance_type: t2.micro
+ wait: false
+ security_groups:
+ - "{{ sg.group_id }}"
+ - "{{ sg2.group_id }}"
+ register: security_groups_test_idempotency
+
+ - name: "Gather ec2 facts to check SGs have been added"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-test-security-groups"
+ "instance-state-name": "running"
+ register: dual_sg_instance_facts
+ until: dual_sg_instance_facts.instances | length > 0
+ retries: 10
+
+ - name: "Remove secondary security group from instance"
+ ec2_instance:
+ name: "{{ resource_prefix }}-test-security-groups"
+ image_id: "{{ ec2_ami_id }}"
+ vpc_subnet_id: "{{ testing_subnet_a.subnet.id }}"
+ tags:
+ TestId: "{{ resource_prefix }}"
+ instance_type: t2.micro
+ security_groups:
+ - "{{ sg.group_id }}"
+ register: remove_secondary_security_group
+
+ - name: "Gather ec2 facts to check seconday SG has been removed"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-test-security-groups"
+ "instance-state-name": "running"
+ register: single_sg_instance_facts
+ until: single_sg_instance_facts.instances | length > 0
+ retries: 10
+
+ - name: "Add secondary security group to instance"
+ ec2_instance:
+ name: "{{ resource_prefix }}-test-security-groups"
+ image_id: "{{ ec2_ami_id }}"
+ vpc_subnet_id: "{{ testing_subnet_a.subnet.id }}"
+ tags:
+ TestId: "{{ resource_prefix }}"
+ instance_type: t2.micro
+ security_groups:
+ - "{{ sg.group_id }}"
+ - "{{ sg2.group_id }}"
+ register: add_secondary_security_group
+
+ - assert:
+ that:
+ - security_groups_test is not failed
+ - security_groups_test is changed
+ - security_groups_test_idempotency is not changed
+ - remove_secondary_security_group is changed
+ - single_sg_instance_facts.instances.0.security_groups | length == 1
+ - dual_sg_instance_facts.instances.0.security_groups | length == 2
+ - add_secondary_security_group is changed
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/state_config_updates.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/state_config_updates.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/state_config_updates.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/state_config_updates.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,146 @@
+# Test that configuration changes, like security groups and instance attributes,
+# are updated correctly when the instance has different states, and also when
+# changing the state of an instance.
+# https://github.com/ansible-collections/community.aws/issues/16
+- block:
+ - name: "Make instance with sg and termination protection enabled"
+ ec2_instance:
+ state: running
+ name: "{{ resource_prefix }}-test-state-param-changes"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ security_groups: "{{ sg.group_id }}"
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ termination_protection: False
+ instance_type: "{{ ec2_instance_type }}"
+ wait: True
+ register: create_result
+
+ - assert:
+ that:
+ - create_result is not failed
+ - create_result.changed
+ - '"instances" in create_result'
+ - '"instance_ids" in create_result'
+ - '"spec" in create_result'
+ - create_result.instances[0].security_groups[0].group_id == "{{ sg.group_id }}"
+ - create_result.spec.DisableApiTermination == False
+
+ - name: "Change sg and termination protection while instance is in state running"
+ ec2_instance:
+ state: running
+ name: "{{ resource_prefix }}-test-state-param-changes"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ security_groups: "{{ sg2.group_id }}"
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ termination_protection: True
+ instance_type: "{{ ec2_instance_type }}"
+ register: change_params_result
+
+ - assert:
+ that:
+ - change_params_result is not failed
+ - change_params_result.changed
+ - '"instances" in change_params_result'
+ - '"instance_ids" in change_params_result'
+ - '"changes" in change_params_result'
+ - change_params_result.instances[0].security_groups[0].group_id == "{{ sg2.group_id }}"
+ - change_params_result.changes[0].DisableApiTermination.Value == True
+ - change_params_result.changes[1].Groups[0] == "{{ sg2.group_id }}" # TODO fix this to be less fragile
+
+
+ - name: "Change instance state from running to stopped, and change sg and termination protection"
+ ec2_instance:
+ state: stopped
+ name: "{{ resource_prefix }}-test-state-param-changes"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ security_groups: "{{ sg.group_id }}"
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ termination_protection: False
+ instance_type: "{{ ec2_instance_type }}"
+ register: change_state_params_result
+
+ - assert:
+ that:
+ - change_state_params_result is not failed
+ - change_state_params_result.changed
+ - '"instances" in change_state_params_result'
+ - '"instance_ids" in change_state_params_result'
+ - '"changes" in change_state_params_result'
+ - '"stop_success" in change_state_params_result'
+ - '"stop_failed" in change_state_params_result'
+ - change_state_params_result.instances[0].security_groups[0].group_id == "{{ sg.group_id }}"
+ - change_state_params_result.changes[0].DisableApiTermination.Value == False
+
+ - name: "Change sg and termination protection while instance is in state stopped"
+ ec2_instance:
+ state: stopped
+ name: "{{ resource_prefix }}-test-state-param-changes"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ security_groups: "{{ sg2.group_id }}"
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ termination_protection: True
+ instance_type: "{{ ec2_instance_type }}"
+ register: change_params_stopped_result
+
+ - assert:
+ that:
+ - change_params_stopped_result is not failed
+ - change_params_stopped_result.changed
+ - '"instances" in change_params_stopped_result'
+ - '"instance_ids" in change_params_stopped_result'
+ - '"changes" in change_params_stopped_result'
+ - change_params_stopped_result.instances[0].security_groups[0].group_id == "{{ sg2.group_id }}"
+ - change_params_stopped_result.changes[0].DisableApiTermination.Value == True
+
+ - name: "Change instance state from stopped to running, and change sg and termination protection"
+ ec2_instance:
+ state: running
+ name: "{{ resource_prefix }}-test-state-param-changes"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ security_groups: "{{ sg.group_id }}"
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ termination_protection: False
+ instance_type: "{{ ec2_instance_type }}"
+ wait: True
+ register: change_params_start_result
+
+ - assert:
+ that:
+ - change_params_start_result is not failed
+ - change_params_start_result.changed
+ - '"instances" in change_params_start_result'
+ - '"instance_ids" in change_params_start_result'
+ - '"changes" in change_params_start_result'
+ - '"start_success" in change_params_start_result'
+ - '"start_failed" in change_params_start_result'
+ - change_params_start_result.instances[0].security_groups[0].group_id == "{{ sg.group_id }}"
+ - change_params_start_result.changes[0].DisableApiTermination.Value == False
+
+ always:
+
+ - name: Set termination protection to false (so we can terminate instance) (cleanup)
+ ec2_instance:
+ filters:
+ tag:TestId: "{{ ec2_instance_tag_TestId }}"
+ termination_protection: False
+ ignore_errors: yes
+
+ - name: Terminate instance
+ ec2_instance:
+ filters:
+ tag:TestId: "{{ ec2_instance_tag_TestId }}"
+ state: absent
+ wait: False
+ ignore_errors: yes
+
+ - include_tasks: env_cleanup.yml
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/tags_and_vpc_settings.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/tags_and_vpc_settings.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/tags_and_vpc_settings.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/tags_and_vpc_settings.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,158 @@
+- block:
+ - name: "Make instance in the testing subnet created in the test VPC"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-basic-vpc-create"
+ image_id: "{{ ec2_ami_id }}"
+ user_data: |
+ #cloud-config
+ package_upgrade: true
+ package_update: true
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ Something: else
+ security_groups: "{{ sg.group_id }}"
+ network:
+ source_dest_check: false
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ instance_type: "{{ ec2_instance_type }}"
+ wait: false
+ register: in_test_vpc
+
+ - name: "Make instance in the testing subnet created in the test VPC(check mode)"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-basic-vpc-create-checkmode"
+ image_id: "{{ ec2_ami_id }}"
+ user_data: |
+ #cloud-config
+ package_upgrade: true
+ package_update: true
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ Something: else
+ security_groups: "{{ sg.group_id }}"
+ network:
+ source_dest_check: false
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ instance_type: "{{ ec2_instance_type }}"
+ check_mode: yes
+
+ - name: "Try to re-make the instance, hopefully this shows changed=False"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-basic-vpc-create"
+ image_id: "{{ ec2_ami_id }}"
+ user_data: |
+ #cloud-config
+ package_upgrade: true
+ package_update: true
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ Something: else
+ security_groups: "{{ sg.group_id }}"
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ instance_type: "{{ ec2_instance_type }}"
+ register: remake_in_test_vpc
+ - name: "Remaking the same instance resulted in no changes"
+ assert:
+ that: not remake_in_test_vpc.changed
+ - name: "check that instance IDs match anyway"
+ assert:
+ that: 'remake_in_test_vpc.instance_ids[0] == in_test_vpc.instance_ids[0]'
+ - name: "check that source_dest_check was set to false"
+ assert:
+ that: 'not remake_in_test_vpc.instances[0].source_dest_check'
+
+ - name: "fact presented ec2 instance"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-test-basic-vpc-create"
+ register: presented_instance_fact
+
+ - name: "fact checkmode ec2 instance"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-test-basic-vpc-create-checkmode"
+ register: checkmode_instance_fact
+
+ - name: "Confirm whether the check mode is working normally."
+ assert:
+ that:
+ - "{{ presented_instance_fact.instances | length }} > 0"
+ - "{{ checkmode_instance_fact.instances | length }} == 0"
+
+ - name: "Alter it by adding tags"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-basic-vpc-create"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ Another: thing
+ security_groups: "{{ sg.group_id }}"
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ instance_type: "{{ ec2_instance_type }}"
+ register: add_another_tag
+
+ - ec2_instance_info:
+ instance_ids: "{{ add_another_tag.instance_ids }}"
+ register: check_tags
+ - name: "Remaking the same instance resulted in no changes"
+ assert:
+ that:
+ - check_tags.instances[0].tags.Another == 'thing'
+ - check_tags.instances[0].tags.Something == 'else'
+
+ - name: "Purge a tag"
+ ec2_instance:
+ state: present
+ name: "{{ resource_prefix }}-test-basic-vpc-create"
+ image_id: "{{ ec2_ami_id }}"
+ purge_tags: true
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ Another: thing
+ security_groups: "{{ sg.group_id }}"
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ instance_type: "{{ ec2_instance_type }}"
+
+ - ec2_instance_info:
+ instance_ids: "{{ add_another_tag.instance_ids }}"
+ register: check_tags
+
+ - name: "Remaking the same instance resulted in no changes"
+ assert:
+ that:
+ - "'Something' not in check_tags.instances[0].tags"
+
+ - name: "check that subnet-default public IP rule was followed"
+ assert:
+ that:
+ - check_tags.instances[0].public_dns_name == ""
+ - check_tags.instances[0].private_ip_address.startswith(subnet_b_startswith)
+ - check_tags.instances[0].subnet_id == testing_subnet_b.subnet.id
+ - name: "check that tags were applied"
+ assert:
+ that:
+ - check_tags.instances[0].tags.Name.startswith(resource_prefix)
+ - "'{{ check_tags.instances[0].state.name }}' in ['pending', 'running']"
+
+ - name: "Terminate instance"
+ ec2_instance:
+ state: absent
+ filters:
+ "tag:TestId": "{{ ec2_instance_tag_TestId }}"
+ wait: false
+ register: result
+ - assert:
+ that: result.changed
+
+ always:
+ - name: "Terminate tags_and_vpc_settings instances"
+ ec2_instance:
+ state: absent
+ filters:
+ "tag:TestId": "{{ ec2_instance_tag_TestId }}"
+ wait: yes
+ ignore_errors: yes
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/termination_protection.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/termination_protection.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/termination_protection.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/termination_protection.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,261 @@
+- block:
+ - name: Create instance with termination protection (check mode)
+ ec2_instance:
+ name: "{{ resource_prefix }}-termination-protection"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ resource_prefix }}"
+ security_groups: "{{ sg.group_id }}"
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ termination_protection: true
+ instance_type: "{{ ec2_instance_type }}"
+ state: running
+ wait: yes
+ check_mode: yes
+ register: create_instance_check_mode_results
+
+ - name: Check the returned value for the earlier task
+ assert:
+ that:
+ - create_instance_check_mode_results is changed
+ - create_instance_check_mode_results.spec.DisableApiTermination == True
+
+ - name: Create instance with termination protection
+ ec2_instance:
+ name: "{{ resource_prefix }}-termination-protection"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ resource_prefix }}"
+ security_groups: "{{ sg.group_id }}"
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ termination_protection: true
+ instance_type: "{{ ec2_instance_type }}"
+ state: running
+ wait: yes
+ register: create_instance_results
+
+ - set_fact:
+ instance_id: '{{ create_instance_results.instances[0].instance_id }}'
+
+ - name: Check return values of the create instance task
+ assert:
+ that:
+ - "{{ create_instance_results.instances | length }} > 0"
+ - "'{{ create_instance_results.instances.0.state.name }}' == 'running'"
+ - "'{{ create_instance_results.spec.DisableApiTermination }}'"
+
+ - name: Get info on termination protection
+ command: 'aws ec2 describe-instance-attribute --attribute disableApiTermination --instance-id {{ instance_id }}'
+ environment:
+ AWS_ACCESS_KEY_ID: "{{ aws_access_key }}"
+ AWS_SECRET_ACCESS_KEY: "{{ aws_secret_key }}"
+ AWS_SESSION_TOKEN: "{{ security_token | default('') }}"
+ AWS_DEFAULT_REGION: "{{ aws_region }}"
+ register: instance_termination_check
+
+ - name: Parse the CLI output as JSON
+ set_fact:
+ instance_termination_status: "{{ instance_termination_check.stdout | from_json }}"
+
+ - name: Assert termination protection is enabled on the created instance
+ assert:
+ that:
+ - instance_termination_status.DisableApiTermination.Value == true
+
+ - name: Create instance with termination protection (check mode) (idempotent)
+ ec2_instance:
+ name: "{{ resource_prefix }}-termination-protection"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ resource_prefix }}"
+ security_groups: "{{ sg.group_id }}"
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ termination_protection: true
+ instance_type: "{{ ec2_instance_type }}"
+ state: running
+ wait: yes
+ check_mode: yes
+ register: create_instance_check_mode_results
+
+ - name: Check the returned value for the earlier task
+ assert:
+ that:
+ - create_instance_check_mode_results is not changed
+
+ - name: Create instance with termination protection (idempotent)
+ ec2_instance:
+ name: "{{ resource_prefix }}-termination-protection"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ resource_prefix }}"
+ security_groups: "{{ sg.group_id }}"
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ termination_protection: true
+ instance_type: "{{ ec2_instance_type }}"
+ state: running
+ wait: yes
+ register: create_instance_results
+
+ - name: Check return values of the create instance task
+ assert:
+ that:
+ - "{{ not create_instance_results.changed }}"
+ - "{{ create_instance_results.instances | length }} > 0"
+
+ - name: Try to terminate the instance (expected to fail)
+ ec2_instance:
+ filters:
+ tag:Name: "{{ resource_prefix }}-termination-protection"
+ state: absent
+ failed_when: "'Unable to terminate instances' not in terminate_instance_results.msg"
+ register: terminate_instance_results
+
+ - name: Set termination protection to false (check_mode)
+ ec2_instance:
+ name: "{{ resource_prefix }}-termination-protection"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ resource_prefix }}"
+ termination_protection: false
+ instance_type: "{{ ec2_instance_type }}"
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ check_mode: True
+ register: set_termination_protectioncheck_mode_results
+
+ - name: Check return value
+ assert:
+ that:
+ - "{{ set_termination_protectioncheck_mode_results.changed }}"
+
+ - name: Get info on termination protection
+ command: 'aws ec2 describe-instance-attribute --attribute disableApiTermination --instance-id {{ instance_id }}'
+ environment:
+ AWS_ACCESS_KEY_ID: "{{ aws_access_key }}"
+ AWS_SECRET_ACCESS_KEY: "{{ aws_secret_key }}"
+ AWS_SESSION_TOKEN: "{{ security_token | default('') }}"
+ AWS_DEFAULT_REGION: "{{ aws_region }}"
+ register: instance_termination_check
+
+ - name: Parse the CLI output as JSON
+ set_fact:
+ instance_termination_status: "{{ instance_termination_check.stdout | from_json }}"
+
+ - assert:
+ that:
+ - instance_termination_status.DisableApiTermination.Value == true
+
+ - name: Set termination protection to false
+ ec2_instance:
+ name: "{{ resource_prefix }}-termination-protection"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ resource_prefix }}"
+ termination_protection: false
+ instance_type: "{{ ec2_instance_type }}"
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ register: set_termination_protection_results
+
+ - name: Check return value
+ assert:
+ that:
+ - set_termination_protection_results.changed
+
+ - name: Get info on termination protection
+ command: 'aws ec2 describe-instance-attribute --attribute disableApiTermination --instance-id {{ instance_id }}'
+ environment:
+ AWS_ACCESS_KEY_ID: "{{ aws_access_key }}"
+ AWS_SECRET_ACCESS_KEY: "{{ aws_secret_key }}"
+ AWS_SESSION_TOKEN: "{{ security_token | default('') }}"
+ AWS_DEFAULT_REGION: "{{ aws_region }}"
+ register: instance_termination_check
+
+ - name: Parse the CLI output as JSON
+ set_fact:
+ instance_termination_status: "{{ instance_termination_check.stdout | from_json }}"
+
+ - assert:
+ that:
+ - instance_termination_status.DisableApiTermination.Value == false
+
+ - name: Set termination protection to false (idempotent)
+ ec2_instance:
+ name: "{{ resource_prefix }}-termination-protection"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ resource_prefix }}"
+ termination_protection: false
+ instance_type: "{{ ec2_instance_type }}"
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ register: set_termination_protection_results
+
+ - name: Check return value
+ assert:
+ that:
+ - "{{ not set_termination_protection_results.changed }}"
+
+ - name: Set termination protection to true
+ ec2_instance:
+ name: "{{ resource_prefix }}-termination-protection"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ resource_prefix }}"
+ termination_protection: true
+ instance_type: "{{ ec2_instance_type }}"
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ register: set_termination_protection_results
+
+ - name: Check return value
+ assert:
+ that:
+ - "{{ set_termination_protection_results.changed }}"
+ - "{{ set_termination_protection_results.changes[0].DisableApiTermination.Value }}"
+
+ - name: Set termination protection to true (idempotent)
+ ec2_instance:
+ name: "{{ resource_prefix }}-termination-protection"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ resource_prefix }}"
+ termination_protection: true
+ instance_type: "{{ ec2_instance_type }}"
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ register: set_termination_protection_results
+
+ - name: Check return value
+ assert:
+ that:
+ - "{{ not set_termination_protection_results.changed }}"
+
+ - name: Set termination protection to false (so we can terminate instance)
+ ec2_instance:
+ name: "{{ resource_prefix }}-termination-protection"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ resource_prefix }}"
+ termination_protection: false
+ instance_type: "{{ ec2_instance_type }}"
+ vpc_subnet_id: "{{ testing_subnet_b.subnet.id }}"
+ register: set_termination_protection_results
+
+ - name: Terminate the instance
+ ec2_instance:
+ filters:
+ tag:TestId: "{{ resource_prefix }}"
+ state: absent
+
+ always:
+
+ - name: Set termination protection to false (so we can terminate instance) (cleanup)
+ ec2_instance:
+ filters:
+ tag:TestId: "{{ resource_prefix }}"
+ termination_protection: false
+ ignore_errors: yes
+
+ - name: Terminate instance
+ ec2_instance:
+ filters:
+ tag:TestId: "{{ resource_prefix }}"
+ state: absent
+ wait: false
+ ignore_errors: yes
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/uptime.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/uptime.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/uptime.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/roles/ec2_instance/tasks/uptime.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,66 @@
+---
+- block:
+ - name: "create t3.nano instance"
+ ec2_instance:
+ name: "{{ resource_prefix }}-test-uptime"
+ region: "{{ ec2_region }}"
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ TestId: "{{ ec2_instance_tag_TestId }}"
+ vpc_subnet_id: "{{ testing_subnet_a.subnet.id }}"
+ instance_type: t3.nano
+ wait: yes
+
+ - name: "check ec2 instance"
+ ec2_instance_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}-test-uptime"
+ instance-state-name: [ "running"]
+ register: instance_facts
+
+ - name: "Confirm existence of instance id."
+ assert:
+ that:
+ - "{{ instance_facts.instances | length }} == 1"
+
+ - name: "check using uptime 100 hours - should find nothing"
+ ec2_instance_info:
+ region: "{{ ec2_region }}"
+ uptime: 6000
+ filters:
+ instance-state-name: [ "running"]
+ "tag:Name": "{{ resource_prefix }}-test-uptime"
+ register: instance_facts
+
+ - name: "Confirm there is no running instance"
+ assert:
+ that:
+ - "{{ instance_facts.instances | length }} == 0"
+
+ - name: Sleep for 61 seconds and continue with play
+ wait_for:
+ timeout: 61
+ delegate_to: localhost
+
+ - name: "check using uptime 1 minute"
+ ec2_instance_info:
+ region: "{{ ec2_region }}"
+ uptime: 1
+ filters:
+ instance-state-name: [ "running"]
+ "tag:Name": "{{ resource_prefix }}-test-uptime"
+ register: instance_facts
+
+ - name: "Confirm there is one running instance"
+ assert:
+ that:
+          - instance_facts.instances | length == 1
+
+ always:
+ - name: "Terminate instances"
+ ec2_instance:
+ state: absent
+ filters:
+ "tag:TestId": "{{ ec2_instance_tag_TestId }}"
+ wait: yes
+ ignore_errors: yes
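The uptime assertions above (a 6000-minute filter finds nothing for a fresh instance, while a 1-minute filter finds it after a 61-second sleep) come down to comparing each instance's launch time against a minimum-uptime cutoff. A minimal Python sketch of that comparison, using a hypothetical instance shape rather than the module's actual implementation:

```python
from datetime import datetime, timedelta, timezone

def filter_by_uptime(instances, uptime_minutes):
    """Keep only instances that have been running for at least
    `uptime_minutes` (instances are dicts with a timezone-aware
    `launch_time`, loosely mirroring ec2_instance_info output)."""
    oldest_allowed = datetime.now(timezone.utc) - timedelta(minutes=uptime_minutes)
    return [i for i in instances if i["launch_time"] <= oldest_allowed]

# An instance launched 61 seconds ago fails a 6000-minute (100 hour)
# minimum-uptime filter but passes a 1-minute one.
now = datetime.now(timezone.utc)
instances = [{"id": "i-123", "launch_time": now - timedelta(seconds=61)}]
print(len(filter_by_uptime(instances, 6000)))  # 0
print(len(filter_by_uptime(instances, 1)))     # 1
```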
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/runme.sh ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/runme.sh
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/runme.sh 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_instance/runme.sh 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,12 @@
+#!/usr/bin/env bash
+#
+# Beware: most of our tests here are run in parallel.
+# To add new tests you'll need to add a new host to the inventory and a matching
+# '{{ inventory_hostname }}'.yml file in roles/ec2_instance/tasks/
+
+
+set -eux
+
+export ANSIBLE_ROLES_PATH=../
+
+ansible-playbook main.yml -i inventory "$@"
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_key/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_key/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_key/aliases 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_key/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -1,2 +1,5 @@
+# reason: missing-dependency
+# We need either the openssl binary, pycrypto, or a compiler on the Py36 and Py38
+# Zuul nodes
+# https://github.com/ansible-collections/amazon.aws/issues/428
cloud/aws
-shippable/aws/group2
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_key/tasks/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_key/tasks/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_key/tasks/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_key/tasks/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -24,6 +24,18 @@
- 'result.msg == "missing required arguments: name"'
# ============================================================
+ - name: test removing a non-existent key pair (check mode)
+ ec2_key:
+ name: '{{ ec2_key_name }}'
+ state: absent
+ register: result
+ check_mode: true
+
+ - name: assert removing a non-existent key pair
+ assert:
+ that:
+ - 'not result.changed'
+
- name: test removing a non-existent key pair
ec2_key:
name: '{{ ec2_key_name }}'
@@ -36,23 +48,269 @@
- 'not result.changed'
# ============================================================
+ - name: test creating a new key pair (check_mode)
+ ec2_key:
+ name: '{{ ec2_key_name }}'
+ state: present
+ tags:
+ snake_case: 'a_snake_case_value'
+ CamelCase: 'CamelCaseValue'
+ "spaced key": 'Spaced value'
+ register: result
+ check_mode: true
+
+ - name: assert creating a new key pair
+ assert:
+ that:
+ - result is changed
+
- name: test creating a new key pair
ec2_key:
name: '{{ ec2_key_name }}'
state: present
+ tags:
+ snake_case: 'a_snake_case_value'
+ CamelCase: 'CamelCaseValue'
+ "spaced key": 'Spaced value'
register: result
- name: assert creating a new key pair
assert:
that:
- - 'result.changed'
- - '"key" in result'
- - '"name" in result.key'
- - '"fingerprint" in result.key'
- - '"private_key" in result.key'
- - 'result.key.name == "{{ec2_key_name}}"'
+ - result is changed
+ - '"key" in result'
+ - '"name" in result.key'
+ - '"fingerprint" in result.key'
+ - '"private_key" in result.key'
+ - '"id" in result.key'
+ - '"tags" in result.key'
+ - result.key.name == ec2_key_name
+ - result.key.id.startswith('key-')
+ - '"snake_case" in result.key.tags'
+ - result.key.tags['snake_case'] == 'a_snake_case_value'
+ - '"CamelCase" in result.key.tags'
+ - result.key.tags['CamelCase'] == 'CamelCaseValue'
+ - '"spaced key" in result.key.tags'
+ - result.key.tags['spaced key'] == 'Spaced value'
+
+ - set_fact:
+ key_id_1: '{{ result.key.id }}'
+
+ - name: 'test re-"creating" the same key (check_mode)'
+ ec2_key:
+ name: '{{ ec2_key_name }}'
+ state: present
+ tags:
+ snake_case: 'a_snake_case_value'
+ CamelCase: 'CamelCaseValue'
+ "spaced key": 'Spaced value'
+ register: result
+ check_mode: true
+
+ - name: assert re-creating the same key
+ assert:
+ that:
+ - result is not changed
+
+ - name: 'test re-"creating" the same key'
+ ec2_key:
+ name: '{{ ec2_key_name }}'
+ state: present
+ tags:
+ snake_case: 'a_snake_case_value'
+ CamelCase: 'CamelCaseValue'
+ "spaced key": 'Spaced value'
+ register: result
+
+ # ============================================================
+ - name: test updating tags without purge (check mode)
+ ec2_key:
+ name: '{{ ec2_key_name }}'
+ state: present
+ tags:
+ newKey: 'Another value'
+ purge_tags: false
+ register: result
+ check_mode: true
+
+ - name: assert updated tags
+ assert:
+ that:
+ - result is changed
+
+ - name: test updating tags without purge
+ ec2_key:
+ name: '{{ ec2_key_name }}'
+ state: present
+ tags:
+ newKey: 'Another value'
+ purge_tags: false
+ register: result
+
+ - name: assert updated tags
+ assert:
+ that:
+ - result is changed
+ - '"key" in result'
+ - '"name" in result.key'
+ - '"fingerprint" in result.key'
+ - '"private_key" not in result.key'
+ - '"id" in result.key'
+ - result.key.id == key_id_1
+ - '"tags" in result.key'
+ - result.key.name == ec2_key_name
+ - '"snake_case" in result.key.tags'
+ - result.key.tags['snake_case'] == 'a_snake_case_value'
+ - '"CamelCase" in result.key.tags'
+ - result.key.tags['CamelCase'] == 'CamelCaseValue'
+ - '"spaced key" in result.key.tags'
+ - result.key.tags['spaced key'] == 'Spaced value'
+ - '"newKey" in result.key.tags'
+ - result.key.tags['newKey'] == 'Another value'
+
+ - name: test updating tags without purge - idempotency (check mode)
+ ec2_key:
+ name: '{{ ec2_key_name }}'
+ state: present
+ tags:
+ newKey: 'Another value'
+ purge_tags: false
+ register: result
+ check_mode: true
+
+ - name: assert updated tags
+ assert:
+ that:
+ - result is not changed
+
+ - name: test updating tags without purge - idempotency
+ ec2_key:
+ name: '{{ ec2_key_name }}'
+ state: present
+ tags:
+ newKey: 'Another value'
+ purge_tags: false
+ register: result
+
+ - name: assert updated tags
+ assert:
+ that:
+ - result is not changed
+ - '"key" in result'
+ - '"name" in result.key'
+ - '"fingerprint" in result.key'
+ - '"private_key" not in result.key'
+ - '"id" in result.key'
+ - '"tags" in result.key'
+ - result.key.name == ec2_key_name
+ - result.key.id == key_id_1
+ - '"snake_case" in result.key.tags'
+ - result.key.tags['snake_case'] == 'a_snake_case_value'
+ - '"CamelCase" in result.key.tags'
+ - result.key.tags['CamelCase'] == 'CamelCaseValue'
+ - '"spaced key" in result.key.tags'
+ - result.key.tags['spaced key'] == 'Spaced value'
+ - '"newKey" in result.key.tags'
+ - result.key.tags['newKey'] == 'Another value'
# ============================================================
+ - name: test updating tags with purge (check mode)
+ ec2_key:
+ name: '{{ ec2_key_name }}'
+ state: present
+ tags:
+ newKey: 'Another value'
+ purge_tags: true
+ register: result
+ check_mode: true
+
+ - name: assert updated tags
+ assert:
+ that:
+ - result is changed
+
+ - name: test updating tags with purge
+ ec2_key:
+ name: '{{ ec2_key_name }}'
+ state: present
+ tags:
+ newKey: 'Another value'
+ purge_tags: true
+ register: result
+
+ - name: assert updated tags
+ assert:
+ that:
+ - result is changed
+ - '"key" in result'
+ - '"name" in result.key'
+ - '"fingerprint" in result.key'
+ - '"private_key" not in result.key'
+ - '"id" in result.key'
+ - result.key.id == key_id_1
+ - '"tags" in result.key'
+ - result.key.name == ec2_key_name
+ - '"snake_case" not in result.key.tags'
+ - '"CamelCase" not in result.key.tags'
+ - '"spaced key" not in result.key.tags'
+ - '"newKey" in result.key.tags'
+ - result.key.tags['newKey'] == 'Another value'
+
+ - name: test updating tags with purge - idempotency (check mode)
+ ec2_key:
+ name: '{{ ec2_key_name }}'
+ state: present
+ tags:
+ newKey: 'Another value'
+ purge_tags: true
+ register: result
+ check_mode: true
+
+ - name: assert updated tags
+ assert:
+ that:
+ - result is not changed
+
+ - name: test updating tags with purge - idempotency
+ ec2_key:
+ name: '{{ ec2_key_name }}'
+ state: present
+ tags:
+ newKey: 'Another value'
+ purge_tags: true
+ register: result
+
+ - name: assert updated tags
+ assert:
+ that:
+ - result is not changed
+ - '"key" in result'
+ - '"name" in result.key'
+ - '"fingerprint" in result.key'
+ - '"private_key" not in result.key'
+ - '"id" in result.key'
+ - '"tags" in result.key'
+ - result.key.name == ec2_key_name
+ - result.key.id == key_id_1
+ - '"snake_case" not in result.key.tags'
+ - '"CamelCase" not in result.key.tags'
+ - '"spaced key" not in result.key.tags'
+ - '"newKey" in result.key.tags'
+ - result.key.tags['newKey'] == 'Another value'
+
+ # ============================================================
+ - name: test removing an existent key (check mode)
+ ec2_key:
+ name: '{{ ec2_key_name }}'
+ state: absent
+ register: result
+ check_mode: true
+
+ - name: assert removing an existent key
+ assert:
+ that:
+ - result is changed
+
- name: test removing an existent key
ec2_key:
name: '{{ ec2_key_name }}'
@@ -62,9 +320,9 @@
- name: assert removing an existent key
assert:
that:
- - 'result.changed'
+ - result is changed
- '"key" in result'
- - 'result.key == None'
+ - result.key == None
# ============================================================
- name: test state=present with key_material
@@ -77,13 +335,15 @@
- name: assert state=present with key_material
assert:
that:
- - 'result.changed == True'
- - '"key" in result'
- - '"name" in result.key'
- - '"fingerprint" in result.key'
- - '"private_key" not in result.key'
- - 'result.key.name == "{{ec2_key_name}}"'
- - 'result.key.fingerprint == "{{fingerprint}}"'
+        - result is changed
+        - '"key" in result'
+        - '"name" in result.key'
+        - '"fingerprint" in result.key'
+        - '"private_key" not in result.key'
+        - '"id" in result.key'
+        - '"tags" in result.key'
+        - result.key.name == ec2_key_name
+        - result.key.fingerprint == fingerprint
# ============================================================
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_metadata_facts/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_metadata_facts/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_metadata_facts/aliases 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_metadata_facts/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -1,3 +1,6 @@
+# We're dependent on AWS actually starting up the instances in a timely manner.
+# This doesn't always happen...
+unstable
+
non_local
cloud/aws
-shippable/aws/group4
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_metadata_facts/meta/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_metadata_facts/meta/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_metadata_facts/meta/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_metadata_facts/meta/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,4 @@
+dependencies:
+- prepare_tests
+- setup_ec2_facts
+- setup_sshkey
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_metadata_facts/playbooks/setup.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_metadata_facts/playbooks/setup.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_metadata_facts/playbooks/setup.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_metadata_facts/playbooks/setup.yml 2021-11-12 18:13:53.000000000 +0000
@@ -16,23 +16,16 @@
vpc_seed: '{{ resource_prefix }}'
vpc_cidr: '10.{{ 256 | random(seed=vpc_seed) }}.0.0/16'
subnet_cidr: '10.{{ 256 | random(seed=vpc_seed) }}.32.0/24'
- ec2_ami_name: 'amzn2-ami-hvm-2.*-x86_64-gp2'
- sshkey_file: '{{ resource_prefix }}_key'
tasks:
- - name: Create an ssh key
- shell: echo 'y' | ssh-keygen -P '' -f ../{{ sshkey_file }}
+ - include_role:
+ name: '../setup_sshkey'
+ - include_role:
+ name: '../setup_ec2_facts'
- - name: Get available AZs
- aws_az_info:
- filters:
- region-name: "{{ aws_region }}"
- register: az_info
-
- - name: Pick an AZ
- set_fact:
- availability_zone: "{{ az_info['availability_zones'][0]['zone_name'] }}"
+ - set_fact:
+ availability_zone: '{{ ec2_availability_zone_names[0] }}'
# ============================================================
- name: create a VPC
@@ -97,39 +90,32 @@
- name: Create a key
ec2_key:
name: '{{ resource_prefix }}'
- key_material: "{{ lookup('file', '../' ~ sshkey_file ~ '.pub') }}"
+ key_material: '{{ key_material }}'
state: present
register: ec2_key_result
- - name: Get a list of images
- ec2_ami_info:
- filters:
- owner-alias: amazon
- name: "amzn2-ami-minimal-hvm-*"
- description: "Amazon Linux 2 AMI *"
- register: images_info
-
- name: Set facts to simplify use of extra resources
set_fact:
vpc_subnet_id: "{{ vpc_subnet_result.subnet.id }}"
vpc_sg_id: "{{ vpc_sg_result.group_id }}"
vpc_igw_id: "{{ igw_result.gateway_id }}"
vpc_route_table_id: "{{ public_route_table.route_table.id }}"
- image_id: "{{ images_info.images | sort(attribute='creation_date') | reverse | first | json_query('image_id') }}"
ec2_key_name: "{{ ec2_key_result.key.name }}"
- name: Create an instance to test with
ec2_instance:
+ state: running
name: "{{ resource_prefix }}-ec2-metadata-facts"
- image_id: "{{ image_id }}"
+ image_id: "{{ ec2_ami_id }}"
vpc_subnet_id: "{{ vpc_subnet_id }}"
security_group: "{{ vpc_sg_id }}"
instance_type: t2.micro
key_name: "{{ ec2_key_name }}"
network:
assign_public_ip: true
- wait: true
- wait_timeout: 300
+ delete_on_termination: true
+ wait: True
+
register: ec2_instance
- set_fact:
@@ -139,3 +125,8 @@
template:
src: ../templates/inventory.j2
dest: ../inventory
+
+ - wait_for:
+ port: 22
+ host: '{{ ec2_instance.instances[0].public_ip_address }}'
+ timeout: 1200
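The final `wait_for` task blocks until SSH on the new instance actually answers before the templated inventory is used. A rough stdlib equivalent of that polling loop (function name and retry interval are illustrative, not the module's implementation):

```python
import socket
import time

def wait_for_port(host, port, timeout=1200, interval=5):
    """Poll until host:port accepts a TCP connection or `timeout`
    seconds elapse, like wait_for's port/timeout options."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```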
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_metadata_facts/playbooks/teardown.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_metadata_facts/playbooks/teardown.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_metadata_facts/playbooks/teardown.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_metadata_facts/playbooks/teardown.yml 2021-11-12 18:13:53.000000000 +0000
@@ -22,6 +22,8 @@
wait: True
ignore_errors: true
retries: 5
+ register: remove
+ until: remove is successful
- name: remove ssh key
ec2_key:
@@ -29,13 +31,6 @@
state: absent
ignore_errors: true
- - name: remove the security group
- ec2_group:
- group_id: "{{ vpc_sg_id }}"
- state: absent
- ignore_errors: true
- retries: 5
-
- name: remove the public route table
ec2_vpc_route_table:
vpc_id: "{{ vpc_id }}"
@@ -44,22 +39,37 @@
state: absent
ignore_errors: true
retries: 5
+ register: remove
+ until: remove is successful
- - name: remove the subnet
- ec2_vpc_subnet:
- cidr: "{{ vpc_cidr }}"
- az: "{{ availability_zone }}"
+ - name: remove the internet gateway
+ ec2_vpc_igw:
vpc_id: "{{ vpc_id }}"
state: absent
ignore_errors: true
retries: 5
+ register: remove
+ until: remove is successful
- - name: remove the internet gateway
- ec2_vpc_igw:
+ - name: remove the security group
+ ec2_group:
+ group_id: "{{ vpc_sg_id }}"
+ state: absent
+ ignore_errors: true
+ retries: 5
+ register: remove
+ until: remove is successful
+
+ - name: remove the subnet
+ ec2_vpc_subnet:
+ cidr: "{{ vpc_cidr }}"
+ az: "{{ availability_zone }}"
vpc_id: "{{ vpc_id }}"
state: absent
ignore_errors: true
retries: 5
+ register: remove
+ until: remove is successful
- name: remove the VPC
ec2_vpc_net:
@@ -68,3 +78,5 @@
state: absent
ignore_errors: true
retries: 5
+ register: remove
+ until: remove is successful
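Each teardown task above pairs `retries: 5` with `register`/`until: remove is successful`, so transient errors (for example, a dependency that is still being deleted) re-run the task instead of aborting cleanup. The same pattern in plain Python, as a sketch (names hypothetical; Ansible additionally sleeps between attempts, 5 seconds by default):

```python
import time

def retry(task, retries=5, delay=0.01):
    """Call `task` until it succeeds or `retries` attempts are spent,
    mirroring the retries/until pattern in the teardown tasks."""
    for attempt in range(1, retries + 1):
        try:
            return task()
        except Exception:
            if attempt == retries:
                raise
            time.sleep(delay)

# A task that fails twice (dependency still present) and then succeeds.
attempts = {"n": 0}
def flaky_delete():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("DependencyViolation")
    return "deleted"

print(retry(flaky_delete))  # deleted
```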
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_metadata_facts/playbooks/test_metadata.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_metadata_facts/playbooks/test_metadata.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_metadata_facts/playbooks/test_metadata.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_metadata_facts/playbooks/test_metadata.yml 2021-11-12 18:13:53.000000000 +0000
@@ -6,11 +6,11 @@
wait_for_connection:
- amazon.aws.ec2_metadata_facts:
-
+
- name: Assert initial metadata for the instance
assert:
that:
- ansible_ec2_ami_id == image_id
- - ansible_ec2_placement_availability_zone == "{{ availability_zone }}"
+ - ansible_ec2_placement_availability_zone == availability_zone
- ansible_ec2_security_groups == "{{ resource_prefix }}-sg"
- ansible_ec2_user_data == "None"
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_metadata_facts/templates/inventory.j2 ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_metadata_facts/templates/inventory.j2
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_metadata_facts/templates/inventory.j2 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_metadata_facts/templates/inventory.j2 2021-11-12 18:13:53.000000000 +0000
@@ -2,8 +2,8 @@
"{{ ec2_instance.instances[0].public_ip_address }}"
[testhost:vars]
-ansible_user=ec2-user
-ansible_ssh_private_key_file="{{ sshkey_file }}"
+ansible_user={{ ec2_ami_ssh_user }}
+ansible_ssh_private_key_file="{{ sshkey }}"
ansible_python_interpreter=/usr/bin/env python
[all:vars]
@@ -16,5 +16,5 @@
vpc_route_table_id="{{ vpc_route_table_id }}"
ec2_key_name="{{ ec2_key_name }}"
availability_zone="{{ availability_zone }}"
-image_id="{{ image_id }}"
+image_id="{{ ec2_ami_id }}"
ec2_instance_id="{{ ec2_instance_id }}"
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_snapshot/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_snapshot/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_snapshot/aliases 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_snapshot/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -1,3 +1,10 @@
+# reason: unstable
+# Testing of paginated results fails when fewer results are returned than
+# expected - probably a race condition
+# https://github.com/ansible-collections/amazon.aws/issues/441
+disabled
+
+slow
+
cloud/aws
-shippable/aws/group4
ec2_snapshot_info
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_snapshot/meta/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_snapshot/meta/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_snapshot/meta/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_snapshot/meta/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,2 @@
+dependencies:
+- role: setup_ec2_facts
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_snapshot/tasks/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_snapshot/tasks/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_snapshot/tasks/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_snapshot/tasks/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -9,9 +9,6 @@
# Tests ec2_snapshot_info:
# - Listing snapshots for filter: tag
#
-# Possible Bugs:
-# - check_mode not supported
-#
- name: Integration testing for ec2_snapshot
module_defaults:
group/aws:
@@ -23,28 +20,42 @@
collections:
- community.aws
-
block:
- - ec2_ami_info:
- owners: amazon
- filters:
- architecture: x86_64
- virtualization-type: hvm
- root-device-type: ebs
- name: "amzn-ami-hvm*"
- register: amis
+ - name: Gather availability zones
+      aws_az_info:
+ register: azs
+
+ # Create a new volume in detached mode without tags
+ - name: Create a detached volume without tags
+ ec2_vol:
+ volume_size: 1
+ zone: '{{ azs.availability_zones[0].zone_name }}'
+ register: volume_detached
+
+ # Capture snapshot of this detached volume and assert the results
+ - name: Create a snapshot of detached volume without tags and store results
+ ec2_snapshot:
+ volume_id: '{{ volume_detached.volume_id }}'
+ register: untagged_snapshot
+
+ - assert:
+ that:
+ - untagged_snapshot is changed
+        - untagged_snapshot.snapshots | length == 1
+ - untagged_snapshot.snapshots[0].volume_id == volume_detached.volume_id
- - name: Setup an instance for testing
+ - name: Setup an instance for testing, make sure volumes are attached before next task
ec2_instance:
name: '{{ resource_prefix }}'
instance_type: t2.nano
- image_id: "{{ (amis.images | sort(attribute='creation_date') | last).image_id }}"
- wait: yes
+ image_id: '{{ ec2_ami_id }}'
volumes:
- device_name: /dev/xvda
ebs:
volume_size: 8
delete_on_termination: true
+ state: running
+ wait: true
register: instance
- set_fact:
@@ -52,17 +63,18 @@
instance_id: '{{ instance.instances[0].instance_id }}'
device_name: '{{ instance.instances[0].block_device_mappings[0].device_name }}'
-# JR: Check mode not supported
-# - name: Take snapshot (check mode)
-# ec2_snapshot:
-# instance_id: '{{ instance_id }}'
-# check_mode: true
-# snapshot_tags:
-# Test: '{{ resource_prefix }}'
-# register: result
-# - assert:
-# that:
-# - result is changed
+ - name: Take snapshot (check mode)
+ ec2_snapshot:
+ instance_id: '{{ instance_id }}'
+ device_name: '{{ device_name }}'
+ snapshot_tags:
+ Test: '{{ resource_prefix }}'
+ check_mode: true
+ register: result
+
+ - assert:
+ that:
+ - result is changed
- name: Take snapshot of volume
ec2_snapshot:
@@ -91,7 +103,7 @@
filters:
"tag:Name": '{{ resource_prefix }}'
register: info_check
- check_mode: yes
+ check_mode: true
- assert:
that:
@@ -102,18 +114,17 @@
- info_check.snapshots[0].volume_size == result.volume_size
- info_check.snapshots[0].tags == result.tags
-# JR: Check mode not supported
-# - name: Take snapshot if most recent >1hr (False) (check mode)
-# ec2_snapshot:
-# volume_id: '{{ volume_id }}'
-# snapshot_tags:
-# Name: '{{ resource_prefix }}'
-# last_snapshot_min_age: 60
-# check_mode: true
-# register: result
-# - assert:
-# that:
-# - result is not changed
+ - name: Take snapshot if most recent >1hr (False) (check mode)
+ ec2_snapshot:
+ volume_id: '{{ volume_id }}'
+ snapshot_tags:
+ Name: '{{ resource_prefix }}'
+ last_snapshot_min_age: 60
+ check_mode: true
+ register: result
+ - assert:
+ that:
+ - result is not changed
- name: Take snapshot if most recent >1hr (False)
ec2_snapshot:
@@ -136,18 +147,17 @@
pause:
minutes: 1
-# JR: Check mode not supported
-# - name: Take snapshot if most recent >1min (True) (check mode)
-# ec2_snapshot:
-# volume_id: '{{ volume_id }}'
-# snapshot_tags:
-# Name: '{{ resource_prefix }}'
-# last_snapshot_min_age: 1
-# check_mode: true
-# register: result
-# - assert:
-# that:
-# - result is changed
+ - name: Take snapshot if most recent >1min (True) (check mode)
+ ec2_snapshot:
+ volume_id: '{{ volume_id }}'
+ snapshot_tags:
+ Name: '{{ resource_prefix }}'
+ last_snapshot_min_age: 1
+ check_mode: true
+ register: result
+ - assert:
+ that:
+ - result is changed
- name: Take snapshot if most recent >1min (True)
ec2_snapshot:
@@ -165,23 +175,18 @@
that:
- result is changed
- info_result.snapshots| length == 2
- - '"{{ result.snapshot_id }}" in "{{ info_result| community.general.json_query("snapshots[].snapshot_id") }}"'
+ - result.snapshot_id in ( info_result.snapshots | map(attribute='snapshot_id') | list )
-# JR: Check mode not supported
-# - name: Take snapshot with a tag (check mode)
-# ec2_snapshot:
-# volume_id: '{{ volume_id }}'
-# snapshot_tags:
-# MyTag: '{{ resource_prefix }}'
-# register: result
-# - assert:
-# that:
-# - result is changed
-
- # Wait at least 15 seconds between concurrent volume snapshots.
- - name: Prevent SnapshotCreationPerVolumeRateExceeded errors
- pause:
- seconds: 15
+ - name: Take snapshot with a tag (check mode)
+ ec2_snapshot:
+ volume_id: '{{ volume_id }}'
+ snapshot_tags:
+ MyTag: '{{ resource_prefix }}'
+ check_mode: true
+ register: result
+ - assert:
+ that:
+ - result is changed
- name: Take snapshot and tag it
ec2_snapshot:
@@ -215,14 +220,31 @@
- assert:
that:
- - info_result.snapshots| length == 3
+ - info_result.snapshots | length == 3
+
+ - name: Generate extra snapshots
+ ec2_snapshot:
+ volume_id: '{{ volume_id }}'
+ snapshot_tags:
+ ResourcePrefix: '{{ resource_prefix }}'
+ loop: '{{ range(1, 6, 1) | list }}'
+ loop_control:
+ # Anything under 15 will trigger SnapshotCreationPerVolumeRateExceeded,
+      # this is now handled automatically by the module, but pause a little
+      # anyway to avoid hammering the API
+ pause: 10
+ label: "Generate extra snapshots - {{ item }}"
+
+ - name: Pause to allow creation to finish
+ pause:
+ minutes: 2
# check that snapshot_ids and max_results are mutually exclusive
- name: Check that max_results and snapshot_ids are mutually exclusive
ec2_snapshot_info:
snapshot_ids:
- '{{ tagged_snapshot_id }}'
- max_results: 1
+ max_results: 5
ignore_errors: true
register: info_result
@@ -250,12 +272,12 @@
ec2_snapshot_info:
filters:
"tag:Name": '{{ resource_prefix }}'
- max_results: 1
+ max_results: 5
register: info_result
- assert:
that:
- - info_result.snapshots | length == 1
+ - info_result.snapshots | length == 5
- info_result.next_token_id is defined
# Pagination : 2nd request
@@ -268,8 +290,19 @@
- assert:
that:
- - info_result.snapshots | length == 2
- - info_result.next_token_id is defined
+ - info_result.snapshots | length == 3
+
+ # delete the tagged snapshot - check mode
+ - name: Delete the tagged snapshot (check mode)
+ ec2_snapshot:
+ state: absent
+ snapshot_id: '{{ tagged_snapshot_id }}'
+ register: delete_result_check_mode
+ check_mode: true
+
+ - assert:
+ that:
+ - delete_result_check_mode is changed
# delete the tagged snapshot
- name: Delete the tagged snapshot
@@ -277,6 +310,28 @@
state: absent
snapshot_id: '{{ tagged_snapshot_id }}'
+ # delete the tagged snapshot again (results in InvalidSnapshot.NotFound)
+ - name: Delete already removed snapshot (check mode)
+ ec2_snapshot:
+ state: absent
+ snapshot_id: '{{ tagged_snapshot_id }}'
+ register: delete_result_second_check_mode
+ check_mode: true
+
+ - assert:
+ that:
+ - delete_result_second_check_mode is not changed
+
+ - name: Delete already removed snapshot (idempotent)
+ ec2_snapshot:
+ state: absent
+ snapshot_id: '{{ tagged_snapshot_id }}'
+ register: delete_result_second_idempotent
+
+ - assert:
+ that:
+ - delete_result_second_idempotent is not changed
+
- name: Get info about all snapshots for this test
ec2_snapshot_info:
filters:
@@ -285,8 +340,8 @@
- assert:
that:
- - info_result.snapshots| length == 2
- - '"{{ tagged_snapshot_id }}" not in "{{ info_result| community.general.json_query("snapshots[].snapshot_id") }}"'
+ - info_result.snapshots| length == 7
+ - tagged_snapshot_id not in ( info_result.snapshots | map(attribute='snapshot_id') | list )
- name: Delete snapshots
ec2_snapshot:
@@ -330,3 +385,15 @@
id: '{{ volume_id }}'
state: absent
ignore_errors: true
+
+ - name: Delete detached and untagged volume
+ ec2_vol:
+ id: '{{ volume_detached.volume_id}}'
+ state: absent
+ ignore_errors: true
+
+ - name: Delete untagged snapshot
+ ec2_snapshot:
+ state: absent
+ snapshot_id: '{{ untagged_snapshot.snapshot_id }}'
+ ignore_errors: true
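The pagination assertions above (`max_results: 5` returns 5 snapshots plus a `next_token_id`, and the follow-up request returns the remaining 3) follow the usual continuation-token pattern. A self-contained sketch over an in-memory list (the real token is an opaque string, not an index):

```python
def paginate(items, max_results, next_token=None):
    """Return one page of `items` plus a continuation token, in the
    style of ec2_snapshot_info's max_results/next_token_id options."""
    start = int(next_token) if next_token is not None else 0
    end = start + max_results
    page = items[start:end]
    token = str(end) if end < len(items) else None
    return page, token

snapshots = ["snap-%04d" % i for i in range(8)]
page, token = paginate(snapshots, 5)
assert len(page) == 5 and token is not None   # first request: 5 + token
page, token = paginate(snapshots, 5, token)
assert len(page) == 3 and token is None       # second request: the last 3
```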
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_spot_instance/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_spot_instance/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_spot_instance/aliases 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_spot_instance/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,2 @@
+cloud/aws
+ec2_spot_instance_info
\ No newline at end of file
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_spot_instance/defaults/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_spot_instance/defaults/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_spot_instance/defaults/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_spot_instance/defaults/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,14 @@
+---
+vpc_seed_a: '{{ resource_prefix }}'
+vpc_seed_b: '{{ resource_prefix }}-ec2_eni'
+vpc_prefix: '10.{{ 256 | random(seed=vpc_seed_a) }}.{{ 256 | random(seed=vpc_seed_b ) }}'
+vpc_cidr: '{{ vpc_prefix }}.128/26'
+ip_1: "{{ vpc_prefix }}.132"
+ip_2: "{{ vpc_prefix }}.133"
+ip_3: "{{ vpc_prefix }}.134"
+ip_4: "{{ vpc_prefix }}.135"
+ip_5: "{{ vpc_prefix }}.136"
+
+ec2_ips:
+- "{{ vpc_prefix }}.137"
+- "{{ vpc_prefix }}.138"
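The defaults above derive a stable per-job VPC prefix by seeding Jinja2's `random` filter with the resource prefix: repeated runs of one test job reuse the same CIDR, while parallel jobs with distinct prefixes rarely collide. A Python sketch of the idea (not bit-identical to the filter's algorithm; function names are illustrative):

```python
import random

def seeded_octet(seed, upper=256):
    """Deterministically pick an octet from a seed string, like the
    `256 | random(seed=...)` filter usage in defaults/main.yml."""
    return random.Random(seed).randrange(upper)

def vpc_prefix(resource_prefix):
    # Two independently-seeded octets, mirroring vpc_seed_a/vpc_seed_b.
    octet_a = seeded_octet(resource_prefix)
    octet_b = seeded_octet(resource_prefix + "-ec2_eni")
    return "10.%d.%d" % (octet_a, octet_b)

# The same prefix always maps to the same CIDR base.
assert vpc_prefix("ansible-test-42") == vpc_prefix("ansible-test-42")
```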
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_spot_instance/tasks/main.yaml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_spot_instance/tasks/main.yaml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_spot_instance/tasks/main.yaml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_spot_instance/tasks/main.yaml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,321 @@
+---
+- module_defaults:
+ group/aws:
+ aws_access_key: "{{ aws_access_key }}"
+ aws_secret_key: "{{ aws_secret_key }}"
+ security_token: "{{ security_token | default(omit) }}"
+ region: "{{ aws_region }}"
+
+ collections:
+ - community.aws
+
+ block:
+ - name: Get available AZs
+ aws_az_info:
+ filters:
+ region-name: "{{ aws_region }}"
+ register: az_info
+
+ - name: Pick an AZ
+ set_fact:
+ availability_zone: "{{ az_info['availability_zones'][0]['zone_name'] }}"
+
+ # ============================================================
+ - name: create a VPC
+ ec2_vpc_net:
+ name: "{{ resource_prefix }}-vpc"
+ state: present
+ cidr_block: "{{ vpc_cidr }}"
+ tags:
+ Name: "{{ resource_prefix }}-vpc"
+ Description: "Created by ansible-test"
+ register: vpc_result
+
+ - name: create a subnet
+ ec2_vpc_subnet:
+ cidr: "{{ vpc_cidr }}"
+ az: "{{ availability_zone }}"
+ vpc_id: "{{ vpc_result.vpc.id }}"
+ tags:
+ Name: "{{ resource_prefix }}-vpc"
+ Description: "Created by ansible-test"
+ state: present
+ register: vpc_subnet_result
+
+ - name: create a security group
+ ec2_group:
+ name: "{{ resource_prefix }}-sg"
+ description: "Created by {{ resource_prefix }}"
+ rules: []
+ state: present
+ vpc_id: "{{ vpc_result.vpc.id }}"
+ register: vpc_sg_result
+
+ - name: Get a list of images
+ ec2_ami_info:
+ filters:
+ owner-alias: amazon
+ name: "amzn2-ami-minimal-hvm-*"
+ description: "Amazon Linux 2 AMI *"
+ register: images_info
+
+ - name: create a new ec2 key pair
+ ec2_key:
+ name: "{{ resource_prefix }}-keypair"
+
+ - name: Set facts to simplify use of extra resources
+ set_fact:
+ vpc_id: "{{ vpc_result.vpc.id }}"
+ vpc_subnet_id: "{{ vpc_subnet_result.subnet.id }}"
+ vpc_sg_id: "{{ vpc_sg_result.group_id }}"
+ image_id: "{{ images_info.images | sort(attribute='creation_date') | reverse | first | json_query('image_id') }}"
+
+ # ============================================================
+
+ # Assert that spot instance request is created
+ - name: Create simple spot instance request
+ ec2_spot_instance:
+ launch_specification:
+ image_id: "{{ image_id }}"
+ key_name: "{{ resource_prefix }}-keypair"
+ instance_type: "t2.medium"
+ subnet_id: "{{ vpc_subnet_result.subnet.id }}"
+ tags:
+ ansible-test: "{{ resource_prefix }}"
+ register: create_result
+
+ - name: Assert that result has changed and request has been created
+ assert:
+ that:
+ - create_result is changed
+ - create_result.spot_request is defined
+ - create_result.spot_request.spot_instance_request_id is defined
+ - create_result.spot_request.launch_specification.subnet_id == vpc_subnet_result.subnet.id
+
+ - name: Get info about the spot instance request created
+ ec2_spot_instance_info:
+ spot_instance_request_ids:
+ - "{{ create_result.spot_request.spot_instance_request_id }}"
+ register: spot_instance_info_result
+
+ - name: Assert that the spot request created is open or active
+ assert:
+ that:
+ - spot_instance_info_result.spot_request[0].state in ['open', 'active']
+
+ - name: Create spot request with more complex options
+ ec2_spot_instance:
+ launch_specification:
+ image_id: "{{ image_id }}"
+ key_name: "{{ resource_prefix }}-keypair"
+ instance_type: "t2.medium"
+ block_device_mappings:
+ - device_name: /dev/sdb
+ ebs:
+ delete_on_termination: True
+ volume_type: gp3
+ volume_size: 5
+ network_interfaces:
+ - associate_public_ip_address: False
+ subnet_id: "{{ vpc_subnet_result.subnet.id }}"
+ delete_on_termination: True
+ device_index: 0
+ placement:
+ availability_zone: '{{ availability_zone }}'
+ monitoring:
+ enabled: False
+ spot_price: 0.002
+ tags:
+ camelCase: "helloWorld"
+ PascalCase: "HelloWorld"
+ snake_case: "hello_world"
+ "Title Case": "Hello World"
+ "lowercase spaced": "hello world"
+ ansible-test: "{{ resource_prefix }}"
+ register: complex_create_result
+
+ - assert:
+ that:
+ - complex_create_result is changed
+ - complex_create_result.spot_request is defined
+ - complex_create_result.spot_request.spot_instance_request_id is defined
+ - complex_create_result.spot_request.type == 'one-time'
+ - '"0.002" in complex_create_result.spot_request.spot_price' ## AWS pads trailing zeros on the spot price
+ - launch_spec.placement.availability_zone == availability_zone
+ - launch_spec.block_device_mappings|length == 1
+ - launch_spec.block_device_mappings.0.ebs.delete_on_termination == true
+ - launch_spec.block_device_mappings.0.ebs.volume_type == 'gp3'
+ - launch_spec.block_device_mappings.0.ebs.volume_size == 5
+ - launch_spec.network_interfaces|length == 1
+ - launch_spec.network_interfaces.0.device_index == 0
+ - launch_spec.network_interfaces.0.associate_public_ip_address == false
+ - launch_spec.network_interfaces.0.delete_on_termination == true
+ - spot_request_tags|length == 6
+ - spot_request_tags['camelCase'] == 'helloWorld'
+ - spot_request_tags['PascalCase'] == 'HelloWorld'
+ - spot_request_tags['snake_case'] == 'hello_world'
+ - spot_request_tags['Title Case'] == 'Hello World'
+ - spot_request_tags['lowercase spaced'] == 'hello world'
+ vars:
+ launch_spec: '{{ complex_create_result.spot_request.launch_specification }}'
+ spot_request_tags: '{{ complex_create_result.spot_request.tags }}'
+
+ - name: Get info about the complex spot instance request created
+ ec2_spot_instance_info:
+ spot_instance_request_ids:
+ - "{{ complex_create_result.spot_request.spot_instance_request_id }}"
+ register: complex_info_result
+
+ - name: Assert that the complex spot request created is open/active and correct keys are set
+ assert:
+ that:
+ - complex_info_result.spot_request[0].state in ['open', 'active']
+ - complex_create_result.spot_request.spot_price == complex_info_result.spot_request[0].spot_price
+ - create_launch_spec.block_device_mappings[0].ebs.volume_size == info_launch_spec.block_device_mappings[0].ebs.volume_size
+ - create_launch_spec.block_device_mappings[0].ebs.volume_type == info_launch_spec.block_device_mappings[0].ebs.volume_type
+ - create_launch_spec.network_interfaces[0].delete_on_termination == info_launch_spec.network_interfaces[0].delete_on_termination
+ vars:
+ create_launch_spec: "{{ complex_create_result.spot_request.launch_specification }}"
+ info_launch_spec: "{{ complex_info_result.spot_request[0].launch_specification }}"
+
+ - name: Get info about the created spot instance requests and filter result based on provided filters
+ ec2_spot_instance_info:
+ spot_instance_request_ids:
+ - '{{ create_result.spot_request.spot_instance_request_id }}'
+ - '{{ complex_create_result.spot_request.spot_instance_request_id }}'
+ filters:
+ tag:ansible-test: "{{ resource_prefix }}"
+ launch.block-device-mapping.device-name: /dev/sdb
+ register: spot_instance_info_filter_result
+
+ - name: Assert that the correct spot request was returned in the filtered result
+ assert:
+ that:
+ - spot_instance_info_filter_result.spot_request[0].spot_instance_request_id == complex_create_result.spot_request.spot_instance_request_id
+
+ # Assert check mode
+ - name: Create spot instance request (check_mode)
+ ec2_spot_instance:
+ launch_specification:
+ image_id: "{{ image_id }}"
+ key_name: "{{ resource_prefix }}-keypair"
+ instance_type: "t2.medium"
+ subnet_id: "{{ vpc_subnet_result.subnet.id }}"
+ tags:
+ ansible-test: "{{ resource_prefix }}"
+ check_mode: True
+ register: check_create_result
+
+ - assert:
+ that:
+ - check_create_result is changed
+
+ - name: Remove spot instance request (check_mode)
+ ec2_spot_instance:
+ spot_instance_request_ids: '{{ create_result.spot_request.spot_instance_request_id }}'
+ state: absent
+ check_mode: True
+ register: check_cancel_result
+
+ - assert:
+ that:
+ - check_cancel_result is changed
+
+ - name: Remove spot instance requests
+ ec2_spot_instance:
+ spot_instance_request_ids:
+ - '{{ create_result.spot_request.spot_instance_request_id }}'
+ - '{{ complex_create_result.spot_request.spot_instance_request_id }}'
+ state: absent
+ register: cancel_result
+
+ - assert:
+ that:
+ - cancel_result is changed
+ - '"Cancelled Spot request" in cancel_result.msg'
+
+ - name: Pause briefly so the EC2 API catches up with the previous task
+ pause:
+ seconds: 3
+
+ - name: Check no change if request is already cancelled (idempotency)
+ ec2_spot_instance:
+ spot_instance_request_ids: '{{ create_result.spot_request.spot_instance_request_id }}'
+ state: absent
+ register: cancel_request_again
+
+ - assert:
+ that:
+ - cancel_request_again is not changed
+ - '"Spot request not found or already cancelled" in cancel_request_again.msg'
+
+ - name: Gracefully try to remove non-existent request (NotFound)
+ ec2_spot_instance:
+ spot_instance_request_ids:
+ - sir-12345678
+ state: absent
+ register: fake_cancel_result
+
+ - assert:
+ that:
+ - fake_cancel_result is not changed
+ - '"Spot request not found or already cancelled" in fake_cancel_result.msg'
+
+
+ always:
+
+ # ============================================================
+ - name: Delete spot instances
+ ec2_instance:
+ state: absent
+ filters:
+ vpc-id: "{{ vpc_result.vpc.id }}"
+
+ - name: get all spot requests created during test
+ ec2_spot_instance_info:
+ filters:
+ tag:ansible-test: "{{ resource_prefix }}"
+ register: spot_request_list
+
+ - name: remove spot instance requests
+ ec2_spot_instance:
+ spot_instance_request_ids:
+ - '{{ item.spot_instance_request_id }}'
+ state: 'absent'
+ ignore_errors: true
+ retries: 5
+ with_items: "{{ spot_request_list.spot_request }}"
+
+ - name: remove the security group
+ ec2_group:
+ name: "{{ resource_prefix }}-sg"
+ description: "{{ resource_prefix }}"
+ rules: []
+ state: absent
+ vpc_id: "{{ vpc_result.vpc.id }}"
+ ignore_errors: true
+ retries: 5
+
+ - name: remove the subnet
+ ec2_vpc_subnet:
+ cidr: "{{ vpc_cidr }}"
+ az: "{{ availability_zone }}"
+ vpc_id: "{{ vpc_result.vpc.id }}"
+ state: absent
+ ignore_errors: true
+ retries: 5
+ when: vpc_subnet_result is defined
+
+ - name: remove the VPC
+ ec2_vpc_net:
+ name: "{{ resource_prefix }}-vpc"
+ cidr_block: "{{ vpc_cidr }}"
+ state: absent
+ ignore_errors: true
+ retries: 5
+
+ - name: remove key pair by name
+ ec2_key:
+ name: "{{ resource_prefix }}-keypair"
+ state: absent
+ ignore_errors: true
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_tag/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_tag/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_tag/aliases 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_tag/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -1,3 +1,2 @@
cloud/aws
-shippable/aws/group2
ec2_tag_info
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vol/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vol/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vol/aliases 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vol/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -1,5 +1,2 @@
-slow
-
cloud/aws
-
ec2_vol_info
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vol/defaults/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vol/defaults/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vol/defaults/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vol/defaults/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -1,5 +1,8 @@
+availability_zone: '{{ ec2_availability_zone_names[0] }}'
+
vpc_name: '{{ resource_prefix }}-vpc'
vpc_seed: '{{ resource_prefix }}'
vpc_cidr: '10.{{ 256 | random(seed=vpc_seed) }}.0.0/16'
subnet_cidr: '10.{{ 256 | random(seed=vpc_seed) }}.32.0/24'
-ec2_ami_name: 'amzn2-ami-hvm-2.*-x86_64-gp2'
\ No newline at end of file
+
+instance_name: '{{ resource_prefix }}-instance'
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vol/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vol/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vol/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vol/main.yml 1970-01-01 00:00:00.000000000 +0000
@@ -1,5 +0,0 @@
-- hosts: localhost
- connection: local
- environment: "{{ ansible_test.environment }}"
- tasks:
- - include_tasks: 'tasks/main.yml'
\ No newline at end of file
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vol/meta/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vol/meta/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vol/meta/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vol/meta/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -1,2 +1,5 @@
dependencies:
- - setup_remote_tmp_dir
\ No newline at end of file
+- role: setup_botocore_pip
+ vars:
+ botocore_version: '1.19.27'
+- role: setup_ec2_facts
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vol/tasks/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vol/tasks/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vol/tasks/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vol/tasks/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -1,27 +1,940 @@
-- set_fact:
- virtualenv: "{{ remote_tmp_dir }}/virtualenv"
- virtualenv_command: "{{ ansible_python_interpreter }} -m virtualenv"
-
-- set_fact:
- virtualenv_interpreter: "{{ virtualenv }}/bin/python"
-
-- pip:
- name: virtualenv
-
-- pip:
- name:
- - 'boto3>=1.16.33'
- - 'botocore>=1.13.0'
- - 'coverage<5'
- - 'boto>=2.49.0'
- virtualenv: "{{ virtualenv }}"
- virtualenv_command: "{{ virtualenv_command }}"
- virtualenv_site_packages: no
-
-- include_tasks: tests.yml
- vars:
- ansible_python_interpreter: "{{ virtualenv_interpreter }}"
-
-- file:
- path: "{{ virtualenv }}"
- state: absent
+---
+- module_defaults:
+ group/aws:
+ aws_access_key: '{{ aws_access_key | default(omit) }}'
+ aws_secret_key: '{{ aws_secret_key | default(omit) }}'
+ security_token: '{{ security_token | default(omit) }}'
+ region: '{{ aws_region | default(omit) }}'
+
+ collections:
+ - amazon.aws
+ - community.aws
+
+ block:
+
+ - name: Create a test VPC
+ ec2_vpc_net:
+ name: "{{ vpc_name }}"
+ cidr_block: "{{ vpc_cidr }}"
+ tags:
+ Name: ec2_vol testing
+ ResourcePrefix: "{{ resource_prefix }}"
+ register: testing_vpc
+
+ - name: Create a test subnet
+ ec2_vpc_subnet:
+ vpc_id: "{{ testing_vpc.vpc.id }}"
+ cidr: "{{ subnet_cidr }}"
+ tags:
+ Name: ec2_vol testing
+ ResourcePrefix: "{{ resource_prefix }}"
+ az: '{{ availability_zone }}'
+ register: testing_subnet
+
+ - name: create an ec2 instance
+ ec2_instance:
+ name: "{{ instance_name }}"
+ vpc_subnet_id: "{{ testing_subnet.subnet.id }}"
+ instance_type: t3.nano
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ ResourcePrefix: "{{ resource_prefix }}"
+ register: test_instance
+
+ - name: check task return attributes
+ assert:
+ that:
+ - test_instance.changed
+
+ - name: create another ec2 instance
+ ec2_instance:
+ name: "{{ instance_name }}-2"
+ vpc_subnet_id: "{{ testing_subnet.subnet.id }}"
+ instance_type: t3.nano
+ image_id: "{{ ec2_ami_id }}"
+ tags:
+ ResourcePrefix: "{{ resource_prefix }}"
+ register: test_instance_2
+
+ - name: check task return attributes
+ assert:
+ that:
+ - test_instance_2.changed
+
+ # ==== ec2_vol tests ===============================================
+
+ - name: create a volume (validate module defaults - check_mode)
+ ec2_vol:
+ volume_size: 1
+ zone: "{{ availability_zone }}"
+ tags:
+ ResourcePrefix: "{{ resource_prefix }}"
+ check_mode: true
+ register: volume1_check_mode
+
+ - assert:
+ that:
+ - volume1_check_mode is changed
+
+
+ - name: create a volume (validate module defaults)
+ ec2_vol:
+ volume_size: 1
+ zone: "{{ availability_zone }}"
+ tags:
+ ResourcePrefix: "{{ resource_prefix }}"
+ register: volume1
+
+ - name: check task return attributes
+ assert:
+ that:
+ - volume1.changed
+ - "'volume' in volume1"
+ - "'volume_id' in volume1"
+ - "'volume_type' in volume1"
+ - "'device' in volume1"
+ - volume1.volume.status == 'available'
+ - volume1.volume_type == 'standard'
+ - "'attachment_set' in volume1.volume"
+ - volume1.volume.attachment_set | length == 0
+ - not ("Name" in volume1.volume.tags)
+ - not volume1.volume.encrypted
+ - volume1.volume.tags.ResourcePrefix == "{{ resource_prefix }}"
+
+ # no idempotency check needed here
+
+ - name: create another volume (override module defaults)
+ ec2_vol:
+ encrypted: yes
+ volume_size: 4
+ volume_type: io1
+ iops: 101
+ name: "{{ resource_prefix }}"
+ tags:
+ ResourcePrefix: "{{ resource_prefix }}"
+ zone: "{{ availability_zone }}"
+ register: volume2
+
+ - name: check task return attributes
+ assert:
+ that:
+ - volume2.changed
+ - "'volume' in volume2"
+ - "'volume_id' in volume2"
+ - "'volume_type' in volume2"
+ - "'device' in volume2"
+ - volume2.volume.status == 'available'
+ - volume2.volume_type == 'io1'
+ - volume2.volume.iops == 101
+ - volume2.volume.size == 4
+ - volume2.volume.tags.Name == "{{ resource_prefix }}"
+ - volume2.volume.encrypted
+ - volume2.volume.tags.ResourcePrefix == "{{ resource_prefix }}"
+
+ - name: create another volume (override module defaults) (idempotent)
+ ec2_vol:
+ encrypted: yes
+ volume_size: 4
+ volume_type: io1
+ iops: 101
+ name: "{{ resource_prefix }}"
+ tags:
+ ResourcePrefix: "{{ resource_prefix }}"
+ zone: "{{ availability_zone }}"
+ register: volume2_idem
+
+ - name: check task return attributes
+ assert:
+ that:
+ - not volume2_idem.changed
+
+ - name: create snapshot from volume
+ ec2_snapshot:
+ volume_id: "{{ volume1.volume_id }}"
+ description: "Resource Prefix - {{ resource_prefix }}"
+ snapshot_tags:
+ ResourcePrefix: "{{ resource_prefix }}"
+ register: vol1_snapshot
+
+ - name: check task return attributes
+ assert:
+ that:
+ - vol1_snapshot.changed
+
+ - name: create a volume from a snapshot (check_mode)
+ ec2_vol:
+ snapshot: "{{ vol1_snapshot.snapshot_id }}"
+ encrypted: yes
+ volume_type: gp2
+ volume_size: 1
+ zone: "{{ availability_zone }}"
+ tags:
+ ResourcePrefix: "{{ resource_prefix }}"
+ check_mode: true
+ register: volume3_check_mode
+
+ - name: check task return attributes
+ assert:
+ that:
+ - volume3_check_mode.changed
+
+ - name: create a volume from a snapshot
+ ec2_vol:
+ snapshot: "{{ vol1_snapshot.snapshot_id }}"
+ encrypted: yes
+ volume_type: gp2
+ volume_size: 1
+ zone: "{{ availability_zone }}"
+ tags:
+ ResourcePrefix: "{{ resource_prefix }}"
+ register: volume3
+
+ - name: check task return attributes
+ assert:
+ that:
+ - volume3.changed
+ - "volume3.volume.snapshot_id == vol1_snapshot.snapshot_id"
+
+ - name: Wait for instance to start
+ ec2_instance:
+ name: "{{ instance_name }}"
+ state: running
+ image_id: "{{ ec2_ami_id }}"
+ wait: True
+
+ - name: attach existing volume to an instance (check_mode)
+ ec2_vol:
+ id: "{{ volume1.volume_id }}"
+ instance: "{{ test_instance.instance_ids[0] }}"
+ device_name: /dev/sdg
+ delete_on_termination: no
+ check_mode: true
+ register: vol_attach_result_check_mode
+
+ - assert:
+ that:
+ - vol_attach_result_check_mode is changed
+
+ - name: attach existing volume to an instance
+ ec2_vol:
+ id: "{{ volume1.volume_id }}"
+ instance: "{{ test_instance.instance_ids[0] }}"
+ device_name: /dev/sdg
+ delete_on_termination: no
+ register: vol_attach_result
+
+ - name: check task return attributes
+ assert:
+ that:
+ - vol_attach_result.changed
+ - "'device' in vol_attach_result and vol_attach_result.device == '/dev/sdg'"
+ - "'volume' in vol_attach_result"
+ - vol_attach_result.volume.attachment_set[0].status in ['attached', 'attaching']
+ - vol_attach_result.volume.attachment_set[0].instance_id == test_instance.instance_ids[0]
+ - vol_attach_result.volume.attachment_set[0].device == '/dev/sdg'
+ - not vol_attach_result.volume.attachment_set[0].delete_on_termination
+
+ - name: attach existing volume to an instance (idempotent - check_mode)
+ ec2_vol:
+ id: "{{ volume1.volume_id }}"
+ instance: "{{ test_instance.instance_ids[0] }}"
+ device_name: /dev/sdg
+ delete_on_termination: no
+ check_mode: true
+ register: vol_attach_result_check_mode
+
+ - assert:
+ that:
+ - vol_attach_result_check_mode is not changed
+
+ - name: attach existing volume to an instance (idempotent)
+ ec2_vol:
+ id: "{{ volume1.volume_id }}"
+ instance: "{{ test_instance.instance_ids[0] }}"
+ device_name: /dev/sdg
+ delete_on_termination: no
+ register: vol_attach_result
+
+ - name: check task return attributes
+ assert:
+ that:
+ - "not vol_attach_result.changed"
+ - vol_attach_result.volume.attachment_set[0].status in ['attached', 'attaching']
+
+ - name: attach a new volume to an instance (check_mode)
+ ec2_vol:
+ instance: "{{ test_instance.instance_ids[0] }}"
+ device_name: /dev/sdh
+ volume_size: 1
+ volume_type: gp2
+ name: '{{ resource_prefix }} - sdh'
+ tags:
+ "lowercase spaced": 'hello cruel world'
+ "Title Case": 'Hello Cruel World'
+ CamelCase: 'SimpleCamelCase'
+ snake_case: 'simple_snake_case'
+ ResourcePrefix: "{{ resource_prefix }}"
+ check_mode: true
+ register: new_vol_attach_result_check_mode
+
+ - assert:
+ that:
+ - new_vol_attach_result_check_mode is changed
+
+ - name: attach a new volume to an instance
+ ec2_vol:
+ instance: "{{ test_instance.instance_ids[0] }}"
+ device_name: /dev/sdh
+ volume_size: 1
+ volume_type: gp2
+ name: '{{ resource_prefix }} - sdh'
+ tags:
+ "lowercase spaced": 'hello cruel world'
+ "Title Case": 'Hello Cruel World'
+ CamelCase: 'SimpleCamelCase'
+ snake_case: 'simple_snake_case'
+ ResourcePrefix: "{{ resource_prefix }}"
+ register: new_vol_attach_result
+
+ - name: check task return attributes
+ assert:
+ that:
+ - new_vol_attach_result.changed
+ - "'device' in new_vol_attach_result and new_vol_attach_result.device == '/dev/sdh'"
+ - "'volume' in new_vol_attach_result"
+ - new_vol_attach_result.volume.attachment_set[0].status in ['attached', 'attaching']
+ - new_vol_attach_result.volume.attachment_set[0].instance_id == test_instance.instance_ids[0]
+ - new_vol_attach_result.volume.attachment_set[0].device == '/dev/sdh'
+ - new_vol_attach_result.volume.tags["lowercase spaced"] == 'hello cruel world'
+ - new_vol_attach_result.volume.tags["Title Case"] == 'Hello Cruel World'
+ - new_vol_attach_result.volume.tags["CamelCase"] == 'SimpleCamelCase'
+ - new_vol_attach_result.volume.tags["snake_case"] == 'simple_snake_case'
+ - new_vol_attach_result.volume.tags["Name"] == '{{ resource_prefix }} - sdh'
+
+ - name: attach a new volume to an instance (idempotent - check_mode)
+ ec2_vol:
+ instance: "{{ test_instance.instance_ids[0] }}"
+ device_name: /dev/sdh
+ volume_size: 1
+ volume_type: gp2
+ tags:
+ ResourcePrefix: "{{ resource_prefix }}"
+ check_mode: true
+ register: new_vol_attach_result_idem_check_mode
+ ignore_errors: true
+
+ - assert:
+ that:
+ - new_vol_attach_result_idem_check_mode is not changed
+
+ - name: attach a new volume to an instance (idempotent)
+ ec2_vol:
+ instance: "{{ test_instance.instance_ids[0] }}"
+ device_name: /dev/sdh
+ volume_size: 1
+ volume_type: gp2
+ tags:
+ ResourcePrefix: "{{ resource_prefix }}"
+ register: new_vol_attach_result_idem
+ ignore_errors: true
+
+ - name: check task return attributes
+ assert:
+ that:
+ - "not new_vol_attach_result_idem.changed"
+ - "'Volume mapping for /dev/sdh already exists' in new_vol_attach_result_idem.msg"
+
+ - name: change some tag values
+ ec2_vol:
+ instance: "{{ test_instance.instance_ids[0] }}"
+ id: "{{ new_vol_attach_result.volume.id }}"
+ device_name: /dev/sdh
+ volume_size: 1
+ volume_type: gp2
+ tags:
+ "lowercase spaced": 'hello cruel world ❤️'
+ "Title Case": 'Hello Cruel World ❤️'
+ CamelCase: 'SimpleCamelCase ❤️'
+ snake_case: 'simple_snake_case ❤️'
+ register: new_vol_attach_result
+
+ - name: check task return attributes
+ assert:
+ that:
+ - new_vol_attach_result.changed
+ - "'volume_id' in new_vol_attach_result"
+ - new_vol_attach_result.volume_id == "{{ new_vol_attach_result.volume_id }}"
+ - "'attachment_set' in new_vol_attach_result.volume"
+ - "'create_time' in new_vol_attach_result.volume"
+ - "'id' in new_vol_attach_result.volume"
+ - "'size' in new_vol_attach_result.volume"
+ - new_vol_attach_result.volume.size == 1
+ - "'volume_type' in new_vol_attach_result"
+ - new_vol_attach_result.volume_type == 'gp2'
+ - "'tags' in new_vol_attach_result.volume"
+ - (new_vol_attach_result.volume.tags | length) == 6
+ - new_vol_attach_result.volume.tags["lowercase spaced"] == 'hello cruel world ❤️'
+ - new_vol_attach_result.volume.tags["Title Case"] == 'Hello Cruel World ❤️'
+ - new_vol_attach_result.volume.tags["CamelCase"] == 'SimpleCamelCase ❤️'
+ - new_vol_attach_result.volume.tags["snake_case"] == 'simple_snake_case ❤️'
+ - new_vol_attach_result.volume.tags["ResourcePrefix"] == resource_prefix
+ - new_vol_attach_result.volume.tags["Name"] == '{{ resource_prefix }} - sdh'
+
+ - name: create a volume from a snapshot and attach to the instance (check_mode)
+ ec2_vol:
+ instance: "{{ test_instance.instance_ids[0] }}"
+ device_name: /dev/sdi
+ snapshot: "{{ vol1_snapshot.snapshot_id }}"
+ tags:
+ ResourcePrefix: "{{ resource_prefix }}"
+ check_mode: true
+ register: attach_new_vol_from_snapshot_result_check_mode
+
+ - assert:
+ that:
+ - attach_new_vol_from_snapshot_result_check_mode is changed
+
+
+ - name: create a volume from a snapshot and attach to the instance
+ ec2_vol:
+ instance: "{{ test_instance.instance_ids[0] }}"
+ device_name: /dev/sdi
+ snapshot: "{{ vol1_snapshot.snapshot_id }}"
+ tags:
+ ResourcePrefix: "{{ resource_prefix }}"
+ register: attach_new_vol_from_snapshot_result
+
+ - name: check task return attributes
+ assert:
+ that:
+ - attach_new_vol_from_snapshot_result.changed
+ - "'device' in attach_new_vol_from_snapshot_result and attach_new_vol_from_snapshot_result.device == '/dev/sdi'"
+ - "'volume' in attach_new_vol_from_snapshot_result"
+ - attach_new_vol_from_snapshot_result.volume.attachment_set[0].status in ['attached', 'attaching']
+ - attach_new_vol_from_snapshot_result.volume.attachment_set[0].instance_id == test_instance.instance_ids[0]
+
+ - name: list volumes attached to instance
+ ec2_vol:
+ instance: "{{ test_instance.instance_ids[0] }}"
+ state: list
+ register: inst_vols
+
+ - name: check task return attributes
+ assert:
+ that:
+ - not inst_vols.changed
+ - "'volumes' in inst_vols"
+ - inst_vols.volumes | length == 4
+
+ - name: get info on ebs volumes
+ ec2_vol_info:
+ register: ec2_vol_info
+
+ - name: check task return attributes
+ assert:
+ that:
+ - not ec2_vol_info.failed
+
+ - name: get info on ebs volumes
+ ec2_vol_info:
+ filters:
+ attachment.instance-id: "{{ test_instance.instance_ids[0] }}"
+ register: ec2_vol_info
+
+ - name: check task return attributes
+ assert:
+ that:
+ - ec2_vol_info.volumes | length == 4
+
+ - name: must not change because the modify_volume parameter is missing
+ ec2_vol:
+ id: "{{ new_vol_attach_result.volume_id }}"
+ zone: "{{ availability_zone }}"
+ volume_type: gp3
+ register: changed_gp3_volume
+
+ - name: volume must not change
+ assert:
+ that:
+ - not changed_gp3_volume.changed
+
+ - name: change existing volume to gp3 (check_mode)
+ ec2_vol:
+ id: "{{ new_vol_attach_result.volume_id }}"
+ zone: "{{ availability_zone }}"
+ volume_type: gp3
+ modify_volume: yes
+ check_mode: true
+ register: changed_gp3_volume_check_mode
+
+ - assert:
+ that:
+ - changed_gp3_volume_check_mode is changed
+
+ - name: change existing volume to gp3
+ ec2_vol:
+ id: "{{ new_vol_attach_result.volume_id }}"
+ zone: "{{ availability_zone }}"
+ volume_type: gp3
+ modify_volume: yes
+ register: changed_gp3_volume
+
+ - name: check that volume_type has changed
+ assert:
+ that:
+ - changed_gp3_volume.changed
+ - "'volume_id' in changed_gp3_volume"
+ - changed_gp3_volume.volume_id == "{{ new_vol_attach_result.volume_id }}"
+ - "'attachment_set' in changed_gp3_volume.volume"
+ - "'create_time' in changed_gp3_volume.volume"
+ - "'id' in changed_gp3_volume.volume"
+ - "'size' in changed_gp3_volume.volume"
+ - "'volume_type' in changed_gp3_volume"
+ - changed_gp3_volume.volume_type == 'gp3'
+ - "'iops' in changed_gp3_volume.volume"
+ - changed_gp3_volume.volume.iops == 3000
+ # Ensure our tags are still here
+ - "'tags' in changed_gp3_volume.volume"
+ - (changed_gp3_volume.volume.tags | length) == 6
+ - new_vol_attach_result.volume.tags["lowercase spaced"] == 'hello cruel world ❤️'
+ - new_vol_attach_result.volume.tags["Title Case"] == 'Hello Cruel World ❤️'
+ - new_vol_attach_result.volume.tags["CamelCase"] == 'SimpleCamelCase ❤️'
+ - new_vol_attach_result.volume.tags["snake_case"] == 'simple_snake_case ❤️'
+ - new_vol_attach_result.volume.tags["ResourcePrefix"] == resource_prefix
+ - new_vol_attach_result.volume.tags["Name"] == '{{ resource_prefix }} - sdh'
+
+ - name: volume must be of type gp3 (idempotent)
+ vars:
+ ansible_python_interpreter: "{{ botocore_virtualenv_interpreter }}"
+ ec2_vol:
+ id: "{{ new_vol_attach_result.volume_id }}"
+ zone: "{{ availability_zone }}"
+ volume_type: gp3
+ modify_volume: yes
+ register: changed_gp3_volume
+ retries: 3
+ delay: 3
+ until: not changed_gp3_volume.failed
+ # retry because the EBS change can be slow
+
+ - name: must not change (idempotent)
+ assert:
+ that:
+ - not changed_gp3_volume.changed
+ - "'volume_id' in changed_gp3_volume"
+ - changed_gp3_volume.volume_id == "{{ new_vol_attach_result.volume_id }}"
+ - "'attachment_set' in changed_gp3_volume.volume"
+ - "'create_time' in changed_gp3_volume.volume"
+ - "'id' in changed_gp3_volume.volume"
+ - "'size' in changed_gp3_volume.volume"
+ - "'volume_type' in changed_gp3_volume"
+ - changed_gp3_volume.volume_type == 'gp3'
+ - "'iops' in changed_gp3_volume.volume"
+ - changed_gp3_volume.volume.iops == 3000
+ - "'throughput' in changed_gp3_volume.volume"
+ - "'tags' in changed_gp3_volume.volume"
+ - (changed_gp3_volume.volume.tags | length) == 6
+ - new_vol_attach_result.volume.tags["lowercase spaced"] == 'hello cruel world ❤️'
+ - new_vol_attach_result.volume.tags["Title Case"] == 'Hello Cruel World ❤️'
+ - new_vol_attach_result.volume.tags["CamelCase"] == 'SimpleCamelCase ❤️'
+ - new_vol_attach_result.volume.tags["snake_case"] == 'simple_snake_case ❤️'
+ - new_vol_attach_result.volume.tags["ResourcePrefix"] == resource_prefix
+ - new_vol_attach_result.volume.tags["Name"] == '{{ resource_prefix }} - sdh'
+
+ - name: re-read volume information to validate new volume_type
+ ec2_vol_info:
+ filters:
+ volume-id: "{{ changed_gp3_volume.volume_id }}"
+ register: verify_gp3_change
+
+ - name: volume type must be gp3
+ assert:
+ that:
+ - v.type == 'gp3'
+ vars:
+ v: "{{ verify_gp3_change.volumes[0] }}"
+
+ - name: detach volume from the instance (check_mode)
+ ec2_vol:
+ id: "{{ new_vol_attach_result.volume_id }}"
+ instance: ""
+ check_mode: true
+ register: new_vol_attach_result_check_mode
+
+ - assert:
+ that:
+ - new_vol_attach_result_check_mode is changed
+
+ - name: detach volume from the instance
+ ec2_vol:
+ id: "{{ new_vol_attach_result.volume_id }}"
+ instance: ""
+ register: new_vol_attach_result
+
+ - name: check task return attributes
+ assert:
+ that:
+ - new_vol_attach_result.changed
+ - new_vol_attach_result.volume.status == 'available'
+
+ - name: detach volume from the instance (idempotent - check_mode)
+ ec2_vol:
+ id: "{{ new_vol_attach_result.volume_id }}"
+ instance: ""
+ check_mode: true
+ register: new_vol_attach_result_idem_check_mode
+
+ - name: check task return attributes
+ assert:
+ that:
+ - not new_vol_attach_result_idem_check_mode.changed
+
+ - name: detach volume from the instance (idempotent)
+ ec2_vol:
+ id: "{{ new_vol_attach_result.volume_id }}"
+ instance: ""
+ register: new_vol_attach_result_idem
+
+ - name: check task return attributes
+ assert:
+ that:
+ - not new_vol_attach_result_idem.changed
+
+ - name: delete volume (check_mode)
+ ec2_vol:
+ id: "{{ volume2.volume_id }}"
+ state: absent
+ check_mode: true
+ register: delete_volume_result_check_mode
+
+ - assert:
+ that:
+ - delete_volume_result_check_mode is changed
+
+ - name: delete volume
+ ec2_vol:
+ id: "{{ volume2.volume_id }}"
+ state: absent
+ register: delete_volume_result
+
+ - name: check task return attributes
+ assert:
+ that:
+ - "delete_volume_result.changed"
+
+ - name: delete volume (idempotent - check_mode)
+ ec2_vol:
+ id: "{{ volume2.volume_id }}"
+ state: absent
+ check_mode: true
+ register: delete_volume_result_check_mode
+
+ - assert:
+ that:
+ - delete_volume_result_check_mode is not changed
+
+ - name: delete volume (idempotent)
+ ec2_vol:
+ id: "{{ volume2.volume_id }}"
+ state: absent
+ register: delete_volume_result_idem
+
+ - name: check task return attributes
+ assert:
+ that:
+ - not delete_volume_result_idem.changed
+ - '"Volume {{ volume2.volume_id }} does not exist" in delete_volume_result_idem.msg'
+
+ # Originally from ec2_vol_info
+
+ - name: Create test volume with Delete on Termination
+ ec2_vol:
+ instance: "{{ test_instance.instance_ids[0] }}"
+ volume_size: 4
+ name: "{{ resource_prefix }}_delete_on_terminate"
+ device_name: /dev/sdj
+ volume_type: io1
+ iops: 100
+ tags:
+ Tag Name with Space-and-dash: Tag Value with Space-and-dash
+ delete_on_termination: yes
+ register: dot_volume
+
+ - name: check task return attributes
+ assert:
+ that:
+ - dot_volume.changed
+ - "'attachment_set' in dot_volume.volume"
+ - "'delete_on_termination' in dot_volume.volume.attachment_set[0]"
+ - "'create_time' in dot_volume.volume"
+ - "'id' in dot_volume.volume"
+ - "'size' in dot_volume.volume"
+ - dot_volume.volume.size == 4
+ - "'volume_type' in dot_volume"
+ - dot_volume.volume_type == 'io1'
+ - "'iops' in dot_volume.volume"
+ - dot_volume.volume.iops == 100
+ - "'tags' in dot_volume.volume"
+ - (dot_volume.volume.tags | length ) == 2
+ - dot_volume.volume.tags["Name"] == "{{ resource_prefix }}_delete_on_terminate"
+ - dot_volume.volume.tags["Tag Name with Space-and-dash"] == 'Tag Value with Space-and-dash'
+
+ - name: Gather volume info without any filters
+ ec2_vol_info:
+ register: volume_info_wo_filters
+ check_mode: no
+
+      - name: Check that info is returned without filters
+ assert:
+ that:
+ - "volume_info_wo_filters.volumes is defined"
+
+ - name: Gather volume info
+ ec2_vol_info:
+ filters:
+ "tag:Name": "{{ resource_prefix }}_delete_on_terminate"
+ register: volume_info
+ check_mode: no
+
+ - name: Format check
+ assert:
+ that:
+ - "volume_info.volumes|length == 1"
+ - "v.attachment_set[0].attach_time is defined"
+ - "v.attachment_set[0].device is defined and v.attachment_set[0].device == dot_volume.device"
+ - "v.attachment_set[0].instance_id is defined and v.attachment_set[0].instance_id == test_instance.instance_ids[0]"
+ - "v.attachment_set[0].status is defined and v.attachment_set[0].status == 'attached'"
+ - "v.create_time is defined"
+ - "v.encrypted is defined and v.encrypted == false"
+ - "v.id is defined and v.id == dot_volume.volume_id"
+ - "v.iops is defined and v.iops == 100"
+ - "v.region is defined and v.region == aws_region"
+ - "v.size is defined and v.size == 4"
+ - "v.snapshot_id is defined and v.snapshot_id == ''"
+ - "v.status is defined and v.status == 'in-use'"
+ - "v.tags.Name is defined and v.tags.Name == resource_prefix + '_delete_on_terminate'"
+ - "v.tags['Tag Name with Space-and-dash'] == 'Tag Value with Space-and-dash'"
+ - "v.type is defined and v.type == 'io1'"
+ - "v.zone is defined and v.zone == test_instance.instances[0].placement.availability_zone"
+ vars:
+ v: "{{ volume_info.volumes[0] }}"
+
+ - name: New format check
+ assert:
+ that:
+ - "v.attachment_set[0].delete_on_termination is defined"
+ vars:
+ v: "{{ volume_info.volumes[0] }}"
+ when: ansible_version.full is version('2.7', '>=')
+
+ - name: test create a new gp3 volume
+ vars:
+ ansible_python_interpreter: "{{ botocore_virtualenv_interpreter }}"
+ ec2_vol:
+ volume_size: 70
+ zone: "{{ availability_zone }}"
+ volume_type: gp3
+ throughput: 130
+ iops: 3001
+ name: "GP3-TEST-{{ resource_prefix }}"
+ tags:
+ ResourcePrefix: "{{ resource_prefix }}"
+ register: gp3_volume
+
+ - name: check that volume_type is gp3
+ assert:
+ that:
+ - gp3_volume.changed
+ - "'attachment_set' in gp3_volume.volume"
+ - "'create_time' in gp3_volume.volume"
+ - "'id' in gp3_volume.volume"
+ - "'size' in gp3_volume.volume"
+ - gp3_volume.volume.size == 70
+ - "'volume_type' in gp3_volume"
+ - gp3_volume.volume_type == 'gp3'
+ - "'iops' in gp3_volume.volume"
+ - gp3_volume.volume.iops == 3001
+ - "'throughput' in gp3_volume.volume"
+ - gp3_volume.volume.throughput == 130
+ - "'tags' in gp3_volume.volume"
+ - (gp3_volume.volume.tags | length ) == 2
+ - gp3_volume.volume.tags["ResourcePrefix"] == "{{ resource_prefix }}"
+
+ - name: Read volume information to validate throughput
+ vars:
+ ansible_python_interpreter: "{{ botocore_virtualenv_interpreter }}"
+ ec2_vol_info:
+ filters:
+ volume-id: "{{ gp3_volume.volume_id }}"
+ register: verify_throughput
+
+ - name: throughput must be equal to 130
+ assert:
+ that:
+ - v.throughput == 130
+ vars:
+ v: "{{ verify_throughput.volumes[0] }}"
+
+ - name: print out facts
+ debug:
+ var: vol_facts
+
+
+ - name: increase throughput
+ vars:
+ ansible_python_interpreter: "{{ botocore_virtualenv_interpreter }}"
+ ec2_vol:
+ volume_size: 70
+ zone: "{{ availability_zone }}"
+ volume_type: gp3
+ throughput: 131
+ modify_volume: yes
+ name: "GP3-TEST-{{ resource_prefix }}"
+ tags:
+ ResourcePrefix: "{{ resource_prefix }}"
+ register: gp3_volume
+
+ - name: check that throughput has changed
+ assert:
+ that:
+ - gp3_volume.changed
+ - "'create_time' in gp3_volume.volume"
+ - "'id' in gp3_volume.volume"
+ - "'size' in gp3_volume.volume"
+ - gp3_volume.volume.size == 70
+ - "'volume_type' in gp3_volume"
+ - gp3_volume.volume_type == 'gp3'
+ - "'iops' in gp3_volume.volume"
+ - gp3_volume.volume.iops == 3001
+ - "'throughput' in gp3_volume.volume"
+ - gp3_volume.volume.throughput == 131
+
+ # Multi-Attach disk
+ - name: create disk with multi-attach enabled
+ ec2_vol:
+ volume_size: 4
+ volume_type: io1
+ iops: 102
+ zone: "{{ availability_zone }}"
+ multi_attach: yes
+ tags:
+ ResourcePrefix: "{{ resource_prefix }}"
+ register: multi_attach_disk
+
+ - name: check volume creation
+ assert:
+ that:
+ - multi_attach_disk.changed
+ - "'volume' in multi_attach_disk"
+ - multi_attach_disk.volume.multi_attach_enabled
+
+ - name: attach existing volume to an instance
+ ec2_vol:
+ id: "{{ multi_attach_disk.volume_id }}"
+ instance: "{{ test_instance.instance_ids[0] }}"
+ device_name: /dev/sdk
+ delete_on_termination: no
+ register: vol_attach_result
+
+ - name: Wait for instance to start
+ ec2_instance:
+ name: "{{ instance_name }}-2"
+ state: running
+ image_id: "{{ ec2_ami_id }}"
+ wait: True
+
+ - name: attach existing volume to second instance
+ ec2_vol:
+ id: "{{ multi_attach_disk.volume_id }}"
+ instance: "{{ test_instance_2.instance_ids[0] }}"
+ device_name: /dev/sdg
+ delete_on_termination: no
+ register: vol_attach_result
+
+ - name: check task return attributes
+ assert:
+ that:
+ - vol_attach_result.changed
+ - "'volume' in vol_attach_result"
+ - vol_attach_result.volume.attachment_set | length == 2
+ - 'test_instance.instance_ids[0] in vol_attach_result.volume.attachment_set | map(attribute="instance_id") | list'
+ - 'test_instance_2.instance_ids[0] in vol_attach_result.volume.attachment_set | map(attribute="instance_id") | list'
+
+ # ==== Cleanup ============================================================
+
+ always:
+ - name: Describe the instance before we delete it
+ ec2_instance_info:
+ instance_ids:
+ - "{{ item }}"
+ ignore_errors: yes
+ with_items:
+ - "{{ test_instance.instance_ids[0] }}"
+ - "{{ test_instance_2.instance_ids[0] }}"
+ register: pre_delete
+
+ - debug:
+ var: pre_delete
+
+ - name: delete test instance
+ ec2_instance:
+ instance_ids:
+ - "{{ item }}"
+ state: terminated
+ wait: True
+ with_items:
+ - "{{ test_instance.instance_ids[0] }}"
+ - "{{ test_instance_2.instance_ids[0] }}"
+ ignore_errors: yes
+
+ - name: delete volumes
+ ec2_vol:
+ id: "{{ item.volume_id }}"
+ state: absent
+ ignore_errors: yes
+ with_items:
+ - "{{ volume1 }}"
+ - "{{ volume2 }}"
+ - "{{ volume3 }}"
+ - "{{ new_vol_attach_result }}"
+ - "{{ attach_new_vol_from_snapshot_result }}"
+ - "{{ dot_volume }}"
+ - "{{ gp3_volume }}"
+ - "{{ multi_attach_disk }}"
+
+ - name: delete snapshot
+ ec2_snapshot:
+ snapshot_id: "{{ vol1_snapshot.snapshot_id }}"
+ state: absent
+ ignore_errors: yes
+
+ - name: delete test subnet
+ ec2_vpc_subnet:
+ vpc_id: "{{ testing_vpc.vpc.id }}"
+ cidr: "{{ subnet_cidr }}"
+ state: absent
+ ignore_errors: yes
+
+ - name: delete test VPC
+ ec2_vpc_net:
+ name: "{{ vpc_name }}"
+ cidr_block: "{{ vpc_cidr }}"
+ state: absent
+ ignore_errors: yes
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vol/tasks/tests.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vol/tasks/tests.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vol/tasks/tests.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vol/tasks/tests.yml 1970-01-01 00:00:00.000000000 +0000
@@ -1,686 +0,0 @@
----
-
-- module_defaults:
- group/aws:
- aws_access_key: '{{ aws_access_key | default(omit) }}'
- aws_secret_key: '{{ aws_secret_key | default(omit) }}'
- security_token: '{{ security_token | default(omit) }}'
- region: '{{ aws_region | default(omit) }}'
-
- collections:
- - amazon.aws
- - community.aws
-
- block:
- - name: list available AZs
- aws_az_info:
- register: region_azs
-
- - name: pick an AZ for testing
- set_fact:
- availability_zone: "{{ region_azs.availability_zones[0].zone_name }}"
-
- - name: Create a test VPC
- ec2_vpc_net:
- name: "{{ vpc_name }}"
- cidr_block: "{{ vpc_cidr }}"
- tags:
- Name: ec2_vol testing
- ResourcePrefix: "{{ resource_prefix }}"
- register: testing_vpc
-
- - name: Create a test subnet
- ec2_vpc_subnet:
- vpc_id: "{{ testing_vpc.vpc.id }}"
- cidr: "{{ subnet_cidr }}"
- tags:
- Name: ec2_vol testing
- ResourcePrefix: "{{ resource_prefix }}"
- az: '{{ availability_zone }}'
- register: testing_subnet
-
- - name: Find AMI to use
- ec2_ami_info:
- owners: 'amazon'
- filters:
- name: '{{ ec2_ami_name }}'
- register: ec2_amis
-
- - name: Set fact with latest AMI
- vars:
- latest_ami: '{{ ec2_amis.images | sort(attribute="creation_date") | last }}'
- set_fact:
- ec2_ami_image: '{{ latest_ami.image_id }}'
-
- # ==== ec2_vol tests ===============================================
-
- - name: create a volume (validate module defaults)
- ec2_vol:
- volume_size: 1
- zone: "{{ availability_zone }}"
- tags:
- ResourcePrefix: "{{ resource_prefix }}"
- register: volume1
-
- - name: check task return attributes
- assert:
- that:
- - volume1.changed
- - "'volume' in volume1"
- - "'volume_id' in volume1"
- - "'volume_type' in volume1"
- - "'device' in volume1"
- - volume1.volume.status == 'available'
- - volume1.volume_type == 'standard'
- - "'attachment_set' in volume1.volume"
- - "'instance_id' in volume1.volume.attachment_set"
- - not volume1.volume.attachment_set.instance_id
- - not ("Name" in volume1.volume.tags)
- - not volume1.volume.encrypted
- - volume1.volume.tags.ResourcePrefix == "{{ resource_prefix }}"
-
- # no idempotency check needed here
-
- - name: create another volume (override module defaults)
- ec2_vol:
- encrypted: yes
- volume_size: 4
- volume_type: io1
- iops: 101
- name: "{{ resource_prefix }}"
- tags:
- ResourcePrefix: "{{ resource_prefix }}"
- zone: "{{ availability_zone }}"
- register: volume2
-
- - name: check task return attributes
- assert:
- that:
- - volume2.changed
- - "'volume' in volume2"
- - "'volume_id' in volume2"
- - "'volume_type' in volume2"
- - "'device' in volume2"
- - volume2.volume.status == 'available'
- - volume2.volume_type == 'io1'
- - volume2.volume.iops == 101
- - volume2.volume.size == 4
- - volume2.volume.tags.Name == "{{ resource_prefix }}"
- - volume2.volume.encrypted
- - volume2.volume.tags.ResourcePrefix == "{{ resource_prefix }}"
-
- - name: create another volume (override module defaults) (idempotent)
- ec2_vol:
- encrypted: yes
- volume_size: 4
- volume_type: io1
- iops: 101
- name: "{{ resource_prefix }}"
- tags:
- ResourcePrefix: "{{ resource_prefix }}"
- zone: "{{ availability_zone }}"
- register: volume2_idem
-
- - name: check task return attributes
- assert:
- that:
- - not volume2_idem.changed
-
- - name: create snapshot from volume
- ec2_snapshot:
- volume_id: "{{ volume1.volume_id }}"
- description: "Resource Prefix - {{ resource_prefix }}"
- snapshot_tags:
- ResourcePrefix: "{{ resource_prefix }}"
- register: vol1_snapshot
-
- - name: check task return attributes
- assert:
- that:
- - vol1_snapshot.changed
-
- - name: create a volume from a snapshot
- ec2_vol:
- snapshot: "{{ vol1_snapshot.snapshot_id }}"
- encrypted: yes
- volume_type: gp2
- volume_size: 1
- zone: "{{ availability_zone }}"
- tags:
- ResourcePrefix: "{{ resource_prefix }}"
- register: volume3
-
- - name: check task return attributes
- assert:
- that:
- - volume3.changed
- - "volume3.volume.snapshot_id == vol1_snapshot.snapshot_id"
-
- - name: create an ec2 instance
- ec2_instance:
- name: "{{ resource_prefix }}"
- vpc_subnet_id: "{{ testing_subnet.subnet.id }}"
- instance_type: t3.nano
- image_id: "{{ ec2_ami_image }}"
- tags:
- ResourcePrefix: "{{ resource_prefix }}"
- register: test_instance
-
- - name: check task return attributes
- assert:
- that:
- - test_instance.changed
-
- - name: attach existing volume to an instance
- ec2_vol:
- id: "{{ volume1.volume_id }}"
- instance: "{{ test_instance.instance_ids[0] }}"
- device_name: /dev/sdg
- delete_on_termination: no
- register: vol_attach_result
-
- - name: check task return attributes
- assert:
- that:
- - vol_attach_result.changed
- - "'device' in vol_attach_result and vol_attach_result.device == '/dev/sdg'"
- - "'volume' in vol_attach_result"
- - vol_attach_result.volume.attachment_set.status in ['attached', 'attaching']
- - vol_attach_result.volume.attachment_set.instance_id == test_instance.instance_ids[0]
- - vol_attach_result.volume.attachment_set.device == '/dev/sdg'
-
-# Failing
-# - "vol_attach_result.volume.attachment_set.deleteOnTermination"
-
- - name: attach existing volume to an instance (idempotent)
- ec2_vol:
- id: "{{ volume1.volume_id }}"
- instance: "{{ test_instance.instance_ids[0] }}"
- device_name: /dev/sdg
- delete_on_termination: no
- register: vol_attach_result
-
- - name: check task return attributes
- assert:
- that:
- - "not vol_attach_result.changed"
- - vol_attach_result.volume.attachment_set.status in ['attached', 'attaching']
-
- - name: attach a new volume to an instance
- ec2_vol:
- instance: "{{ test_instance.instance_ids[0] }}"
- device_name: /dev/sdh
- volume_size: 1
- volume_type: gp2
- name: '{{ resource_prefix }} - sdh'
- tags:
- "lowercase spaced": 'hello cruel world'
- "Title Case": 'Hello Cruel World'
- CamelCase: 'SimpleCamelCase'
- snake_case: 'simple_snake_case'
- ResourcePrefix: "{{ resource_prefix }}"
- register: new_vol_attach_result
-
- - name: check task return attributes
- assert:
- that:
- - new_vol_attach_result.changed
- - "'device' in new_vol_attach_result and new_vol_attach_result.device == '/dev/sdh'"
- - "'volume' in new_vol_attach_result"
- - new_vol_attach_result.volume.attachment_set.status in ['attached', 'attaching']
- - new_vol_attach_result.volume.attachment_set.instance_id == test_instance.instance_ids[0]
- - new_vol_attach_result.volume.attachment_set.device == '/dev/sdh'
- - new_vol_attach_result.volume.tags["lowercase spaced"] == 'hello cruel world'
- - new_vol_attach_result.volume.tags["Title Case"] == 'Hello Cruel World'
- - new_vol_attach_result.volume.tags["CamelCase"] == 'SimpleCamelCase'
- - new_vol_attach_result.volume.tags["snake_case"] == 'simple_snake_case'
- - new_vol_attach_result.volume.tags["Name"] == '{{ resource_prefix }} - sdh'
-
-
- - name: attach a new volume to an instance (idempotent)
- ec2_vol:
- instance: "{{ test_instance.instance_ids[0] }}"
- device_name: /dev/sdh
- volume_size: 1
- volume_type: gp2
- tags:
- ResourcePrefix: "{{ resource_prefix }}"
- register: new_vol_attach_result_idem
- ignore_errors: true
-
- - name: check task return attributes
- assert:
- that:
- - "not new_vol_attach_result_idem.changed"
- - "'Volume mapping for /dev/sdh already exists' in new_vol_attach_result_idem.msg"
-
- - name: change some tag values
- ec2_vol:
- instance: "{{ test_instance.instance_ids[0] }}"
- id: "{{ new_vol_attach_result.volume.id }}"
- device_name: /dev/sdh
- volume_size: 1
- volume_type: gp2
- tags:
- "lowercase spaced": 'hello cruel world ❤️'
- "Title Case": 'Hello Cruel World ❤️'
- CamelCase: 'SimpleCamelCase ❤️'
- snake_case: 'simple_snake_case ❤️'
- register: new_vol_attach_result
-
- - name: check task return attributes
- assert:
- that:
- - new_vol_attach_result.changed
- - "'volume_id' in new_vol_attach_result"
- - new_vol_attach_result.volume_id == "{{ new_vol_attach_result.volume_id }}"
- - "'attachment_set' in new_vol_attach_result.volume"
- - "'create_time' in new_vol_attach_result.volume"
- - "'id' in new_vol_attach_result.volume"
- - "'size' in new_vol_attach_result.volume"
- - new_vol_attach_result.volume.size == 1
- - "'volume_type' in new_vol_attach_result"
- - new_vol_attach_result.volume_type == 'gp2'
- - "'tags' in new_vol_attach_result.volume"
- - (new_vol_attach_result.volume.tags | length) == 6
- - new_vol_attach_result.volume.tags["lowercase spaced"] == 'hello cruel world ❤️'
- - new_vol_attach_result.volume.tags["Title Case"] == 'Hello Cruel World ❤️'
- - new_vol_attach_result.volume.tags["CamelCase"] == 'SimpleCamelCase ❤️'
- - new_vol_attach_result.volume.tags["snake_case"] == 'simple_snake_case ❤️'
- - new_vol_attach_result.volume.tags["ResourcePrefix"] == resource_prefix
- - new_vol_attach_result.volume.tags["Name"] == '{{ resource_prefix }} - sdh'
-
-
- - name: create a volume from a snapshot and attach to the instance
- ec2_vol:
- instance: "{{ test_instance.instance_ids[0] }}"
- device_name: /dev/sdi
- snapshot: "{{ vol1_snapshot.snapshot_id }}"
- tags:
- ResourcePrefix: "{{ resource_prefix }}"
- register: attach_new_vol_from_snapshot_result
-
- - name: check task return attributes
- assert:
- that:
- - attach_new_vol_from_snapshot_result.changed
- - "'device' in attach_new_vol_from_snapshot_result and attach_new_vol_from_snapshot_result.device == '/dev/sdi'"
- - "'volume' in attach_new_vol_from_snapshot_result"
- - attach_new_vol_from_snapshot_result.volume.attachment_set.status in ['attached', 'attaching']
- - attach_new_vol_from_snapshot_result.volume.attachment_set.instance_id == test_instance.instance_ids[0]
-
- - name: list volumes attached to instance
- ec2_vol:
- instance: "{{ test_instance.instance_ids[0] }}"
- state: list
- register: inst_vols
-
- - name: check task return attributes
- assert:
- that:
- - not inst_vols.changed
- - "'volumes' in inst_vols"
- - inst_vols.volumes | length == 4
-
- - name: get info on ebs volumes
- ec2_vol_info:
- register: ec2_vol_info
-
- - name: check task return attributes
- assert:
- that:
- - not ec2_vol_info.failed
-
- - name: get info on ebs volumes
- ec2_vol_info:
- filters:
- attachment.instance-id: "{{ test_instance.instance_ids[0] }}"
- register: ec2_vol_info
-
- - name: check task return attributes
- assert:
- that:
- - ec2_vol_info.volumes | length == 4
-
- - name: must not change because of missing parameter modify_volume
- ec2_vol:
- id: "{{ new_vol_attach_result.volume_id }}"
- zone: "{{ availability_zone }}"
- volume_type: gp3
- register: changed_gp3_volume
-
-    - name: volume must not change
- assert:
- that:
- - not changed_gp3_volume.changed
-
- - name: change existing volume to gp3
- ec2_vol:
- id: "{{ new_vol_attach_result.volume_id }}"
- zone: "{{ availability_zone }}"
- volume_type: gp3
- modify_volume: yes
- register: changed_gp3_volume
-
- - name: check that volume_type has changed
- assert:
- that:
- - changed_gp3_volume.changed
- - "'volume_id' in changed_gp3_volume"
- - changed_gp3_volume.volume_id == "{{ new_vol_attach_result.volume_id }}"
- - "'attachment_set' in changed_gp3_volume.volume"
- - "'create_time' in changed_gp3_volume.volume"
- - "'id' in changed_gp3_volume.volume"
- - "'size' in changed_gp3_volume.volume"
- - "'volume_type' in changed_gp3_volume"
- - changed_gp3_volume.volume_type == 'gp3'
- - "'iops' in changed_gp3_volume.volume"
- - changed_gp3_volume.volume.iops == 3000
- # Ensure our tags are still here
- - "'tags' in changed_gp3_volume.volume"
- - (changed_gp3_volume.volume.tags | length) == 6
- - new_vol_attach_result.volume.tags["lowercase spaced"] == 'hello cruel world ❤️'
- - new_vol_attach_result.volume.tags["Title Case"] == 'Hello Cruel World ❤️'
- - new_vol_attach_result.volume.tags["CamelCase"] == 'SimpleCamelCase ❤️'
- - new_vol_attach_result.volume.tags["snake_case"] == 'simple_snake_case ❤️'
- - new_vol_attach_result.volume.tags["ResourcePrefix"] == resource_prefix
- - new_vol_attach_result.volume.tags["Name"] == '{{ resource_prefix }} - sdh'
-
- - name: volume must be from type gp3 (idempotent)
- ec2_vol:
- id: "{{ new_vol_attach_result.volume_id }}"
- zone: "{{ availability_zone }}"
- volume_type: gp3
- modify_volume: yes
- register: changed_gp3_volume
- retries: 3
- delay: 3
- until: not changed_gp3_volume.failed
-      # retry because ebs change is too slow
-
-    - name: must not change (idempotent)
- assert:
- that:
- - not changed_gp3_volume.changed
- - "'volume_id' in changed_gp3_volume"
- - changed_gp3_volume.volume_id == "{{ new_vol_attach_result.volume_id }}"
- - "'attachment_set' in changed_gp3_volume.volume"
- - "'create_time' in changed_gp3_volume.volume"
- - "'id' in changed_gp3_volume.volume"
- - "'size' in changed_gp3_volume.volume"
- - "'volume_type' in changed_gp3_volume"
- - changed_gp3_volume.volume_type == 'gp3'
- - "'iops' in changed_gp3_volume.volume"
- - changed_gp3_volume.volume.iops == 3000
- - "'throughput' in changed_gp3_volume.volume"
- - "'tags' in changed_gp3_volume.volume"
- - (changed_gp3_volume.volume.tags | length) == 6
- - new_vol_attach_result.volume.tags["lowercase spaced"] == 'hello cruel world ❤️'
- - new_vol_attach_result.volume.tags["Title Case"] == 'Hello Cruel World ❤️'
- - new_vol_attach_result.volume.tags["CamelCase"] == 'SimpleCamelCase ❤️'
- - new_vol_attach_result.volume.tags["snake_case"] == 'simple_snake_case ❤️'
- - new_vol_attach_result.volume.tags["ResourcePrefix"] == resource_prefix
- - new_vol_attach_result.volume.tags["Name"] == '{{ resource_prefix }} - sdh'
-
- - name: re-read volume information to validate new volume_type
- ec2_vol_info:
- filters:
- volume-id: "{{ changed_gp3_volume.volume_id }}"
- register: verify_gp3_change
-
- - name: volume type must be gp3
- assert:
- that:
- - v.type == 'gp3'
- vars:
- v: "{{ verify_gp3_change.volumes[0] }}"
-
- - name: detach volume from the instance
- ec2_vol:
- id: "{{ new_vol_attach_result.volume_id }}"
- instance: ""
- register: new_vol_attach_result
-
- - name: check task return attributes
- assert:
- that:
- - new_vol_attach_result.changed
- - new_vol_attach_result.volume.status == 'available'
-
- - name: detach volume from the instance (idempotent)
- ec2_vol:
- id: "{{ new_vol_attach_result.volume_id }}"
- instance: ""
- register: new_vol_attach_result_idem
-
- - name: check task return attributes
- assert:
- that:
- - not new_vol_attach_result_idem.changed
-
- - name: delete volume
- ec2_vol:
- id: "{{ volume2.volume_id }}"
- state: absent
- register: delete_volume_result
-
- - name: check task return attributes
- assert:
- that:
- - "delete_volume_result.changed"
-
- - name: delete volume (idempotent)
- ec2_vol:
- id: "{{ volume2.volume_id }}"
- state: absent
- register: delete_volume_result_idem
-
- - name: check task return attributes
- assert:
- that:
- - not delete_volume_result_idem.changed
- - '"Volume {{ volume2.volume_id }} does not exist" in delete_volume_result_idem.msg'
-
- # Originally from ec2_vol_info
-
- - name: Create test volume with Destroy on Terminate
- ec2_vol:
- instance: "{{ test_instance.instance_ids[0] }}"
- volume_size: 4
- name: "{{ resource_prefix }}_delete_on_terminate"
- device_name: /dev/sdj
- volume_type: io1
- iops: 100
- tags:
- Tag Name with Space-and-dash: Tag Value with Space-and-dash
- delete_on_termination: yes
- register: dot_volume
-
- - name: check task return attributes
- assert:
- that:
- - dot_volume.changed
- - "'attachment_set' in dot_volume.volume"
- - "'deleteOnTermination' in dot_volume.volume.attachment_set"
- - "dot_volume.volume.attachment_set.deleteOnTermination is defined"
- - "'create_time' in dot_volume.volume"
- - "'id' in dot_volume.volume"
- - "'size' in dot_volume.volume"
- - dot_volume.volume.size == 4
- - "'volume_type' in dot_volume"
- - dot_volume.volume_type == 'io1'
- - "'iops' in dot_volume.volume"
- - dot_volume.volume.iops == 100
- - "'tags' in dot_volume.volume"
- - (dot_volume.volume.tags | length ) == 2
- - dot_volume.volume.tags["Name"] == "{{ resource_prefix }}_delete_on_terminate"
- - dot_volume.volume.tags["Tag Name with Space-and-dash"] == 'Tag Value with Space-and-dash'
-
- - name: Gather volume info without any filters
- ec2_vol_info:
- register: volume_info_wo_filters
- check_mode: no
-
-    - name: Check that info is returned without filters
- assert:
- that:
- - "volume_info_wo_filters.volumes is defined"
-
- - name: Gather volume info
- ec2_vol_info:
- filters:
- "tag:Name": "{{ resource_prefix }}_delete_on_terminate"
- register: volume_info
- check_mode: no
-
- - name: Format check
- assert:
- that:
- - "volume_info.volumes|length == 1"
- - "v.attachment_set.attach_time is defined"
- - "v.attachment_set.device is defined and v.attachment_set.device == dot_volume.device"
- - "v.attachment_set.instance_id is defined and v.attachment_set.instance_id == test_instance.instance_ids[0]"
- - "v.attachment_set.status is defined and v.attachment_set.status == 'attached'"
- - "v.create_time is defined"
- - "v.encrypted is defined and v.encrypted == false"
- - "v.id is defined and v.id == dot_volume.volume_id"
- - "v.iops is defined and v.iops == 100"
- - "v.region is defined and v.region == aws_region"
- - "v.size is defined and v.size == 4"
- - "v.snapshot_id is defined and v.snapshot_id == ''"
- - "v.status is defined and v.status == 'in-use'"
- - "v.tags.Name is defined and v.tags.Name == resource_prefix + '_delete_on_terminate'"
- - "v.tags['Tag Name with Space-and-dash'] == 'Tag Value with Space-and-dash'"
- - "v.type is defined and v.type == 'io1'"
- - "v.zone is defined and v.zone == test_instance.instances[0].placement.availability_zone"
- vars:
- v: "{{ volume_info.volumes[0] }}"
-
- - name: New format check
- assert:
- that:
- - "v.attachment_set.delete_on_termination is defined"
- vars:
- v: "{{ volume_info.volumes[0] }}"
- when: ansible_version.full is version('2.7', '>=')
-
- - name: test create a new gp3 volume
- ec2_vol:
- volume_size: 7
- zone: "{{ availability_zone }}"
- volume_type: gp3
- throughput: 130
- iops: 3001
- name: "GP3-TEST-{{ resource_prefix }}"
- tags:
- ResourcePrefix: "{{ resource_prefix }}"
- register: gp3_volume
-
- - name: check that volume_type is gp3
- assert:
- that:
- - gp3_volume.changed
- - "'attachment_set' in gp3_volume.volume"
- - "'deleteOnTermination' in gp3_volume.volume.attachment_set"
- - gp3_volume.volume.attachment_set.deleteOnTermination == none
- - "'create_time' in gp3_volume.volume"
- - "'id' in gp3_volume.volume"
- - "'size' in gp3_volume.volume"
- - gp3_volume.volume.size == 7
- - "'volume_type' in gp3_volume"
- - gp3_volume.volume_type == 'gp3'
- - "'iops' in gp3_volume.volume"
- - gp3_volume.volume.iops == 3001
- - "'throughput' in gp3_volume.volume"
- - gp3_volume.volume.throughput == 130
- - "'tags' in gp3_volume.volume"
- - (gp3_volume.volume.tags | length ) == 2
- - gp3_volume.volume.tags["ResourcePrefix"] == "{{ resource_prefix }}"
-
- - name: increase throughput
- ec2_vol:
- volume_size: 7
- zone: "{{ availability_zone }}"
- volume_type: gp3
- throughput: 131
- modify_volume: yes
- name: "GP3-TEST-{{ resource_prefix }}"
- tags:
- ResourcePrefix: "{{ resource_prefix }}"
- register: gp3_volume
-
- - name: check that throughput has changed
- assert:
- that:
- - gp3_volume.changed
- - "'attachment_set' in gp3_volume.volume"
- - "'deleteOnTermination' in gp3_volume.volume.attachment_set"
- - gp3_volume.volume.attachment_set.deleteOnTermination == none
- - "'create_time' in gp3_volume.volume"
- - "'id' in gp3_volume.volume"
- - "'size' in gp3_volume.volume"
- - gp3_volume.volume.size == 7
- - "'volume_type' in gp3_volume"
- - gp3_volume.volume_type == 'gp3'
- - "'iops' in gp3_volume.volume"
- - gp3_volume.volume.iops == 3001
- - "'throughput' in gp3_volume.volume"
- - gp3_volume.volume.throughput == 131
- - "'tags' in gp3_volume.volume"
- - (gp3_volume.volume.tags | length ) == 2
- - gp3_volume.volume.tags["ResourcePrefix"] == "{{ resource_prefix }}"
-
-
- # ==== Cleanup ============================================================
-
- always:
- - name: Describe the instance before we delete it
- ec2_instance_info:
- instance_ids:
- - "{{ test_instance.instance_ids[0] }}"
- ignore_errors: yes
- register: pre_delete
-
- - debug:
- var: pre_delete
-
- - name: delete test instance
- ec2_instance:
- instance_ids:
- - "{{ test_instance.instance_ids[0] }}"
- state: terminated
- ignore_errors: yes
-
- - name: delete volumes
- ec2_vol:
- id: "{{ item.volume_id }}"
- state: absent
- ignore_errors: yes
- with_items:
- - "{{ volume1 }}"
- - "{{ volume2 }}"
- - "{{ volume3 }}"
- - "{{ new_vol_attach_result }}"
- - "{{ attach_new_vol_from_snapshot_result }}"
- - "{{ dot_volume }}"
- - "{{ gp3_volume }}"
-
- - name: delete snapshot
- ec2_snapshot:
- snapshot_id: "{{ vol1_snapshot.snapshot_id }}"
- state: absent
- ignore_errors: yes
-
- - name: delete test subnet
- ec2_vpc_subnet:
- vpc_id: "{{ testing_vpc.vpc.id }}"
- cidr: "{{ subnet_cidr }}"
- state: absent
- ignore_errors: yes
-
- - name: delete test VPC
- ec2_vpc_net:
- name: "{{ vpc_name }}"
- cidr_block: "{{ vpc_cidr }}"
- state: absent
- ignore_errors: yes
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_dhcp_option/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_dhcp_option/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_dhcp_option/aliases 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_dhcp_option/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -1,2 +1 @@
cloud/aws
-shippable/aws/group1
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_dhcp_option/tasks/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_dhcp_option/tasks/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_dhcp_option/tasks/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_dhcp_option/tasks/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -2,12 +2,8 @@
# ============================================================
# Known issues:
#
-# `check_mode` throws a traceback when providing options
# there is no way to associate the `default` option set in the module
-# ec2_vpc_dhcp_option_info needs to use camel_dict_to_snake_dict(..., ignore_list=['Tags'])
-# Purging tags does nothing, but reports changed
# The module doesn't store/return tags in the new_options dictionary
-# Adding tags is silently ignored and no change is made
# always reassociated (changed=True) when vpc_id is provided without options
#
# ============================================================
@@ -18,9 +14,6 @@
security_token: "{{ security_token | default('') }}"
region: "{{ aws_region }}"
- collections:
- - community.general
-
block:
# DHCP option set can be attached to multiple VPCs, we don't want to use any that
@@ -30,7 +23,7 @@
register: result
- set_fact:
- preexisting_option_sets: "{{ result | community.general.json_query('dhcp_options[*].dhcp_options_id') | list }}"
+ preexisting_option_sets: "{{ result.dhcp_options | map(attribute='dhcp_options_id') | list }}"
- name: create a VPC with a default DHCP option set to test inheritance and delete_old
ec2_vpc_net:
@@ -71,7 +64,9 @@
ec2_vpc_dhcp_option:
state: present
vpc_id: "{{ vpc_id }}"
+ purge_tags: False
dhcp_options_id: "{{ new_dhcp_options.dhcp_options_id }}"
+## ============================================
- name: find the VPC's associated option set
ec2_vpc_net_info:
@@ -93,7 +88,7 @@
that:
- original_dhcp_options_info.dhcp_options | length == 1
- original_config.keys() | list | sort == ['domain-name', 'domain-name-servers']
- - original_config['domain-name'][0]['value'] == "{{ aws_domain_name }}"
+ - original_config['domain-name'][0]['value'] == '{{ aws_domain_name }}'
- original_config['domain-name-servers'][0]['value'] == 'AmazonProvidedDNS'
- original_dhcp_options_id not in preexisting_option_sets
@@ -105,14 +100,17 @@
ec2_vpc_dhcp_option:
state: present
vpc_id: "{{ vpc_id }}"
+ domain_name: "{{ aws_domain_name }}"
+ dns_servers:
+ - AmazonProvidedDNS
+ tags:
+ Name: "{{ resource_prefix }}"
register: found_dhcp_options
check_mode: true
- assert:
that:
- # FIXME: options have to be provided to match the option associated with the VPC
- not found_dhcp_options.changed
- - not found_dhcp_options.new_options
# FIXME: always reassociated when vpc_id is provided without options, so here we provide the default options
- name: test a DHCP option exists
@@ -130,30 +128,28 @@
that:
- found_dhcp_options is not changed
- found_dhcp_options.dhcp_options_id is defined
- - not found_dhcp_options.changed or dhcp_options is defined and dhcp_options.dhcp_options_id == found_dhcp_options.dhcp_options_id
+ - original_dhcp_options_id == found_dhcp_options.dhcp_options_id
# Create a DHCP option set that inherits from the default set and does not delete the old set
+ - name: create a DHCP option set that inherits from the default set (check mode)
+ ec2_vpc_dhcp_option:
+ state: present
+ vpc_id: "{{ vpc_id }}"
+ inherit_existing: True
+ ntp_servers:
+ - 10.0.0.2
+ - 10.0.1.2
+ netbios_name_servers:
+ - 10.0.0.1
+ - 10.0.1.1
+ netbios_node_type: 2
+ delete_old: False
+ register: dhcp_options
+ check_mode: true
- # FIXME: check mode causes a traceback
- #- name: create a DHCP option set that inherits from the default set (check mode)
- # ec2_vpc_dhcp_option:
- # state: present
- # vpc_id: "{{ vpc_id }}"
- # inherit_existing: True
- # ntp_servers:
- # - 10.0.0.2
- # - 10.0.1.2
- # netbios_name_servers:
- # - 10.0.0.1
- # - 10.0.1.1
- # netbios_node_type: 2
- # delete_old: False
- # register: dhcp_options
- # check_mode: true
-
- #- assert:
- # that:
- # - dhcp_options.changed
+ - assert:
+ that:
+ - dhcp_options.changed
- name: create a DHCP option set that inherits from the default set
ec2_vpc_dhcp_option:
@@ -170,6 +166,9 @@
delete_old: False
register: dhcp_options
+ - set_fact:
+ dhcp_options_config: "{{ dhcp_options.dhcp_options.dhcp_configurations | items2dict(key_name='key', value_name='values') }}"
+
- assert:
that:
- dhcp_options.changed
@@ -177,11 +176,15 @@
- dhcp_options.new_options.keys() | list | sort == ['domain-name', 'domain-name-servers', 'netbios-name-servers', 'netbios-node-type', 'ntp-servers']
- dhcp_options.new_options['ntp-servers'] | sort == ['10.0.0.2', '10.0.1.2']
- dhcp_options.new_options['netbios-name-servers'] | sort == ['10.0.0.1', '10.0.1.1']
- # FIXME: module/aws randomly returns as a string or list
- - dhcp_options.new_options['netbios-node-type'] in ['2', ['2']]
- # found I think is false, because the options are different and no id is provided?
- - dhcp_options.new_options['domain-name'] in ["{{ aws_domain_name }}", ["{{ aws_domain_name }}"]]
- - dhcp_options.new_options['domain-name-servers'] in ['AmazonProvidedDNS', ['AmazonProvidedDNS']]
+ - dhcp_options.new_options['netbios-node-type'] == '2'
+ - dhcp_options.new_options['domain-name'] == ['{{ aws_domain_name }}']
+ - dhcp_options.new_options['domain-name-servers'] == ['AmazonProvidedDNS']
+ # We return the list of dicts that boto gives us, in addition to the user-friendly config dict
+ - dhcp_options_config['ntp-servers'] | map(attribute='value') | list | sort == ['10.0.0.2', '10.0.1.2']
+ - dhcp_options_config['netbios-name-servers'] | map(attribute='value') | list | sort == ['10.0.0.1', '10.0.1.1']
+ - dhcp_options_config['netbios-node-type'][0]['value'] == '2'
+ - dhcp_options_config['domain-name'][0]['value'] == '{{ aws_domain_name }}'
+ - dhcp_options_config['domain-name-servers'][0]['value'] == 'AmazonProvidedDNS'
- original_dhcp_options_id != dhcp_options.dhcp_options_id
- set_fact:
@@ -198,11 +201,18 @@
- assert:
that:
- new_config.keys() | list | sort == ['domain-name', 'domain-name-servers', 'netbios-name-servers', 'netbios-node-type', 'ntp-servers']
- - new_config['domain-name'][0]['value'] == "{{ aws_domain_name }}"
+ - new_config['domain-name'][0]['value'] == '{{ aws_domain_name }}'
- new_config['domain-name-servers'][0]['value'] == 'AmazonProvidedDNS'
- - new_config['ntp-servers'] | community.general.json_query('[*].value') | list | sort == ['10.0.0.2', '10.0.1.2']
- - new_config['netbios-name-servers'] | community.general.json_query('[*].value') | list | sort == ['10.0.0.1', '10.0.1.1']
- - new_config['netbios-node-type'][0]['value'] in ['2', ['2']]
+ - new_config['ntp-servers'] | map(attribute='value') | list | sort == ['10.0.0.2', '10.0.1.2']
+ - new_config['netbios-name-servers'] | map(attribute='value') | list | sort == ['10.0.0.1', '10.0.1.1']
+ - new_config['netbios-node-type'][0]['value'] == '2'
+ # We return the list of dicts that boto gives us, in addition to the user-friendly config dict
+ - new_dhcp_options.dhcp_config[0]['ntp-servers'] | sort == ['10.0.0.2', '10.0.1.2']
+ - new_dhcp_options.dhcp_config[0]['netbios-name-servers'] | sort == ['10.0.0.1', '10.0.1.1']
+ - new_dhcp_options.dhcp_config[0]['netbios-node-type'] == '2'
+ - new_dhcp_options.dhcp_config[0]['domain-name'] == ['{{ aws_domain_name }}']
+ - new_dhcp_options.dhcp_config[0]['domain-name-servers'] == ['AmazonProvidedDNS']
+
# FIXME: no way to associate `default` in the module
- name: Re-associate the default DHCP options set so that the new one can be deleted
@@ -225,26 +235,25 @@
# Create a DHCP option set that does not inherit from the old set and doesn't delete the old set
- # FIXME: check mode causes a traceback
- #- name: create a DHCP option set that does not inherit from the default set (check mode)
- # ec2_vpc_dhcp_option:
- # state: present
- # vpc_id: "{{ vpc_id }}"
- # inherit_existing: False
- # ntp_servers:
- # - 10.0.0.2
- # - 10.0.1.2
- # netbios_name_servers:
- # - 10.0.0.1
- # - 10.0.1.1
- # netbios_node_type: 2
- # delete_old: False
- # register: dhcp_options
- # check_mode: true
-
- #- assert:
- # that:
- # - dhcp_options.changed
+ - name: create a DHCP option set that does not inherit from the default set (check mode)
+ ec2_vpc_dhcp_option:
+ state: present
+ vpc_id: "{{ vpc_id }}"
+ inherit_existing: False
+ ntp_servers:
+ - 10.0.0.2
+ - 10.0.1.2
+ netbios_name_servers:
+ - 10.0.0.1
+ - 10.0.1.1
+ netbios_node_type: 2
+ delete_old: False
+ register: dhcp_options
+ check_mode: true
+
+ - assert:
+ that:
+ - dhcp_options.changed
- name: create a DHCP option set that does not inherit from the default set
ec2_vpc_dhcp_option:
@@ -261,18 +270,23 @@
delete_old: False
register: dhcp_options
+ - set_fact:
+ dhcp_options_config: "{{ dhcp_options.dhcp_options.dhcp_configurations | items2dict(key_name='key', value_name='values') }}"
+
- assert:
that:
- dhcp_options.changed
- dhcp_options.new_options
# FIXME extra keys are returned unpredictably
- - dhcp_options.new_options.keys() | list | sort is superset(['netbios-name-servers', 'netbios-node-type', 'ntp-servers'])
+ - dhcp_options.new_options.keys() | list | sort == ['netbios-name-servers', 'netbios-node-type', 'ntp-servers']
- dhcp_options.new_options['ntp-servers'] | sort == ['10.0.0.2', '10.0.1.2']
- dhcp_options.new_options['netbios-name-servers'] | sort == ['10.0.0.1', '10.0.1.1']
- # found should be false, so listified - found does [0] assignment
-# - dhcp_options.new_options['netbios-node-type'] == ['2']
- - dhcp_options.new_options['netbios-node-type'] in ['2', ['2']]
+ - dhcp_options.new_options['netbios-node-type'] == '2'
- original_dhcp_options_id != dhcp_options.dhcp_options_id
+ # We return the list of dicts that boto gives us, in addition to the user-friendly config dict
+ - new_dhcp_options.dhcp_config[0]['ntp-servers'] | sort == ['10.0.0.2', '10.0.1.2']
+ - new_dhcp_options.dhcp_config[0]['netbios-name-servers'] | sort == ['10.0.0.1', '10.0.1.1']
+ - new_dhcp_options.dhcp_config[0]['netbios-node-type'] == '2'
- set_fact:
new_dhcp_options_id: "{{ dhcp_options.dhcp_options_id }}"
@@ -288,8 +302,8 @@
- assert:
that:
- new_config.keys() | list | sort == ['netbios-name-servers', 'netbios-node-type', 'ntp-servers']
- - new_config['ntp-servers'] | community.general.json_query('[*].value') | list | sort == ['10.0.0.2', '10.0.1.2']
- - new_config['netbios-name-servers'] | community.general.json_query('[*].value') | list | sort == ['10.0.0.1', '10.0.1.1']
+ - new_config['ntp-servers'] | map(attribute='value') | list | sort == ['10.0.0.2', '10.0.1.2']
+ - new_config['netbios-name-servers'] | map(attribute='value') | list | sort == ['10.0.0.1', '10.0.1.1']
- new_config['netbios-node-type'][0]['value'] == '2'
- name: disassociate the new DHCP option set so it can be deleted
@@ -304,30 +318,27 @@
state: absent
# Create a DHCP option set that inherits from the default set overwrites a default and deletes the old set
+ - name: create a DHCP option set that inherits from the default set and deletes the original set (check mode)
+ ec2_vpc_dhcp_option:
+ state: present
+ vpc_id: "{{ vpc_id }}"
+ inherit_existing: True
+ domain_name: us-west-2.compute.internal
+ ntp_servers:
+ - 10.0.0.2
+ - 10.0.1.2
+ netbios_name_servers:
+ - 10.0.0.1
+ - 10.0.1.1
+ netbios_node_type: 2
+ delete_old: True
+ register: dhcp_options
+ check_mode: true
- # FIXME: check mode traceback
- #- name: create a DHCP option set that inherits from the default set and deletes the original set (check mode)
- # ec2_vpc_dhcp_option:
- # state: present
- # vpc_id: "{{ vpc_id }}"
- # inherit_existing: True
- # domain_name: us-west-2.compute.internal
- # ntp_servers:
- # - 10.0.0.2
- # - 10.0.1.2
- # netbios_name_servers:
- # - 10.0.0.1
- # - 10.0.1.1
- # netbios_node_type: 2
- # delete_old: True
- # register: dhcp_options
- # check_mode: true
-
- #- assert:
- # that:
- # - dhcp_options.changed
+ - assert:
+ that:
+ - dhcp_options.changed
- # FIXME: doesn't delete the original set
- name: create a DHCP option set that inherits from the default set and deletes the original set
ec2_vpc_dhcp_option:
state: present
@@ -348,11 +359,11 @@
that:
- dhcp_options.changed
- dhcp_options.new_options
- - dhcp_options.new_options.keys() | list | sort is superset(['domain-name', 'netbios-name-servers', 'netbios-node-type', 'ntp-servers'])
+ - dhcp_options.new_options.keys() | list | sort == ['domain-name', 'domain-name-servers', 'netbios-name-servers', 'netbios-node-type', 'ntp-servers']
- dhcp_options.new_options['ntp-servers'] | sort == ['10.0.0.2', '10.0.1.2']
- dhcp_options.new_options['netbios-name-servers'] | sort == ['10.0.0.1', '10.0.1.1']
- - dhcp_options.new_options['netbios-node-type'] in ['1', ['1']]
- - dhcp_options.new_options['domain-name'] in ["{{ aws_domain_name }}", ["{{ aws_domain_name }}"]]
+ - dhcp_options.new_options['netbios-node-type'] == '1'
+ - dhcp_options.new_options['domain-name'] == ['{{ aws_domain_name }}']
- original_dhcp_options_id != dhcp_options.dhcp_options_id
- set_fact:
@@ -368,10 +379,10 @@
- assert:
that:
- - new_config.keys() | list | sort is superset(['domain-name', 'netbios-name-servers', 'netbios-node-type', 'ntp-servers'])
- - new_config['domain-name'][0]['value'] in ["{{ aws_domain_name }}", ["{{ aws_domain_name }}"]]
- - new_config['ntp-servers'] | community.general.json_query('[*].value') | list | sort == ['10.0.0.2', '10.0.1.2']
- - new_config['netbios-name-servers'] | community.general.json_query('[*].value') | list | sort == ['10.0.0.1', '10.0.1.1']
+ - new_config.keys() | list | sort == ['domain-name', 'domain-name-servers', 'netbios-name-servers', 'netbios-node-type', 'ntp-servers']
+ - new_config['domain-name'][0]['value'] == '{{ aws_domain_name }}'
+ - new_config['ntp-servers'] | map(attribute='value') | list | sort == ['10.0.0.2', '10.0.1.2']
+ - new_config['netbios-name-servers'] | map(attribute='value') | list | sort == ['10.0.0.1', '10.0.1.1']
- new_config['netbios-node-type'][0]['value'] == '1'
- name: verify the original set was deleted
@@ -384,12 +395,6 @@
that:
- dhcp_options.failed
- '"does not exist" in dhcp_options.error.message'
- ignore_errors: yes # FIXME - remove line and the following retry tasks
-
- - name: try to delete the original again
- ec2_vpc_dhcp_option:
- dhcp_options_id: "{{ original_dhcp_options_id }}"
- state: absent
- name: verify the original set was deleted
ec2_vpc_dhcp_option_info:
@@ -399,7 +404,6 @@
- assert:
that:
- - dhcp_options.failed
- '"does not exist" in dhcp_options.error.message'
- set_fact:
@@ -407,22 +411,21 @@
# Create a DHCP option set that does not inherit from the old set and deletes the old set
- # FIXME: check mode causes a traceback
- #- name: create a DHCP option set that does not inherit from the default set and deletes the original set (check mode)
- # ec2_vpc_dhcp_option:
- # state: present
- # vpc_id: "{{ vpc_id }}"
- # inherit_existing: False
- # domain_name: "{{ (aws_region == 'us-east-1') | ternary('ec2.internal', aws_region + '.compute.internal') }}"
- # dns_servers:
- # - AmazonProvidedDNS
- # delete_old: True
- # register: dhcp_options
- # check_mode: true
-
- #- assert:
- # that:
- # - dhcp_options.changed
+ - name: create a DHCP option set that does not inherit from the default set and deletes the original set (check mode)
+ ec2_vpc_dhcp_option:
+ state: present
+ vpc_id: "{{ vpc_id }}"
+ inherit_existing: False
+ domain_name: '{{ aws_domain_name }}'
+ dns_servers:
+ - AmazonProvidedDNS
+ delete_old: True
+ register: dhcp_options
+ check_mode: true
+
+ - assert:
+ that:
+ - dhcp_options.changed
- name: create a DHCP option set that does not inherit from the default set and deletes the original set
ec2_vpc_dhcp_option:
@@ -439,8 +442,8 @@
that:
- dhcp_options.new_options
- dhcp_options.new_options.keys() | list | sort is superset(['domain-name', 'domain-name-servers'])
- - dhcp_options.new_options['domain-name'] in ["{{ aws_domain_name }}", ["{{ aws_domain_name }}"]]
- - dhcp_options.new_options['domain-name-servers'] in ['AmazonProvidedDNS', ['AmazonProvidedDNS']]
+ - dhcp_options.new_options['domain-name'] == ['{{ aws_domain_name }}']
+ - dhcp_options.new_options['domain-name-servers'] == ['AmazonProvidedDNS']
- original_dhcp_options_id != dhcp_options.dhcp_options_id
- set_fact:
@@ -457,7 +460,7 @@
- assert:
that:
- new_config.keys() | list | sort == ['domain-name', 'domain-name-servers']
- - new_config['domain-name'][0]['value'] == "{{ aws_domain_name }}"
+ - new_config['domain-name'][0]['value'] == '{{ aws_domain_name }}'
- new_config['domain-name-servers'][0]['value'] == 'AmazonProvidedDNS'
- name: verify the original set was deleted
@@ -476,28 +479,28 @@
# Create a DHCP option set with tags
- # FIXME: check mode causes a traceback
- #- name: create a DHCP option set with tags (check mode)
- # ec2_vpc_dhcp_option:
- # state: present
- # vpc_id: "{{ vpc_id }}"
- # inherit_existing: False
- # delete_old: True
- # ntp_servers:
- # - 10.0.0.2
- # - 10.0.1.2
- # netbios_name_servers:
- # - 10.0.0.1
- # - 10.0.1.1
- # tags:
- # CreatedBy: ansible-test
- # Collection: amazon.aws
- # register: dhcp_options
- # check_mode: true
-
- #- assert:
- # that:
- # - dhcp_options.changed
+ - name: create a DHCP option set with tags (check mode)
+ ec2_vpc_dhcp_option:
+ state: present
+ vpc_id: "{{ vpc_id }}"
+ inherit_existing: False
+ delete_old: True
+ ntp_servers:
+ - 10.0.0.2
+ - 10.0.1.2
+ netbios_name_servers:
+ - 10.0.0.1
+ - 10.0.1.1
+ tags:
+ CreatedBy: ansible-test
+ Collection: amazon.aws
+ register: dhcp_options
+ check_mode: true
+ ignore_errors: true
+
+ - assert:
+ that:
+ - dhcp_options.changed
- name: create a DHCP option set with tags
ec2_vpc_dhcp_option:
@@ -516,6 +519,9 @@
Collection: amazon.aws
register: dhcp_options
+ - set_fact:
+ dhcp_options_config: "{{ dhcp_options.dhcp_options.dhcp_configurations | items2dict(key_name='key', value_name='values') }}"
+
- assert:
that:
- dhcp_options.changed
@@ -523,15 +529,16 @@
- dhcp_options.new_options['ntp-servers'] | sort == ['10.0.0.2', '10.0.1.2']
- dhcp_options.new_options['netbios-name-servers'] | sort == ['10.0.0.1', '10.0.1.1']
- original_dhcp_options_id != dhcp_options.dhcp_options_id
- # FIXME: tags are not returned by the module
-# - dhcp_options.tags.keys() | length == 2
-# - dhcp_options.tags['CreatedBy'] is 'ansible-test'
-# - dhcp_options.tags['Collection'] is 'amazon.aws'
+ # We return the list of dicts that boto gives us, in addition to the user-friendly config dict
+ - dhcp_options_config['ntp-servers'] | map(attribute='value') | list | sort == ['10.0.0.2', '10.0.1.2']
+ - dhcp_options_config['netbios-name-servers'] | map(attribute='value') | list | sort == ['10.0.0.1', '10.0.1.1']
+ - dhcp_options.dhcp_options.tags.keys() | length == 2
+ - dhcp_options.dhcp_options.tags['CreatedBy'] == 'ansible-test'
+ - dhcp_options.dhcp_options.tags['Collection'] == 'amazon.aws'
- set_fact:
new_dhcp_options_id: "{{ dhcp_options.dhcp_options_id }}"
- # FIXME: ec2_vpc_dhcp_option_info needs to use camel_dict_to_snake_dict(..., ignore_list=['Tags'])
- name: check if the expected tags are associated
ec2_vpc_dhcp_option_info:
dhcp_options_ids: ["{{ new_dhcp_options_id }}"]
@@ -541,8 +548,8 @@
that:
- dhcp_options_info.dhcp_options[0].tags is defined
- dhcp_options_info.dhcp_options[0].tags | length == 2
- - dhcp_options_info.dhcp_options[0].tags['collection'] == 'amazon.aws'
- - dhcp_options_info.dhcp_options[0].tags['created_by'] == 'ansible-test'
+ - dhcp_options_info.dhcp_options[0].tags['Collection'] == 'amazon.aws'
+ - dhcp_options_info.dhcp_options[0].tags['CreatedBy'] == 'ansible-test'
- name: test no changes with the same tags (check mode)
ec2_vpc_dhcp_option:
@@ -598,10 +605,13 @@
- dhcp_options.new_options['netbios-name-servers'] | sort == ['10.0.0.1', '10.0.1.1']
- dhcp_options.new_options['ntp-servers'] | sort == ['10.0.0.2', '10.0.1.2']
- new_dhcp_options_id == dhcp_options.dhcp_options_id
+ - dhcp_options.dhcp_options.tags.keys() | length == 2
+ - dhcp_options.dhcp_options.tags['CreatedBy'] == 'ansible-test'
+ - dhcp_options.dhcp_options.tags['Collection'] == 'amazon.aws'
- dhcp_options_info.dhcp_options[0].tags is defined
- dhcp_options_info.dhcp_options[0].tags.keys() | length == 2
- - dhcp_options_info.dhcp_options[0].tags['collection'] == 'amazon.aws'
- - dhcp_options_info.dhcp_options[0].tags['created_by'] == 'ansible-test'
+ - dhcp_options_info.dhcp_options[0].tags['Collection'] == 'amazon.aws'
+ - dhcp_options_info.dhcp_options[0].tags['CreatedBy'] == 'ansible-test'
- name: test no changes without specifying tags (check mode)
ec2_vpc_dhcp_option:
@@ -615,13 +625,14 @@
netbios_name_servers:
- 10.0.0.1
- 10.0.1.1
+ purge_tags: False
register: dhcp_options
check_mode: true
- assert:
that:
- not dhcp_options.changed
- - dhcp_options.new_options.keys() | list | sort == ['netbios-name-servers', 'ntp-servers']
+ - dhcp_options.new_options.keys() | list | sort is superset(['netbios-name-servers', 'ntp-servers'])
- dhcp_options.new_options['netbios-name-servers'] | sort == ['10.0.0.1', '10.0.1.1']
- dhcp_options.new_options['ntp-servers'] | sort == ['10.0.0.2', '10.0.1.2']
@@ -637,6 +648,7 @@
netbios_name_servers:
- 10.0.0.1
- 10.0.1.1
+ purge_tags: False
register: dhcp_options
- name: check if the expected tags are associated
@@ -647,16 +659,15 @@
- assert:
that:
- not dhcp_options.changed
- - dhcp_options.new_options.keys() | list | sort == ['netbios-name-servers', 'ntp-servers']
+ - dhcp_options.new_options.keys() | list | sort is superset(['netbios-name-servers', 'ntp-servers'])
- dhcp_options.new_options['netbios-name-servers'] | sort == ['10.0.0.1', '10.0.1.1']
- dhcp_options.new_options['ntp-servers'] | sort == ['10.0.0.2', '10.0.1.2']
- new_dhcp_options_id == dhcp_options.dhcp_options_id
- dhcp_options_info.dhcp_options[0].tags is defined
- dhcp_options_info.dhcp_options[0].tags.keys() | length == 2
- - dhcp_options_info.dhcp_options[0].tags['collection'] == 'amazon.aws'
- - dhcp_options_info.dhcp_options[0].tags['created_by'] == 'ansible-test'
+ - dhcp_options_info.dhcp_options[0].tags['Collection'] == 'amazon.aws'
+ - dhcp_options_info.dhcp_options[0].tags['CreatedBy'] == 'ansible-test'
- # FIXME: the additional tag is silently ignored and no change is made
- name: add a tag without using dhcp_options_id
ec2_vpc_dhcp_option:
state: present
@@ -682,40 +693,43 @@
- assert:
that:
- #- dhcp_options.changed
- - dhcp_options.new_options.keys() | list | sort == ['netbios-name-servers', 'ntp-servers']
+ - dhcp_options.changed
+ - dhcp_options.new_options.keys() | list | sort is superset(['netbios-name-servers', 'ntp-servers'])
- dhcp_options.new_options['netbios-name-servers'] | sort == ['10.0.0.1', '10.0.1.1']
- dhcp_options.new_options['ntp-servers'] | sort == ['10.0.0.2', '10.0.1.2']
- new_dhcp_options_id == dhcp_options.dhcp_options_id
+ - dhcp_options.dhcp_options.tags.keys() | length == 3
+ - dhcp_options.dhcp_options.tags['another'] == 'tag'
+ - dhcp_options.dhcp_options.tags['CreatedBy'] == 'ansible-test'
+ - dhcp_options.dhcp_options.tags['Collection'] == 'amazon.aws'
- dhcp_options_info.dhcp_options[0].tags is defined
- - dhcp_options_info.dhcp_options[0].tags.keys() | length == 2
- #- dhcp_options_info.dhcp_options[0].tags.keys() | length == 3
- - dhcp_options_info.dhcp_options[0].tags['collection'] == 'amazon.aws'
- - dhcp_options_info.dhcp_options[0].tags['created_by'] == 'ansible-test'
-
- # FIXME: another check_mode traceback
- #- name: add and removing tags (check mode)
- # ec2_vpc_dhcp_option:
- # dhcp_options_id: "{{ dhcp_options.dhcp_options_id }}"
- # state: present
- # vpc_id: "{{ vpc_id }}"
- # inherit_existing: False
- # delete_old: True
- # ntp_servers:
- # - 10.0.0.2
- # - 10.0.1.2
- # netbios_name_servers:
- # - 10.0.0.1
- # - 10.0.1.1
- # tags:
- # AnsibleTest: integration
- # Collection: amazon.aws
- # register: dhcp_options
- # check_mode: true
-
- #- assert:
- # that:
- # - dhcp_options.changed
+ - dhcp_options_info.dhcp_options[0].tags.keys() | length == 3
+ - dhcp_options_info.dhcp_options[0].tags['another'] == 'tag'
+ - dhcp_options_info.dhcp_options[0].tags['Collection'] == 'amazon.aws'
+ - dhcp_options_info.dhcp_options[0].tags['CreatedBy'] == 'ansible-test'
+
+  - name: add and remove tags (check mode)
+ ec2_vpc_dhcp_option:
+ dhcp_options_id: "{{ dhcp_options.dhcp_options_id }}"
+ state: present
+ vpc_id: "{{ vpc_id }}"
+ inherit_existing: False
+ delete_old: True
+ ntp_servers:
+ - 10.0.0.2
+ - 10.0.1.2
+ netbios_name_servers:
+ - 10.0.0.1
+ - 10.0.1.1
+ tags:
+ AnsibleTest: integration
+ Collection: amazon.aws
+ register: dhcp_options
+ check_mode: true
+
+ - assert:
+ that:
+ - dhcp_options.changed
- name: add and remove tags
ec2_vpc_dhcp_option:
@@ -743,12 +757,14 @@
- assert:
that:
- dhcp_options.changed
- - not dhcp_options.new_options
+ - dhcp_options.dhcp_options.tags.keys() | length == 2
+ - dhcp_options.dhcp_options.tags['AnsibleTest'] == 'integration'
+ - dhcp_options.dhcp_options.tags['Collection'] == 'amazon.aws'
- new_dhcp_options_id == dhcp_options.dhcp_options_id
- dhcp_options_info.dhcp_options[0].tags is defined
- dhcp_options_info.dhcp_options[0].tags.keys() | length == 2
- - dhcp_options_info.dhcp_options[0].tags['collection'] == 'amazon.aws'
- - dhcp_options_info.dhcp_options[0].tags['ansible_test'] == 'integration'
+ - dhcp_options_info.dhcp_options[0].tags['Collection'] == 'amazon.aws'
+ - dhcp_options_info.dhcp_options[0].tags['AnsibleTest'] == 'integration'
- name: add tags with different cases
ec2_vpc_dhcp_option:
@@ -778,17 +794,19 @@
- assert:
that:
- dhcp_options.changed
- - not dhcp_options.new_options
- new_dhcp_options_id == dhcp_options.dhcp_options_id
+ - dhcp_options.dhcp_options.tags.keys() | length == 4
+ - dhcp_options.dhcp_options.tags['lowercase spaced'] == 'hello cruel world'
+ - dhcp_options.dhcp_options.tags['Title Case'] == 'Hello Cruel World'
+ - dhcp_options.dhcp_options.tags['CamelCase'] == 'SimpleCamelCase'
+ - dhcp_options.dhcp_options.tags['snake_case'] == 'simple_snake_case'
- dhcp_options_info.dhcp_options[0].tags is defined
- dhcp_options_info.dhcp_options[0].tags.keys() | length == 4
- dhcp_options_info.dhcp_options[0].tags['lowercase spaced'] == 'hello cruel world'
-# FIXME: these tags are returned incorrectly now
-# - dhcp_options_info.dhcp_options[0].tags['Title Case'] == 'Hello Cruel World'
-# - dhcp_options_info.dhcp_options[0].tags['CamelCase'] == 'SimpleCamelCase'
+ - dhcp_options_info.dhcp_options[0].tags['Title Case'] == 'Hello Cruel World'
+ - dhcp_options_info.dhcp_options[0].tags['CamelCase'] == 'SimpleCamelCase'
- dhcp_options_info.dhcp_options[0].tags['snake_case'] == 'simple_snake_case'
- # FIXME does nothing, but reports changed
- name: test purging all tags
ec2_vpc_dhcp_option:
dhcp_options_id: "{{ dhcp_options.dhcp_options_id }}"
@@ -814,9 +832,9 @@
that:
- dhcp_options.changed
- new_dhcp_options_id == dhcp_options.dhcp_options_id
- #- not dhcp_options_info.dhcp_options[0].tags
+ - not dhcp_options_info.dhcp_options[0].tags
- - name: test no changes removing all tags
+ - name: test removing all tags
ec2_vpc_dhcp_option:
dhcp_options_id: "{{ dhcp_options.dhcp_options_id }}"
state: present
@@ -839,11 +857,10 @@
- assert:
that:
- #- not dhcp_options.changed
+ - dhcp_options.changed
- new_dhcp_options_id == dhcp_options.dhcp_options_id
- #- not dhcp_options_info.dhcp_options[0].tags
+ - not dhcp_options_info.dhcp_options[0].tags
- # FIXME: check mode returns changed as False
- name: remove the DHCP option set (check mode)
ec2_vpc_dhcp_option:
state: absent
@@ -852,11 +869,11 @@
register: dhcp_options
check_mode: true
- #- assert:
- # that:
- # - dhcp_options.changed
+# - assert:
+# that:
+# - dhcp_options.changed
- # FIXME: does nothing - the module should associate "default" with the VPC provided
+  # FIXME: the module should associate "default" with the VPC provided, but currently does nothing
- name: removing the DHCP option set
ec2_vpc_dhcp_option:
state: absent
@@ -864,9 +881,9 @@
dhcp_options_id: "{{ new_dhcp_options_id }}"
register: dhcp_options
- #- assert:
- # that:
- # - dhcp_options.changed
+# - assert:
+# that:
+# - dhcp_options.changed
- name: remove the DHCP option set again (check mode)
ec2_vpc_dhcp_option:
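The assertions in the tasks above repeatedly flatten boto's `DhcpConfigurations` shape (a list of `{key, values}` dicts, where each value is wrapped in `{'value': ...}`) via the Jinja filters `items2dict(key_name='key', value_name='values')` and `map(attribute='value')`. A minimal Python sketch of those two filter steps, using assumed sample data, not the module's actual return value:

```python
# boto3 describe_dhcp_options returns configurations as a list of
# {key, values} dicts, each value wrapped in {'value': ...}.
dhcp_configurations = [
    {"key": "ntp-servers", "values": [{"value": "10.0.1.2"}, {"value": "10.0.0.2"}]},
    {"key": "netbios-node-type", "values": [{"value": "2"}]},
]

# Equivalent of items2dict(key_name='key', value_name='values'):
# a dict keyed by the DHCP option name.
config = {item["key"]: item["values"] for item in dhcp_configurations}

# Equivalent of map(attribute='value') | list | sort:
# a plain, sorted list of the option's values.
ntp_servers = sorted(v["value"] for v in config["ntp-servers"])

print(ntp_servers)  # ['10.0.0.2', '10.0.1.2']
```

This is why the assertions can compare `dhcp_options_config['netbios-node-type'][0]['value']` against a plain string while comparing server lists with `map(attribute='value') | list | sort`.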
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint/aliases 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,3 @@
+cloud/aws
+disabled
+ec2_vpc_endpoint_info
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint/defaults/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint/defaults/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint/defaults/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint/defaults/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,7 @@
+vpc_name: '{{ resource_prefix }}-vpc'
+vpc_seed: '{{ resource_prefix }}'
+vpc_cidr: 10.{{ 256 | random(seed=vpc_seed) }}.22.0/24
+
+# S3 and EC2 should generally be available...
+endpoint_service_a: com.amazonaws.{{ aws_region }}.s3
+endpoint_service_b: com.amazonaws.{{ aws_region }}.ec2
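The `vpc_cidr` default above derives its second octet from `256 | random(seed=vpc_seed)`, so each test run gets a CIDR that is unique per `resource_prefix` but stable across retries of the same run. A rough Python equivalent of that seeded-random idea (`make_cidr` is a hypothetical helper for illustration; the exact octet Jinja's `random` filter produces may differ):

```python
import random

def make_cidr(resource_prefix: str) -> str:
    # Deterministic octet per resource_prefix: the same prefix always
    # yields the same CIDR, while different prefixes (very likely)
    # yield different CIDRs, avoiding collisions between parallel runs.
    octet = random.Random(resource_prefix).randrange(256)
    return f"10.{octet}.22.0/24"

print(make_cidr("ansible-test-123"))
```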
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint/tasks/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint/tasks/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint/tasks/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint/tasks/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,863 @@
+- name: ec2_vpc_endpoint tests
+ module_defaults:
+ group/aws:
+ aws_access_key: '{{ aws_access_key }}'
+ aws_secret_key: '{{ aws_secret_key }}'
+ security_token: '{{ security_token | default(omit) }}'
+ region: '{{ aws_region }}'
+ block:
+ # ============================================================
+ # BEGIN PRE-TEST SETUP
+ - name: create a VPC
+ ec2_vpc_net:
+ state: present
+ name: '{{ vpc_name }}'
+ cidr_block: '{{ vpc_cidr }}'
+ tags:
+ AnsibleTest: ec2_vpc_endpoint
+ AnsibleRun: '{{ resource_prefix }}'
+ register: vpc_creation
+ - name: Assert success
+ assert:
+ that:
+ - vpc_creation is successful
+
+ - name: Create an IGW
+ ec2_vpc_igw:
+ vpc_id: '{{ vpc_creation.vpc.id }}'
+ state: present
+ tags:
+ Name: '{{ resource_prefix }}'
+ AnsibleTest: ec2_vpc_endpoint
+ AnsibleRun: '{{ resource_prefix }}'
+ register: igw_creation
+ - name: Assert success
+ assert:
+ that:
+ - igw_creation is successful
+
+ - name: Create a minimal route table (no routes)
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc_creation.vpc.id }}'
+ tags:
+ AnsibleTest: ec2_vpc_endpoint
+ AnsibleRun: '{{ resource_prefix }}'
+ Name: '{{ resource_prefix }}-empty'
+ subnets: []
+ routes: []
+ register: rtb_creation_empty
+
+ - name: Create a minimal route table (with IGW)
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc_creation.vpc.id }}'
+ tags:
+ AnsibleTest: ec2_vpc_endpoint
+ AnsibleRun: '{{ resource_prefix }}'
+ Name: '{{ resource_prefix }}-igw'
+ subnets: []
+ routes:
+ - dest: 0.0.0.0/0
+ gateway_id: '{{ igw_creation.gateway_id }}'
+ register: rtb_creation_igw
+
+ - name: Save VPC info in a fact
+ set_fact:
+ vpc_id: '{{ vpc_creation.vpc.id }}'
+ rtb_empty_id: '{{ rtb_creation_empty.route_table.id }}'
+ rtb_igw_id: '{{ rtb_creation_igw.route_table.id }}'
+
+ # ============================================================
+ # BEGIN TESTS
+
+ # Minimal check_mode with _info
+ - name: Fetch Endpoints in check_mode
+ ec2_vpc_endpoint_info:
+ query: endpoints
+ register: endpoint_info
+ check_mode: true
+ - name: Assert success
+ assert:
+ that:
+      # Tests may run in parallel; the only things we can guarantee are:
+ # - we shouldn't error
+ # - we should return 'vpc_endpoints' (even if it's empty)
+ - endpoint_info is successful
+ - '"vpc_endpoints" in endpoint_info'
+
+ - name: Fetch Services in check_mode
+ ec2_vpc_endpoint_info:
+ query: services
+ register: endpoint_info
+ check_mode: true
+ - name: Assert success
+ assert:
+ that:
+ - endpoint_info is successful
+ - '"service_names" in endpoint_info'
+      # These are just two arbitrary AWS services that should (generally) be
+      # available. The actual list will vary over time and between regions
+ - endpoint_service_a in endpoint_info.service_names
+ - endpoint_service_b in endpoint_info.service_names
+
+ # Fetch services without check mode
+ # Note: Filters not supported on services via this module, this is all we can test for now
+ - name: Fetch Services
+ ec2_vpc_endpoint_info:
+ query: services
+ register: endpoint_info
+ - name: Assert success
+ assert:
+ that:
+ - endpoint_info is successful
+ - '"service_names" in endpoint_info'
+      # These are just two arbitrary AWS services that should (generally) be
+      # available. The actual list will vary over time and between regions
+ - endpoint_service_a in endpoint_info.service_names
+ - endpoint_service_b in endpoint_info.service_names
+
+ # Attempt to create an endpoint
+ - name: Create minimal endpoint (check mode)
+ ec2_vpc_endpoint:
+ state: present
+ vpc_id: '{{ vpc_id }}'
+ service: '{{ endpoint_service_a }}'
+ register: create_endpoint_check
+ check_mode: true
+ - name: Assert changed
+ assert:
+ that:
+ - create_endpoint_check is changed
+
+ - name: Create minimal endpoint
+ ec2_vpc_endpoint:
+ state: present
+ vpc_id: '{{ vpc_id }}'
+ service: '{{ endpoint_service_a }}'
+ wait: true
+ register: create_endpoint
+ - name: Check standard return values
+ assert:
+ that:
+ - create_endpoint is changed
+ - '"result" in create_endpoint'
+ - '"creation_timestamp" in create_endpoint.result'
+ - '"dns_entries" in create_endpoint.result'
+ - '"groups" in create_endpoint.result'
+ - '"network_interface_ids" in create_endpoint.result'
+ - '"owner_id" in create_endpoint.result'
+ - '"policy_document" in create_endpoint.result'
+ - '"private_dns_enabled" in create_endpoint.result'
+ - create_endpoint.result.private_dns_enabled == False
+ - '"requester_managed" in create_endpoint.result'
+ - create_endpoint.result.requester_managed == False
+ - '"service_name" in create_endpoint.result'
+ - create_endpoint.result.service_name == endpoint_service_a
+ - '"state" in create_endpoint.result'
+ - create_endpoint.result.state == "available"
+ - '"vpc_endpoint_id" in create_endpoint.result'
+ - create_endpoint.result.vpc_endpoint_id.startswith("vpce-")
+ - '"vpc_endpoint_type" in create_endpoint.result'
+ - create_endpoint.result.vpc_endpoint_type == "Gateway"
+ - '"vpc_id" in create_endpoint.result'
+ - create_endpoint.result.vpc_id == vpc_id
+
+ - name: Save Endpoint info in a fact
+ set_fact:
+ endpoint_id: '{{ create_endpoint.result.vpc_endpoint_id }}'
+
+ # Pull info about the endpoints
+ - name: Fetch Endpoints (all)
+ ec2_vpc_endpoint_info:
+ query: endpoints
+ register: endpoint_info
+ - name: Assert success
+ assert:
+ that:
+ # We're fetching all endpoints; there's no guarantee what the values
+ # will be
+ - endpoint_info is successful
+ - '"vpc_endpoints" in endpoint_info'
+ - '"creation_timestamp" in first_endpoint'
+ - '"policy_document" in first_endpoint'
+ - '"route_table_ids" in first_endpoint'
+ - first_endpoint.route_table_ids | length == 0
+ - '"service_name" in first_endpoint'
+ - '"state" in first_endpoint'
+ - '"vpc_endpoint_id" in first_endpoint'
+ - '"vpc_id" in first_endpoint'
+ # Not yet documented, but returned
+ - '"dns_entries" in first_endpoint'
+ - '"groups" in first_endpoint'
+ - '"network_interface_ids" in first_endpoint'
+ - '"owner_id" in first_endpoint'
+ - '"private_dns_enabled" in first_endpoint'
+ - '"requester_managed" in first_endpoint'
+ - '"subnet_ids" in first_endpoint'
+ - '"tags" in first_endpoint'
+ - '"vpc_endpoint_type" in first_endpoint'
+ # Make sure our endpoint is included
+ - endpoint_id in ( endpoint_info | community.general.json_query("vpc_endpoints[*].vpc_endpoint_id")
+ | list | flatten )
+ vars:
+ first_endpoint: '{{ endpoint_info.vpc_endpoints[0] }}'
+
+ - name: Fetch Endpoints (targeted by ID)
+ ec2_vpc_endpoint_info:
+ query: endpoints
+ vpc_endpoint_ids: '{{ endpoint_id }}'
+ register: endpoint_info
+ - name: Assert success
+ assert:
+ that:
+ - endpoint_info is successful
+ - '"vpc_endpoints" in endpoint_info'
+ - '"creation_timestamp" in first_endpoint'
+ - '"policy_document" in first_endpoint'
+ - '"route_table_ids" in first_endpoint'
+ - first_endpoint.route_table_ids | length == 0
+ - '"service_name" in first_endpoint'
+ - first_endpoint.service_name == endpoint_service_a
+ - '"state" in first_endpoint'
+ - first_endpoint.state == "available"
+ - '"vpc_endpoint_id" in first_endpoint'
+ - first_endpoint.vpc_endpoint_id == endpoint_id
+ - '"vpc_id" in first_endpoint'
+ - first_endpoint.vpc_id == vpc_id
+ # Not yet documented, but returned
+ - '"dns_entries" in first_endpoint'
+ - '"groups" in first_endpoint'
+ - '"network_interface_ids" in first_endpoint'
+ - '"owner_id" in first_endpoint'
+ - '"private_dns_enabled" in first_endpoint'
+ - first_endpoint.private_dns_enabled == False
+ - '"requester_managed" in first_endpoint'
+ - first_endpoint.requester_managed == False
+ - '"subnet_ids" in first_endpoint'
+ - '"tags" in first_endpoint'
+ - '"vpc_endpoint_type" in first_endpoint'
+ vars:
+ first_endpoint: '{{ endpoint_info.vpc_endpoints[0] }}'
+
+ - name: Fetch Endpoints (targeted by VPC)
+ ec2_vpc_endpoint_info:
+ query: endpoints
+ filters:
+ vpc-id:
+ - '{{ vpc_id }}'
+ register: endpoint_info
+ - name: Assert success
+ assert:
+ that:
+ - endpoint_info is successful
+ - '"vpc_endpoints" in endpoint_info'
+ - '"creation_timestamp" in first_endpoint'
+ - '"policy_document" in first_endpoint'
+ - '"route_table_ids" in first_endpoint'
+ - '"service_name" in first_endpoint'
+ - first_endpoint.service_name == endpoint_service_a
+ - '"state" in first_endpoint'
+ - first_endpoint.state == "available"
+ - '"vpc_endpoint_id" in first_endpoint'
+ - first_endpoint.vpc_endpoint_id == endpoint_id
+ - '"vpc_id" in first_endpoint'
+ - first_endpoint.vpc_id == vpc_id
+ # Not yet documented, but returned
+ - '"dns_entries" in first_endpoint'
+ - '"groups" in first_endpoint'
+ - '"network_interface_ids" in first_endpoint'
+ - '"owner_id" in first_endpoint'
+ - '"private_dns_enabled" in first_endpoint'
+ - first_endpoint.private_dns_enabled == False
+ - '"requester_managed" in first_endpoint'
+ - first_endpoint.requester_managed == False
+ - '"subnet_ids" in first_endpoint'
+ - '"tags" in first_endpoint'
+ - '"vpc_endpoint_type" in first_endpoint'
+ vars:
+ first_endpoint: '{{ endpoint_info.vpc_endpoints[0] }}'
+
+
+ # Matches on parameters without explicitly passing the endpoint ID
+ - name: Create minimal endpoint - idempotency (check mode)
+ ec2_vpc_endpoint:
+ state: present
+ vpc_id: '{{ vpc_id }}'
+ service: '{{ endpoint_service_a }}'
+ register: create_endpoint_idem_check
+ check_mode: true
+ - assert:
+ that:
+ - create_endpoint_idem_check is not changed
+
+ - name: Create minimal endpoint - idempotency
+ ec2_vpc_endpoint:
+ state: present
+ vpc_id: '{{ vpc_id }}'
+ service: '{{ endpoint_service_a }}'
+ register: create_endpoint_idem
+ - assert:
+ that:
+ - create_endpoint_idem is not changed
+
+ - name: Delete minimal endpoint by ID (check_mode)
+ ec2_vpc_endpoint:
+ state: absent
+ vpc_endpoint_id: '{{ endpoint_id }}'
+ check_mode: true
+ register: endpoint_delete_check
+ - assert:
+ that:
+ - endpoint_delete_check is changed
+
+
+ - name: Delete minimal endpoint by ID
+ ec2_vpc_endpoint:
+ state: absent
+ vpc_endpoint_id: '{{ endpoint_id }}'
+ register: endpoint_delete_check
+ - assert:
+ that:
+ - endpoint_delete_check is changed
+
+ - name: Delete minimal endpoint by ID - idempotency (check_mode)
+ ec2_vpc_endpoint:
+ state: absent
+ vpc_endpoint_id: '{{ endpoint_id }}'
+ check_mode: true
+ register: endpoint_delete_check
+ - assert:
+ that:
+ - endpoint_delete_check is not changed
+
+ - name: Delete minimal endpoint by ID - idempotency
+ ec2_vpc_endpoint:
+ state: absent
+ vpc_endpoint_id: '{{ endpoint_id }}'
+ register: endpoint_delete_check
+ - assert:
+ that:
+ - endpoint_delete_check is not changed
+
+ - name: Fetch Endpoints by ID (expect failed)
+ ec2_vpc_endpoint_info:
+ query: endpoints
+ vpc_endpoint_ids: '{{ endpoint_id }}'
+ ignore_errors: true
+ register: endpoint_info
+ - name: Assert endpoint does not exist
+ assert:
+ that:
+ - endpoint_info is successful
+ - '"does not exist" in endpoint_info.msg'
+ - endpoint_info.vpc_endpoints | length == 0
+
+ # Attempt to create an endpoint with a route table
+ - name: Create an endpoint with route table (check mode)
+ ec2_vpc_endpoint:
+ state: present
+ vpc_id: '{{ vpc_id }}'
+ service: '{{ endpoint_service_a }}'
+ route_table_ids:
+ - '{{ rtb_empty_id }}'
+ register: create_endpoint_check
+ check_mode: true
+ - name: Assert changed
+ assert:
+ that:
+ - create_endpoint_check is changed
+
+ - name: Create an endpoint with route table
+ ec2_vpc_endpoint:
+ state: present
+ vpc_id: '{{ vpc_id }}'
+ service: '{{ endpoint_service_a }}'
+ route_table_ids:
+ - '{{ rtb_empty_id }}'
+ wait: true
+ register: create_rtb_endpoint
+ - name: Check standard return values
+ assert:
+ that:
+ - create_rtb_endpoint is changed
+ - '"result" in create_rtb_endpoint'
+ - '"creation_timestamp" in create_rtb_endpoint.result'
+ - '"dns_entries" in create_rtb_endpoint.result'
+ - '"groups" in create_rtb_endpoint.result'
+ - '"network_interface_ids" in create_rtb_endpoint.result'
+ - '"owner_id" in create_rtb_endpoint.result'
+ - '"policy_document" in create_rtb_endpoint.result'
+ - '"private_dns_enabled" in create_rtb_endpoint.result'
+ - '"route_table_ids" in create_rtb_endpoint.result'
+ - create_rtb_endpoint.result.route_table_ids | length == 1
+ - create_rtb_endpoint.result.route_table_ids[0] == '{{ rtb_empty_id }}'
+ - create_rtb_endpoint.result.private_dns_enabled == False
+ - '"requester_managed" in create_rtb_endpoint.result'
+ - create_rtb_endpoint.result.requester_managed == False
+ - '"service_name" in create_rtb_endpoint.result'
+ - create_rtb_endpoint.result.service_name == endpoint_service_a
+ - '"state" in create_endpoint.result'
+ - create_rtb_endpoint.result.state == "available"
+ - '"vpc_endpoint_id" in create_rtb_endpoint.result'
+ - create_rtb_endpoint.result.vpc_endpoint_id.startswith("vpce-")
+ - '"vpc_endpoint_type" in create_rtb_endpoint.result'
+ - create_rtb_endpoint.result.vpc_endpoint_type == "Gateway"
+ - '"vpc_id" in create_rtb_endpoint.result'
+ - create_rtb_endpoint.result.vpc_id == vpc_id
+
+ - name: Save Endpoint info in a fact
+ set_fact:
+ rtb_endpoint_id: '{{ create_rtb_endpoint.result.vpc_endpoint_id }}'
+
+ - name: Create an endpoint with route table - idempotency (check mode)
+ ec2_vpc_endpoint:
+ state: present
+ vpc_id: '{{ vpc_id }}'
+ service: '{{ endpoint_service_a }}'
+ route_table_ids:
+ - '{{ rtb_empty_id }}'
+ register: create_endpoint_check
+ check_mode: true
+ - name: Assert not changed
+ assert:
+ that:
+ - create_endpoint_check is not changed
+
+ - name: Create an endpoint with route table - idempotency
+ ec2_vpc_endpoint:
+ state: present
+ vpc_id: '{{ vpc_id }}'
+ service: '{{ endpoint_service_a }}'
+ route_table_ids:
+ - '{{ rtb_empty_id }}'
+ register: create_endpoint_check
+ - name: Assert not changed
+ assert:
+ that:
+ - create_endpoint_check is not changed
+
+# # Endpoint modifications are not yet supported by the module
+# # Change the route table for the endpoint
+# - name: Change the route table for the endpoint (check_mode)
+# ec2_vpc_endpoint:
+# state: present
+# vpc_id: '{{ vpc_id }}'
+# vpc_endpoint_id: "{{ rtb_endpoint_id }}"
+# service: '{{ endpoint_service_a }}'
+# route_table_ids:
+# - '{{ rtb_igw_id }}'
+# check_mode: True
+# register: check_two_rtbs_endpoint
+#
+# - name: Assert second route table would be added
+# assert:
+# that:
+# - check_two_rtbs_endpoint.changed
+#
+# - name: Change the route table for the endpoint
+# ec2_vpc_endpoint:
+# state: present
+# vpc_id: '{{ vpc_id }}'
+# vpc_endpoint_id: "{{ rtb_endpoint_id }}"
+# service: '{{ endpoint_service_a }}'
+# route_table_ids:
+# - '{{ rtb_igw_id }}'
+# register: two_rtbs_endpoint
+#
+# - name: Assert second route table was added
+# assert:
+# that:
+# - two_rtbs_endpoint.changed
+# - two_rtbs_endpoint.result.route_table_ids | length == 1
+# - two_rtbs_endpoint.result.route_table_ids[0] == '{{ rtb_igw_id }}'
+#
+# - name: Change the route table for the endpoint - idempotency (check_mode)
+# ec2_vpc_endpoint:
+# state: present
+# vpc_id: '{{ vpc_id }}'
+# vpc_endpoint_id: "{{ rtb_endpoint_id }}"
+# service: '{{ endpoint_service_a }}'
+# route_table_ids:
+# - '{{ rtb_igw_id }}'
+# check_mode: True
+# register: check_two_rtbs_endpoint
+#
+# - name: Assert route table would not change
+# assert:
+# that:
+# - not check_two_rtbs_endpoint.changed
+#
+# - name: Change the route table for the endpoint - idempotency
+# ec2_vpc_endpoint:
+# state: present
+# vpc_id: '{{ vpc_id }}'
+# vpc_endpoint_id: "{{ rtb_endpoint_id }}"
+# service: '{{ endpoint_service_a }}'
+# route_table_ids:
+# - '{{ rtb_igw_id }}'
+# register: two_rtbs_endpoint
+#
+# - name: Assert route table did not change
+# assert:
+# that:
+# - not two_rtbs_endpoint.changed
+
+ - name: Tag the endpoint (check_mode)
+ ec2_vpc_endpoint:
+ state: present
+ vpc_id: '{{ vpc_id }}'
+ vpc_endpoint_id: '{{ rtb_endpoint_id }}'
+ service: '{{ endpoint_service_a }}'
+ route_table_ids:
+ - '{{ rtb_empty_id }}'
+ tags:
+ camelCase: helloWorld
+ PascalCase: HelloWorld
+ snake_case: hello_world
+ Title Case: Hello World
+ lowercase spaced: hello world
+ check_mode: true
+ register: check_tag_vpc_endpoint
+
+ - name: Assert tags would have changed
+ assert:
+ that:
+ - check_tag_vpc_endpoint.changed
+
+ - name: Tag the endpoint
+ ec2_vpc_endpoint:
+ state: present
+ vpc_id: '{{ vpc_id }}'
+ vpc_endpoint_id: '{{ rtb_endpoint_id }}'
+ service: '{{ endpoint_service_a }}'
+ route_table_ids:
+ - '{{ rtb_igw_id }}'
+ tags:
+ testPrefix: '{{ resource_prefix }}'
+ camelCase: helloWorld
+ PascalCase: HelloWorld
+ snake_case: hello_world
+ Title Case: Hello World
+ lowercase spaced: hello world
+ register: tag_vpc_endpoint
+
+ - name: Assert tags are successful
+ assert:
+ that:
+ - tag_vpc_endpoint.changed
+ - tag_vpc_endpoint.result.tags | length == 6
+ - endpoint_tags["testPrefix"] == resource_prefix
+ - endpoint_tags["camelCase"] == "helloWorld"
+ - endpoint_tags["PascalCase"] == "HelloWorld"
+ - endpoint_tags["snake_case"] == "hello_world"
+ - endpoint_tags["Title Case"] == "Hello World"
+ - endpoint_tags["lowercase spaced"] == "hello world"
+ vars:
+ endpoint_tags: "{{ tag_vpc_endpoint.result.tags | items2dict(key_name='Key',\
+ \ value_name='Value') }}"
+
+ - name: Query by tag
+ ec2_vpc_endpoint_info:
+ query: endpoints
+ filters:
+ tag:testPrefix:
+ - '{{ resource_prefix }}'
+ register: tag_result
+
+ - name: Assert tag lookup found endpoint
+ assert:
+ that:
+ - tag_result is successful
+ - '"vpc_endpoints" in tag_result'
+ - first_endpoint.vpc_endpoint_id == rtb_endpoint_id
+ vars:
+ first_endpoint: '{{ tag_result.vpc_endpoints[0] }}'
+
+ - name: Tag the endpoint - idempotency (check_mode)
+ ec2_vpc_endpoint:
+ state: present
+ vpc_id: '{{ vpc_id }}'
+ vpc_endpoint_id: '{{ rtb_endpoint_id }}'
+ service: '{{ endpoint_service_a }}'
+ route_table_ids:
+ - '{{ rtb_igw_id }}'
+ tags:
+ testPrefix: '{{ resource_prefix }}'
+ camelCase: helloWorld
+ PascalCase: HelloWorld
+ snake_case: hello_world
+ Title Case: Hello World
+ lowercase spaced: hello world
+ check_mode: true
+ register: tag_vpc_endpoint_again
+
+ - name: Assert tags would not change
+ assert:
+ that:
+ - not tag_vpc_endpoint_again.changed
+
+ - name: Tag the endpoint - idempotency
+ ec2_vpc_endpoint:
+ state: present
+ vpc_id: '{{ vpc_id }}'
+ vpc_endpoint_id: '{{ rtb_endpoint_id }}'
+ service: '{{ endpoint_service_a }}'
+ route_table_ids:
+ - '{{ rtb_igw_id }}'
+ tags:
+ testPrefix: '{{ resource_prefix }}'
+ camelCase: helloWorld
+ PascalCase: HelloWorld
+ snake_case: hello_world
+ Title Case: Hello World
+ lowercase spaced: hello world
+ register: tag_vpc_endpoint_again
+
+ - name: Assert tags would not change
+ assert:
+ that:
+ - not tag_vpc_endpoint_again.changed
+
+ - name: Add a tag (check_mode)
+ ec2_vpc_endpoint:
+ state: present
+ vpc_id: '{{ vpc_id }}'
+ vpc_endpoint_id: '{{ rtb_endpoint_id }}'
+ service: '{{ endpoint_service_a }}'
+ route_table_ids:
+ - '{{ rtb_igw_id }}'
+ tags:
+ new_tag: ANewTag
+ check_mode: true
+ register: check_tag_vpc_endpoint
+
+ - name: Assert tags would have changed
+ assert:
+ that:
+ - check_tag_vpc_endpoint.changed
+
+ - name: Add a tag (purge_tags=False)
+ ec2_vpc_endpoint:
+ state: present
+ vpc_id: '{{ vpc_id }}'
+ vpc_endpoint_id: '{{ rtb_endpoint_id }}'
+ service: '{{ endpoint_service_a }}'
+ route_table_ids:
+ - '{{ rtb_igw_id }}'
+ tags:
+ new_tag: ANewTag
+ register: add_tag_vpc_endpoint
+
+ - name: Assert tags changed
+ assert:
+ that:
+ - add_tag_vpc_endpoint.changed
+ - add_tag_vpc_endpoint.result.tags | length == 7
+ - endpoint_tags["testPrefix"] == resource_prefix
+ - endpoint_tags["camelCase"] == "helloWorld"
+ - endpoint_tags["PascalCase"] == "HelloWorld"
+ - endpoint_tags["snake_case"] == "hello_world"
+ - endpoint_tags["Title Case"] == "Hello World"
+ - endpoint_tags["lowercase spaced"] == "hello world"
+ - endpoint_tags["new_tag"] == "ANewTag"
+ vars:
+ endpoint_tags: "{{ add_tag_vpc_endpoint.result.tags | items2dict(key_name='Key',\
+ \ value_name='Value') }}"
+
+ - name: Add a tag (purge_tags=True)
+ ec2_vpc_endpoint:
+ state: present
+ vpc_id: '{{ vpc_id }}'
+ vpc_endpoint_id: '{{ rtb_endpoint_id }}'
+ service: '{{ endpoint_service_a }}'
+ route_table_ids:
+ - '{{ rtb_igw_id }}'
+ tags:
+ another_new_tag: AnotherNewTag
+ purge_tags: true
+ register: purge_tag_vpc_endpoint
+
+ - name: Assert tags changed
+ assert:
+ that:
+ - purge_tag_vpc_endpoint.changed
+ - purge_tag_vpc_endpoint.result.tags | length == 1
+ - endpoint_tags["another_new_tag"] == "AnotherNewTag"
+ vars:
+ endpoint_tags: "{{ purge_tag_vpc_endpoint.result.tags | items2dict(key_name='Key',\
+ \ value_name='Value') }}"
+
+ - name: Delete minimal route table (no routes)
+ ec2_vpc_route_table:
+ state: absent
+ lookup: id
+ route_table_id: '{{ rtb_empty_id }}'
+ register: rtb_delete
+ - assert:
+ that:
+ - rtb_delete is changed
+
+ - name: Delete minimal route table (IGW route)
+ ec2_vpc_route_table:
+ state: absent
+ lookup: id
+ route_table_id: '{{ rtb_igw_id }}'
+ register: rtb_delete
+ - assert:
+ that:
+ - rtb_delete is changed
+
+ - name: Delete route table endpoint by ID
+ ec2_vpc_endpoint:
+ state: absent
+ vpc_endpoint_id: '{{ rtb_endpoint_id }}'
+ register: endpoint_delete_check
+ - assert:
+ that:
+ - endpoint_delete_check is changed
+
+ - name: Delete route table endpoint by ID - idempotency (check_mode)
+ ec2_vpc_endpoint:
+ state: absent
+ vpc_endpoint_id: '{{ rtb_endpoint_id }}'
+ check_mode: true
+ register: endpoint_delete_check
+ - assert:
+ that:
+ - endpoint_delete_check is not changed
+
+ - name: Delete route table endpoint by ID - idempotency
+ ec2_vpc_endpoint:
+ state: absent
+ vpc_endpoint_id: '{{ rtb_endpoint_id }}'
+ register: endpoint_delete_check
+ - assert:
+ that:
+ - endpoint_delete_check is not changed
+
+ - name: Create interface endpoint
+ ec2_vpc_endpoint:
+ state: present
+ vpc_id: '{{ vpc_id }}'
+ service: '{{ endpoint_service_a }}'
+ vpc_endpoint_type: Interface
+ register: create_interface_endpoint
+ - name: Check that the interface endpoint was created properly
+ assert:
+ that:
+ - create_interface_endpoint is changed
+ - create_interface_endpoint.result.vpc_endpoint_type == "Interface"
+ - name: Delete interface endpoint
+ ec2_vpc_endpoint:
+ state: absent
+ vpc_endpoint_id: '{{ create_interface_endpoint.result.vpc_endpoint_id }}'
+ register: interface_endpoint_delete_check
+ - assert:
+ that:
+ - interface_endpoint_delete_check is changed
+
+ - name: Create a subnet
+ ec2_vpc_subnet:
+ state: present
+ vpc_id: '{{ vpc_id }}'
+ az: "{{ aws_region}}a"
+ cidr: "{{ vpc_cidr }}"
+ register: interface_endpoint_create_subnet_check
+
+ - name: Create a security group
+ ec2_group:
+ name: securitygroup-prodext
+ description: "security group for Ansible interface endpoint"
+ state: present
+ vpc_id: "{{ vpc.vpc.id }}"
+ rules:
+ - proto: tcp
+ from_port: 1
+ to_port: 65535
+ cidr_ip: 0.0.0.0/0
+ register: interface_endpoint_create_sg_check
+
+ - name: Create interface endpoint attached to a subnet
+ ec2_vpc_endpoint:
+ state: present
+ vpc_id: '{{ vpc_id }}'
+ service: '{{ endpoint_service_a }}'
+ vpc_endpoint_type: Interface
+ vpc_endpoint_subnets: "{{ interface_endpoint_create_subnet_check.subnet.id') }}"
+ vpc_endpoint_security_groups: "{{ interface_endpoint_create_sg_check.group_id }}"
+ register: create_interface_endpoint_with_sg_subnets
+ - name: Check that the interface endpoint was created properly
+ assert:
+ that:
+ - create_interface_endpoint_with_sg_subnets is changed
+ - create_interface_endpoint_with_sg_subnets.result.vpc_endpoint_type == "Interface"
+
+ - name: Delete interface endpoint
+ ec2_vpc_endpoint:
+ state: absent
+ vpc_endpoint_id: "{{ create_interface_endpoint_with_sg_subnets.result.vpc_endpoint_id }}"
+ register: create_interface_endpoint_with_sg_subnets_delete_check
+ - assert:
+ that:
+ - create_interface_endpoint_with_sg_subnets_delete_check is changed
+
+ # ============================================================
+ # BEGIN POST-TEST CLEANUP
+ always:
+ # Delete the routes first - you can't delete an endpoint with a route
+ # attached.
+ - name: Delete minimal route table (no routes)
+ ec2_vpc_route_table:
+ state: absent
+ lookup: id
+ route_table_id: '{{ rtb_creation_empty.route_table.id }}'
+ ignore_errors: true
+
+ - name: Delete minimal route table (IGW route)
+ ec2_vpc_route_table:
+ state: absent
+ lookup: id
+ route_table_id: '{{ rtb_creation_igw.route_table.id }}'
+ ignore_errors: true
+
+ - name: Delete endpoint
+ ec2_vpc_endpoint:
+ state: absent
+ vpc_endpoint_id: '{{ create_endpoint.result.vpc_endpoint_id }}'
+ ignore_errors: true
+
+ - name: Delete endpoint
+ ec2_vpc_endpoint:
+ state: absent
+ vpc_endpoint_id: '{{ create_rtb_endpoint.result.vpc_endpoint_id }}'
+ ignore_errors: true
+
+ - name: Query any remaining endpoints we created (idempotency work is ongoing) # FIXME
+ ec2_vpc_endpoint_info:
+ query: endpoints
+ filters:
+ vpc-id:
+ - '{{ vpc_id }}'
+ register: test_endpoints
+
+ - name: Delete all endpoints
+ ec2_vpc_endpoint:
+ state: absent
+ vpc_endpoint_id: '{{ item.vpc_endpoint_id }}'
+ with_items: '{{ test_endpoints.vpc_endpoints }}'
+ ignore_errors: true
+
+ - name: Remove IGW
+ ec2_vpc_igw:
+ state: absent
+ vpc_id: '{{ vpc_id }}'
+ register: igw_deletion
+ retries: 10
+ delay: 5
+ until: igw_deletion is success
+ ignore_errors: true
+
+ - name: Remove VPC
+ ec2_vpc_net:
+ state: absent
+ name: '{{ vpc_name }}'
+ cidr_block: '{{ vpc_cidr }}'
+ ignore_errors: true
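The tagging assertions in the tasks above repeatedly convert the AWS-style tag list returned in `result.tags` into a dict via `items2dict(key_name='Key', value_name='Value')`. As a rough illustration (plain Python, not part of the patch; the helper name and sample values are made up), the transformation amounts to:

```python
# Minimal sketch: collapse an AWS-style tag list, as returned in
# `result.tags` by ec2_vpc_endpoint, into a plain dict. This mirrors
# what the tests do with `items2dict(key_name='Key', value_name='Value')`.
def tags_to_dict(tag_list, key_name="Key", value_name="Value"):
    """Turn [{'Key': k, 'Value': v}, ...] into {k: v, ...}."""
    return {item[key_name]: item[value_name] for item in tag_list}

# Illustrative sample data (not taken from a real test run)
aws_tags = [
    {"Key": "testPrefix", "Value": "ansible-test-123"},
    {"Key": "camelCase", "Value": "helloWorld"},
]
print(tags_to_dict(aws_tags))
```

With the dict form in hand, assertions like `endpoint_tags["camelCase"] == "helloWorld"` become straightforward lookups.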
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint_service_info/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint_service_info/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint_service_info/aliases 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint_service_info/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,2 @@
+cloud/aws
+ec2_vpc_endpoint_service_info
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint_service_info/defaults/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint_service_info/defaults/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint_service_info/defaults/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint_service_info/defaults/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,3 @@
+search_service_names:
+- 'com.amazonaws.{{ aws_region }}.s3'
+- 'com.amazonaws.{{ aws_region }}.ec2'
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint_service_info/meta/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint_service_info/meta/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint_service_info/meta/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint_service_info/meta/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,2 @@
+dependencies:
+ - setup_remote_tmp_dir
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint_service_info/tasks/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint_service_info/tasks/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint_service_info/tasks/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_endpoint_service_info/tasks/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,135 @@
+---
+- module_defaults:
+ group/aws:
+ aws_access_key: '{{ aws_access_key }}'
+ aws_secret_key: '{{ aws_secret_key }}'
+ security_token: '{{ security_token | default(omit) }}'
+ region: '{{ aws_region }}'
+ collections:
+ - amazon.aws
+ - community.aws
+ block:
+
+ - name: 'List all available services (Check Mode)'
+ ec2_vpc_endpoint_service_info:
+ check_mode: true
+ register: services_check
+
+ - name: 'Verify services (Check Mode)'
+ vars:
+ first_service: '{{ services_check.service_details[0] }}'
+ assert:
+ that:
+ - services_check is successful
+ - services_check is not changed
+ - '"service_names" in services_check'
+ - '"service_details" in services_check'
+ - '"acceptance_required" in first_service'
+ - '"availability_zones" in first_service'
+ - '"base_endpoint_dns_names" in first_service'
+ - '"manages_vpc_endpoints" in first_service'
+ - '"owner" in first_service'
+ - '"private_dns_name" in first_service'
+ - '"private_dns_name_verification_state" in first_service'
+ - '"service_id" in first_service'
+ - '"service_name" in first_service'
+ - '"service_type" in first_service'
+ - '"tags" in first_service'
+ - '"vpc_endpoint_policy_supported" in first_service'
+
+ - name: 'List all available services'
+ ec2_vpc_endpoint_service_info:
+ register: services_info
+
+ - name: 'Verify services'
+ vars:
+ first_service: '{{ services_info.service_details[0] }}'
+ assert:
+ that:
+ - services_info is successful
+ - services_info is not changed
+ - '"service_names" in services_info'
+ - '"service_details" in services_info'
+ - '"acceptance_required" in first_service'
+ - '"availability_zones" in first_service'
+ - '"base_endpoint_dns_names" in first_service'
+ - '"manages_vpc_endpoints" in first_service'
+ - '"owner" in first_service'
+ - '"private_dns_name" in first_service'
+ - '"private_dns_name_verification_state" in first_service'
+ - '"service_id" in first_service'
+ - '"service_name" in first_service'
+ - '"service_type" in first_service'
+ - '"tags" in first_service'
+ - '"vpc_endpoint_policy_supported" in first_service'
+
+ - name: 'Limit services by name'
+ ec2_vpc_endpoint_service_info:
+ service_names: '{{ search_service_names }}'
+ register: services_info
+
+ - name: 'Verify services'
+ vars:
+ first_service: '{{ services_info.service_details[0] }}'
+ # The same service name sometimes pops up twice. s3, for example, has
+ # s3.us-east-1.amazonaws.com and s3.us-east-1.vpce.amazonaws.com which are
+ # part of com.amazonaws.us-east-1.s3 so we need to run the results through
+ # the unique filter to know if we've got what we think we have
+ unique_names: '{{ services_info.service_names | unique | list }}'
+ unique_detail_names: '{{ services_info.service_details | map(attribute="service_name") | unique | list }}'
+ assert:
+ that:
+ - services_info is successful
+ - services_info is not changed
+ - '"service_names" in services_info'
+ - (unique_names | length) == (search_service_names | length)
+ - (unique_detail_names | length ) == (search_service_names | length)
+ - (unique_names | difference(search_service_names) | length) == 0
+ - (unique_detail_names | difference(search_service_names) | length) == 0
+ - '"service_details" in services_info'
+ - '"acceptance_required" in first_service'
+ - '"availability_zones" in first_service'
+ - '"base_endpoint_dns_names" in first_service'
+ - '"manages_vpc_endpoints" in first_service'
+ - '"owner" in first_service'
+ - '"private_dns_name" in first_service'
+ - '"private_dns_name_verification_state" in first_service'
+ - '"service_id" in first_service'
+ - '"service_name" in first_service'
+ - '"service_type" in first_service'
+ - '"tags" in first_service'
+ - '"vpc_endpoint_policy_supported" in first_service'
+
+ - name: 'Grab single service details to test filters'
+ set_fact:
+ example_service: '{{ services_info.service_details[0] }}'
+
+ - name: 'Limit services by filter'
+ ec2_vpc_endpoint_service_info:
+ filters:
+ service-name: '{{ example_service.service_name }}'
+ register: filtered_service
+
+ - name: 'Verify services'
+ vars:
+ first_service: '{{ filtered_service.service_details[0] }}'
+ assert:
+ that:
+ - filtered_service is successful
+ - filtered_service is not changed
+ - '"service_names" in filtered_service'
+ - filtered_service.service_names | length == 1
+ - '"service_details" in filtered_service'
+ - filtered_service.service_details | length == 1
+ - '"acceptance_required" in first_service'
+ - '"availability_zones" in first_service'
+ - '"base_endpoint_dns_names" in first_service'
+ - '"manages_vpc_endpoints" in first_service'
+ - '"owner" in first_service'
+ - '"private_dns_name" in first_service'
+ - '"private_dns_name_verification_state" in first_service'
+ - '"service_id" in first_service'
+ - '"service_name" in first_service'
+ - '"service_type" in first_service'
+ - '"tags" in first_service'
+ - '"vpc_endpoint_policy_supported" in first_service'
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_igw/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_igw/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_igw/aliases 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_igw/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,3 @@
+cloud/aws
+
+ec2_vpc_igw_info
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_igw/defaults/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_igw/defaults/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_igw/defaults/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_igw/defaults/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,3 @@
+vpc_name: '{{ resource_prefix }}-vpc'
+vpc_seed: '{{ resource_prefix }}'
+vpc_cidr: 10.{{ 256 | random(seed=vpc_seed) }}.0.0/16
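The defaults above derive a per-run VPC CIDR from `resource_prefix` via Jinja's seeded `random` filter, so every task in a test run lands in the same `10.X.0.0/16` block while parallel runs are unlikely to collide. A rough Python analogue (Python's PRNG seeding differs from Jinja's, so the chosen octet will not match what Ansible picks):

```python
import random

def seeded_vpc_cidr(seed: str) -> str:
    """Pick a deterministic 10.X.0.0/16 CIDR from a seed string,
    analogous to the `10.{{ 256 | random(seed=vpc_seed) }}.0.0/16`
    default above. Same seed -> same CIDR; different seeds usually
    land in different /16 blocks."""
    octet = random.Random(seed).randrange(256)
    return f"10.{octet}.0.0/16"

print(seeded_vpc_cidr("ansible-test-123"))
```

The point of seeding, rather than picking a fresh random octet each task, is that re-evaluating the default always yields the same CIDR within one run.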
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_igw/tasks/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_igw/tasks/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_igw/tasks/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_igw/tasks/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,550 @@
+- name: ec2_vpc_igw tests
+ module_defaults:
+ group/aws:
+ aws_access_key: '{{ aws_access_key }}'
+ aws_secret_key: '{{ aws_secret_key }}'
+ security_token: '{{ security_token | default(omit) }}'
+ region: '{{ aws_region }}'
+ block:
+ # ============================================================
+ - name: Fetch IGWs in check_mode
+ ec2_vpc_igw_info:
+ register: igw_info
+ check_mode: true
+ - name: Assert success
+ assert:
+ that:
+ - igw_info is successful
+ - '"internet_gateways" in igw_info'
+
+ # ============================================================
+ - name: Create a VPC
+ ec2_vpc_net:
+ name: '{{ vpc_name }}'
+ state: present
+ cidr_block: '{{ vpc_cidr }}'
+ tags:
+ Name: '{{ resource_prefix }}-vpc'
+ Description: Created by ansible-test
+ register: vpc_result
+ - name: Assert success
+ assert:
+ that:
+ - vpc_result is successful
+ - '"vpc" in vpc_result'
+ - '"id" in vpc_result.vpc'
+ - vpc_result.vpc.state == 'available'
+ - '"tags" in vpc_result.vpc'
+ - vpc_result.vpc.tags | length == 2
+ - vpc_result.vpc.tags["Name"] == "{{ resource_prefix }}-vpc"
+ - vpc_result.vpc.tags["Description"] == "Created by ansible-test"
+
+ # ============================================================
+ - name: Search for internet gateway by VPC - no matches
+ ec2_vpc_igw_info:
+ filters:
+ attachment.vpc-id: '{{ vpc_result.vpc.id }}'
+ register: igw_info
+
+ - name: Assert success
+ assert:
+ that:
+ - igw_info is successful
+ - '"internet_gateways" in igw_info'
+ - (igw_info.internet_gateways | length) == 0
+
+ # ============================================================
+ - name: Create internet gateway (expected changed=true) - CHECK_MODE
+ ec2_vpc_igw:
+ state: present
+ vpc_id: '{{ vpc_result.vpc.id }}'
+ tags:
+ tag_one: '{{ resource_prefix }} One'
+ Tag Two: two {{ resource_prefix }}
+ register: vpc_igw_create
+ check_mode: yes
+
+ - name: Assert creation would happen (expected changed=true) - CHECK_MODE
+ assert:
+ that:
+ - vpc_igw_create is changed
+
+ - name: Create internet gateway (expected changed=true)
+ ec2_vpc_igw:
+ state: present
+ vpc_id: '{{ vpc_result.vpc.id }}'
+ tags:
+ tag_one: '{{ resource_prefix }} One'
+ Tag Two: two {{ resource_prefix }}
+ register: vpc_igw_create
+
+ - name: Assert creation happened (expected changed=true)
+ assert:
+ that:
+ - vpc_igw_create is changed
+ - vpc_igw_create.gateway_id.startswith("igw-")
+ - vpc_igw_create.vpc_id == vpc_result.vpc.id
+ - '"tags" in vpc_igw_create'
+ - vpc_igw_create.tags | length == 2
+ - vpc_igw_create.tags["tag_one"] == '{{ resource_prefix }} One'
+ - vpc_igw_create.tags["Tag Two"] == 'two {{ resource_prefix }}'
+ - '"gateway_id" in vpc_igw_create'
+
+ # ============================================================
+ - name: Save IDs for later
+ set_fact:
+ igw_id: '{{ vpc_igw_create.gateway_id }}'
+ vpc_id: '{{ vpc_result.vpc.id }}'
+
+ - name: Search for internet gateway by VPC
+ ec2_vpc_igw_info:
+ filters:
+ attachment.vpc-id: '{{ vpc_id }}'
+ register: igw_info
+
+ - name: Check standard IGW details
+ assert:
+ that:
+ - '"internet_gateways" in igw_info'
+ - igw_info.internet_gateways | length == 1
+ - '"attachments" in current_igw'
+ - current_igw.attachments | length == 1
+ - '"state" in current_igw.attachments[0]'
+ - current_igw.attachments[0].state == "available"
+ - '"vpc_id" in current_igw.attachments[0]'
+ - current_igw.attachments[0].vpc_id == vpc_id
+ - '"internet_gateway_id" in current_igw'
+ - current_igw.internet_gateway_id == igw_id
+ - '"tags" in current_igw'
+ - current_igw.tags | length == 2
+ - '"key" in current_igw.tags[0]'
+ - '"value" in current_igw.tags[0]'
+ - '"key" in current_igw.tags[1]'
+ - '"value" in current_igw.tags[1]'
+ # Order isn't guaranteed in boto3 style, so just check the keys and
+ # values we expect are in there.
+ - current_igw.tags[0].key in ["tag_one", "Tag Two"]
+ - current_igw.tags[1].key in ["tag_one", "Tag Two"]
+ - current_igw.tags[0].value in [resource_prefix + " One", "two " + resource_prefix]
+ - current_igw.tags[1].value in [resource_prefix + " One", "two " + resource_prefix]
+ vars:
+ current_igw: '{{ igw_info.internet_gateways[0] }}'
+
+ - name: Fetch IGW by ID
+ ec2_vpc_igw_info:
+ internet_gateway_ids: '{{ igw_id }}'
+ convert_tags: yes
+ register: igw_info
+
+ - name: Check standard IGW details
+ assert:
+ that:
+ - '"internet_gateways" in igw_info'
+ - igw_info.internet_gateways | length == 1
+ - '"attachments" in current_igw'
+ - current_igw.attachments | length == 1
+ - '"state" in current_igw.attachments[0]'
+ - current_igw.attachments[0].state == "available"
+ - '"vpc_id" in current_igw.attachments[0]'
+ - current_igw.attachments[0].vpc_id == vpc_id
+ - '"internet_gateway_id" in current_igw'
+ - current_igw.internet_gateway_id == igw_id
+ - '"tags" in current_igw'
+ - current_igw.tags | length == 2
+ - '"tag_one" in current_igw.tags'
+ - '"Tag Two" in current_igw.tags'
+ - current_igw.tags["tag_one"] == '{{ resource_prefix }} One'
+ - current_igw.tags["Tag Two"] == 'two {{ resource_prefix }}'
+ vars:
+ current_igw: '{{ igw_info.internet_gateways[0] }}'
+
+ - name: Fetch IGW by ID (list)
+ ec2_vpc_igw_info:
+ internet_gateway_ids:
+ - '{{ igw_id }}'
+ register: igw_info
+
+ - name: Check standard IGW details
+ assert:
+ that:
+ - '"internet_gateways" in igw_info'
+ - igw_info.internet_gateways | length == 1
+ - '"attachments" in current_igw'
+ - current_igw.attachments | length == 1
+ - '"state" in current_igw.attachments[0]'
+ - current_igw.attachments[0].state == "available"
+ - '"vpc_id" in current_igw.attachments[0]'
+ - current_igw.attachments[0].vpc_id == vpc_id
+ - '"internet_gateway_id" in current_igw'
+ - current_igw.internet_gateway_id == igw_id
+ - '"tags" in current_igw'
+ vars:
+ current_igw: '{{ igw_info.internet_gateways[0] }}'
+
+ - name: Attempt to recreate internet gateway on VPC (expected changed=false) - CHECK_MODE
+ ec2_vpc_igw:
+ state: present
+ vpc_id: '{{ vpc_result.vpc.id }}'
+ register: vpc_igw_recreate
+ check_mode: yes
+
+ - name: Assert recreation would do nothing (expected changed=false) - CHECK_MODE
+ assert:
+ that:
+ - vpc_igw_recreate is not changed
+ - vpc_igw_recreate.gateway_id == igw_id
+ - vpc_igw_recreate.vpc_id == vpc_id
+ - '"tags" in vpc_igw_recreate'
+ - vpc_igw_recreate.tags | length == 2
+ - vpc_igw_recreate.tags["tag_one"] == '{{ resource_prefix }} One'
+ - vpc_igw_recreate.tags["Tag Two"] == 'two {{ resource_prefix }}'
+
+ - name: Attempt to recreate internet gateway on VPC (expected changed=false)
+ ec2_vpc_igw:
+ state: present
+ vpc_id: '{{ vpc_result.vpc.id }}'
+ register: vpc_igw_recreate
+
+ - name: Assert recreation did nothing (expected changed=false)
+ assert:
+ that:
+ - vpc_igw_recreate is not changed
+ - vpc_igw_recreate.gateway_id == igw_id
+ - vpc_igw_recreate.vpc_id == vpc_id
+ - '"tags" in vpc_igw_recreate'
+ - vpc_igw_recreate.tags | length == 2
+ - vpc_igw_recreate.tags["tag_one"] == '{{ resource_prefix }} One'
+ - vpc_igw_recreate.tags["Tag Two"] == 'two {{ resource_prefix }}'
+
+ # ============================================================
+ - name: Update the tags (no change) - CHECK_MODE
+ ec2_vpc_igw:
+ state: present
+ vpc_id: '{{ vpc_result.vpc.id }}'
+ tags:
+ tag_one: '{{ resource_prefix }} One'
+ Tag Two: two {{ resource_prefix }}
+ register: vpc_igw_recreate
+ check_mode: yes
+
+ - name: Assert tag update would do nothing (expected changed=false) - CHECK_MODE
+ assert:
+ that:
+ - vpc_igw_recreate is not changed
+ - vpc_igw_recreate.gateway_id == igw_id
+ - vpc_igw_recreate.vpc_id == vpc_id
+ - '"tags" in vpc_igw_recreate'
+ - vpc_igw_recreate.tags | length == 2
+ - vpc_igw_recreate.tags["tag_one"] == '{{ resource_prefix }} One'
+ - vpc_igw_recreate.tags["Tag Two"] == 'two {{ resource_prefix }}'
+
+ - name: Update the tags (no change)
+ ec2_vpc_igw:
+ state: present
+ vpc_id: '{{ vpc_result.vpc.id }}'
+ tags:
+ tag_one: '{{ resource_prefix }} One'
+ Tag Two: two {{ resource_prefix }}
+ register: vpc_igw_recreate
+
+ - name: Assert tag update did nothing (expected changed=false)
+ assert:
+ that:
+ - vpc_igw_recreate is not changed
+ - vpc_igw_recreate.gateway_id == igw_id
+ - vpc_igw_recreate.vpc_id == vpc_id
+ - '"tags" in vpc_igw_recreate'
+ - vpc_igw_recreate.tags | length == 2
+ - vpc_igw_recreate.tags["tag_one"] == '{{ resource_prefix }} One'
+ - vpc_igw_recreate.tags["Tag Two"] == 'two {{ resource_prefix }}'
+
+ # ============================================================
+ - name: Update the tags (remove and add) - CHECK_MODE
+ ec2_vpc_igw:
+ state: present
+ vpc_id: '{{ vpc_result.vpc.id }}'
+ tags:
+ tag_three: '{{ resource_prefix }} Three'
+ Tag Two: two {{ resource_prefix }}
+ register: vpc_igw_update
+ check_mode: yes
+
+ - name: Assert tag update would happen (expected changed=true) - CHECK_MODE
+ assert:
+ that:
+ - vpc_igw_update is changed
+ - vpc_igw_update.gateway_id == igw_id
+ - vpc_igw_update.vpc_id == vpc_id
+ - '"tags" in vpc_igw_update'
+ - vpc_igw_update.tags | length == 2
+
+ - name: Update the tags (remove and add)
+ ec2_vpc_igw:
+ state: present
+ vpc_id: '{{ vpc_result.vpc.id }}'
+ tags:
+ tag_three: '{{ resource_prefix }} Three'
+ Tag Two: two {{ resource_prefix }}
+ register: vpc_igw_update
+
+ - name: Assert tags are updated (expected changed=true)
+ assert:
+ that:
+ - vpc_igw_update is changed
+ - vpc_igw_update.gateway_id == igw_id
+ - vpc_igw_update.vpc_id == vpc_id
+ - '"tags" in vpc_igw_update'
+ - vpc_igw_update.tags | length == 2
+ - vpc_igw_update.tags["tag_three"] == '{{ resource_prefix }} Three'
+ - vpc_igw_update.tags["Tag Two"] == 'two {{ resource_prefix }}'
+
+ # ============================================================
+ - name: Update the tags add without purge - CHECK_MODE
+ ec2_vpc_igw:
+ state: present
+ vpc_id: '{{ vpc_result.vpc.id }}'
+ purge_tags: no
+ tags:
+ tag_one: '{{ resource_prefix }} One'
+ register: vpc_igw_update
+ check_mode: yes
+
+ - name: Assert tags would be added - CHECK_MODE
+ assert:
+ that:
+ - vpc_igw_update is changed
+ - vpc_igw_update.gateway_id == igw_id
+ - vpc_igw_update.vpc_id == vpc_id
+ - '"tags" in vpc_igw_update'
+
+ - name: Update the tags add without purge
+ ec2_vpc_igw:
+ state: present
+ vpc_id: '{{ vpc_result.vpc.id }}'
+ purge_tags: no
+ tags:
+ tag_one: '{{ resource_prefix }} One'
+ register: vpc_igw_update
+
+ - name: Assert tags added
+ assert:
+ that:
+ - vpc_igw_update is changed
+ - vpc_igw_update.gateway_id == igw_id
+ - vpc_igw_update.vpc_id == vpc_id
+ - '"tags" in vpc_igw_update'
+ - vpc_igw_update.tags | length == 3
+ - vpc_igw_update.tags["tag_one"] == '{{ resource_prefix }} One'
+ - vpc_igw_update.tags["tag_three"] == '{{ resource_prefix }} Three'
+ - vpc_igw_update.tags["Tag Two"] == 'two {{ resource_prefix }}'
+
+
+ # ============================================================
+ - name: Update with CamelCase tags - CHECK_MODE
+ ec2_vpc_igw:
+ state: present
+ vpc_id: '{{ vpc_result.vpc.id }}'
+ tags:
+ lowercase spaced: "hello cruel world"
+ Title Case: "Hello Cruel World"
+ CamelCase: "SimpleCamelCase"
+ snake_case: "simple_snake_case"
+ register: vpc_igw_update
+ check_mode: yes
+
+ - name: Assert tag update would happen (expected changed=true) - CHECK_MODE
+ assert:
+ that:
+ - vpc_igw_update is changed
+ - vpc_igw_update.gateway_id == igw_id
+ - vpc_igw_update.vpc_id == vpc_id
+ - '"tags" in vpc_igw_update'
+
+ - name: Update with CamelCase tags
+ ec2_vpc_igw:
+ state: present
+ vpc_id: '{{ vpc_result.vpc.id }}'
+ tags:
+ lowercase spaced: "hello cruel world"
+ Title Case: "Hello Cruel World"
+ CamelCase: "SimpleCamelCase"
+ snake_case: "simple_snake_case"
+ register: vpc_igw_update
+
+ - name: Assert tags are updated (expected changed=true)
+ assert:
+ that:
+ - vpc_igw_update is changed
+ - vpc_igw_update.gateway_id == igw_id
+ - vpc_igw_update.vpc_id == vpc_id
+ - '"tags" in vpc_igw_update'
+ - vpc_igw_update.tags | length == 4
+ - vpc_igw_update.tags["lowercase spaced"] == 'hello cruel world'
+ - vpc_igw_update.tags["Title Case"] == 'Hello Cruel World'
+ - vpc_igw_update.tags["CamelCase"] == 'SimpleCamelCase'
+ - vpc_igw_update.tags["snake_case"] == 'simple_snake_case'
+
+ # ============================================================
+ - name: Gather information about a filtered list of Internet Gateways using tags
+ ec2_vpc_igw_info:
+ filters:
+ tag:Title Case: "Hello Cruel World"
+ register: igw_info
+
+ - name: Assert success
+ assert:
+ that:
+ - igw_info is successful
+ - '"internet_gateways" in igw_info'
+ - igw_info.internet_gateways | selectattr("internet_gateway_id", 'equalto',
+ igw_id) | list | length == 1
+
+ - name: Gather information about a filtered list of Internet Gateways using tags - CHECK_MODE
+ ec2_vpc_igw_info:
+ filters:
+ tag:Title Case: "Hello Cruel World"
+ register: igw_info
+ check_mode: yes
+
+ - name: Assert success - CHECK_MODE
+ assert:
+ that:
+ - igw_info is successful
+ - '"internet_gateways" in igw_info'
+ - igw_info.internet_gateways | selectattr("internet_gateway_id", 'equalto',
+ igw_id) | list | length == 1
+
+ # ============================================================
+ - name: Gather information about a filtered list of Internet Gateways using tags (no match)
+ ec2_vpc_igw_info:
+ filters:
+ tag:tag_one: '{{ resource_prefix }} One'
+ register: igw_info
+
+ - name: Assert success
+ assert:
+ that:
+ - igw_info is successful
+ - '"internet_gateways" in igw_info'
+ - igw_info.internet_gateways | length == 0
+
+ - name: Gather information about a filtered list of Internet Gateways using tags (no match) - CHECK_MODE
+ ec2_vpc_igw_info:
+ filters:
+ tag:tag_one: '{{ resource_prefix }} One'
+ register: igw_info
+ check_mode: yes
+
+ - name: Assert success - CHECK_MODE
+ assert:
+ that:
+ - igw_info is successful
+ - '"internet_gateways" in igw_info'
+ - igw_info.internet_gateways | length == 0
+
+ # ============================================================
+ - name: Remove all tags - CHECK_MODE
+ ec2_vpc_igw:
+ state: present
+ vpc_id: '{{ vpc_result.vpc.id }}'
+ tags: {}
+ register: vpc_igw_update
+ check_mode: yes
+
+ - name: Assert tags would be removed - CHECK_MODE
+ assert:
+ that:
+ - vpc_igw_update is changed
+
+ - name: Remove all tags
+ ec2_vpc_igw:
+ state: present
+ vpc_id: '{{ vpc_result.vpc.id }}'
+ tags: {}
+ register: vpc_igw_update
+
+ - name: Assert tags removed
+ assert:
+ that:
+ - vpc_igw_update is changed
+ - vpc_igw_update.gateway_id == igw_id
+ - vpc_igw_update.vpc_id == vpc_id
+ - '"tags" in vpc_igw_update'
+ - vpc_igw_update.tags | length == 0
+
+ # ============================================================
+ - name: Test state=absent (expected changed=true) - CHECK_MODE
+ ec2_vpc_igw:
+ state: absent
+ vpc_id: '{{ vpc_result.vpc.id }}'
+ register: vpc_igw_delete
+ check_mode: yes
+
+ - name: Assert state=absent (expected changed=true) - CHECK_MODE
+ assert:
+ that:
+ - vpc_igw_delete is changed
+
+ - name: Test state=absent (expected changed=true)
+ ec2_vpc_igw:
+ state: absent
+ vpc_id: '{{ vpc_result.vpc.id }}'
+ register: vpc_igw_delete
+
+ - name: Assert state=absent (expected changed=true)
+ assert:
+ that:
+ - vpc_igw_delete is changed
+
+ # ============================================================
+ - name: Fetch IGW by ID (list)
+ ec2_vpc_igw_info:
+ internet_gateway_ids:
+ - '{{ igw_id }}'
+ register: igw_info
+ ignore_errors: true
+
+ - name: Check IGW does not exist
+ assert:
+ that:
+ # Deliberate choice not to change behaviour when searching by ID
+ - igw_info is failed
+
+ # ============================================================
+ - name: Test state=absent when already deleted (expected changed=false) - CHECK_MODE
+ ec2_vpc_igw:
+ state: absent
+ vpc_id: '{{ vpc_result.vpc.id }}'
+ register: vpc_igw_delete
+ check_mode: yes
+
+ - name: Assert state=absent (expected changed=false) - CHECK_MODE
+ assert:
+ that:
+ - vpc_igw_delete is not changed
+
+ - name: Test state=absent when already deleted (expected changed=false)
+ ec2_vpc_igw:
+ state: absent
+ vpc_id: '{{ vpc_result.vpc.id }}'
+ register: vpc_igw_delete
+
+ - name: Assert state=absent (expected changed=false)
+ assert:
+ that:
+ - vpc_igw_delete is not changed
+
+ always:
+ # ============================================================
+ - name: Tidy up IGW
+ ec2_vpc_igw:
+ state: absent
+ vpc_id: '{{ vpc_result.vpc.id }}'
+ ignore_errors: true
+
+ - name: Tidy up VPC
+ ec2_vpc_net:
+ name: '{{ vpc_name }}'
+ state: absent
+ cidr_block: '{{ vpc_cidr }}'
+ ignore_errors: true
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_nat_gateway/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_nat_gateway/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_nat_gateway/aliases 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_nat_gateway/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,2 @@
+cloud/aws
+ec2_vpc_nat_gateway_info
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_nat_gateway/defaults/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_nat_gateway/defaults/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_nat_gateway/defaults/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_nat_gateway/defaults/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,4 @@
+vpc_name: '{{ resource_prefix }}-vpc'
+vpc_seed: '{{ resource_prefix }}'
+vpc_cidr: 10.0.0.0/16
+subnet_cidr: 10.0.{{ 256 | random(seed=vpc_seed) }}.0/24
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_nat_gateway/tasks/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_nat_gateway/tasks/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_nat_gateway/tasks/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_nat_gateway/tasks/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,930 @@
+- name: ec2_vpc_nat_gateway tests
+ module_defaults:
+ group/aws:
+ aws_access_key: '{{ aws_access_key }}'
+ aws_secret_key: '{{ aws_secret_key }}'
+ security_token: '{{ security_token | default(omit) }}'
+ region: '{{ aws_region }}'
+ block:
+ # ============================================================
+ - name: Create a VPC
+ ec2_vpc_net:
+ name: '{{ vpc_name }}'
+ state: present
+ cidr_block: '{{ vpc_cidr }}'
+ register: vpc_result
+
+ - name: Assert success
+ assert:
+ that:
+ - vpc_result is successful
+ - '"vpc" in vpc_result'
+ - '"cidr_block" in vpc_result.vpc'
+ - vpc_result.vpc.cidr_block == vpc_cidr
+ - '"id" in vpc_result.vpc'
+ - vpc_result.vpc.id.startswith("vpc-")
+ - '"state" in vpc_result.vpc'
+ - vpc_result.vpc.state == 'available'
+ - '"tags" in vpc_result.vpc'
+
+ - name: 'Set fact: VPC ID'
+ set_fact:
+ vpc_id: '{{ vpc_result.vpc.id }}'
+
+
+ # ============================================================
+ - name: Allocate a new EIP
+ ec2_eip:
+ in_vpc: true
+ reuse_existing_ip_allowed: true
+ tag_name: FREE
+ register: eip_result
+
+ - name: Assert success
+ assert:
+ that:
+ - eip_result is successful
+ - '"allocation_id" in eip_result'
+ - eip_result.allocation_id.startswith("eipalloc-")
+ - '"public_ip" in eip_result'
+
+ - name: 'Set fact: EIP allocation ID and EIP public IP'
+ set_fact:
+ eip_address: '{{ eip_result.public_ip }}'
+ allocation_id: '{{ eip_result.allocation_id }}'
+
+
+ # ============================================================
+ - name: Create subnet and associate to the VPC
+ ec2_vpc_subnet:
+ state: present
+ vpc_id: '{{ vpc_id }}'
+ cidr: '{{ subnet_cidr }}'
+ register: subnet_result
+
+ - name: Assert success
+ assert:
+ that:
+ - subnet_result is successful
+ - '"subnet" in subnet_result'
+ - '"cidr_block" in subnet_result.subnet'
+ - subnet_result.subnet.cidr_block == subnet_cidr
+ - '"id" in subnet_result.subnet'
+ - subnet_result.subnet.id.startswith("subnet-")
+ - '"state" in subnet_result.subnet'
+ - subnet_result.subnet.state == 'available'
+ - '"tags" in subnet_result.subnet'
+ - subnet_result.subnet.vpc_id == vpc_id
+
+ - name: 'Set fact: VPC subnet ID'
+ set_fact:
+ subnet_id: '{{ subnet_result.subnet.id }}'
+
+
+ # ============================================================
+ - name: Search for NAT gateways by subnet (no matches) - CHECK_MODE
+ ec2_vpc_nat_gateway_info:
+ filters:
+ subnet-id: '{{ subnet_id }}'
+ state: [available]
+ register: existing_ngws
+ check_mode: yes
+
+ - name: Assert no NAT gateway found - CHECK_MODE
+ assert:
+ that:
+ - existing_ngws is successful
+ - (existing_ngws.result|length) == 0
+
+ - name: Search for NAT gateways by subnet - no matches
+ ec2_vpc_nat_gateway_info:
+ filters:
+ subnet-id: '{{ subnet_id }}'
+ state: [available]
+ register: existing_ngws
+
+ - name: Assert no NAT gateway found
+ assert:
+ that:
+ - existing_ngws is successful
+ - (existing_ngws.result|length) == 0
+
+
+ # ============================================================
+ - name: Create IGW
+ ec2_vpc_igw:
+ vpc_id: '{{ vpc_id }}'
+ register: create_igw
+
+ - name: Assert success
+ assert:
+ that:
+ - create_igw is successful
+ - create_igw.gateway_id.startswith("igw-")
+ - create_igw.vpc_id == vpc_id
+ - '"gateway_id" in create_igw'
+
+
+ # ============================================================
+ - name: Create new NAT gateway with eip allocation-id - CHECK_MODE
+ ec2_vpc_nat_gateway:
+ subnet_id: '{{ subnet_id }}'
+ allocation_id: '{{ allocation_id }}'
+ wait: yes
+ register: create_ngw
+ check_mode: yes
+
+ - name: Assert creation would happen (expected changed=true) - CHECK_MODE
+ assert:
+ that:
+ - create_ngw.changed
+
+ - name: Create new NAT gateway with eip allocation-id
+ ec2_vpc_nat_gateway:
+ subnet_id: '{{ subnet_id }}'
+ allocation_id: '{{ allocation_id }}'
+ wait: yes
+ register: create_ngw
+
+ - name: Assert creation happened (expected changed=true)
+ assert:
+ that:
+ - create_ngw.changed
+ - '"create_time" in create_ngw'
+ - '"nat_gateway_addresses" in create_ngw'
+ - '"nat_gateway_id" in create_ngw'
+ - create_ngw.nat_gateway_addresses[0].allocation_id == allocation_id
+ - create_ngw.nat_gateway_id.startswith("nat-")
+ - '"state" in create_ngw'
+ - create_ngw.state == 'available'
+ - '"subnet_id" in create_ngw'
+ - create_ngw.subnet_id == subnet_id
+ - '"tags" in create_ngw'
+ - '"vpc_id" in create_ngw'
+ - create_ngw.vpc_id == vpc_id
+
+ - name: 'Set facts: NAT gateway ID'
+ set_fact:
+ nat_gateway_id: '{{ create_ngw.nat_gateway_id }}'
+ network_interface_id: '{{ create_ngw.nat_gateway_addresses[0].network_interface_id }}'
+
+
+ # ============================================================
+ - name: Get NAT gateway with specific filters (state and subnet)
+ ec2_vpc_nat_gateway_info:
+ filters:
+ subnet-id: '{{ subnet_id }}'
+ state: [available]
+ register: available_ngws
+
+ - name: Assert success
+ assert:
+ that:
+ - available_ngws is successful
+ - available_ngws.result | length == 1
+ - '"create_time" in first_ngw'
+ - '"nat_gateway_addresses" in first_ngw'
+ - '"nat_gateway_id" in first_ngw'
+ - first_ngw.nat_gateway_id == nat_gateway_id
+ - '"state" in first_ngw'
+ - first_ngw.state == 'available'
+ - '"subnet_id" in first_ngw'
+ - first_ngw.subnet_id == subnet_id
+ - '"tags" in first_ngw'
+ - '"vpc_id" in first_ngw'
+ - first_ngw.vpc_id == vpc_id
+ vars:
+ first_ngw: '{{ available_ngws.result[0] }}'
+
+ # ============================================================
+ - name: Trying this again for idempotency - create new NAT gateway with eip allocation-id
+ - CHECK_MODE
+ ec2_vpc_nat_gateway:
+ subnet_id: '{{ subnet_id }}'
+ allocation_id: '{{ allocation_id }}'
+ wait: yes
+ register: create_ngw
+ check_mode: yes
+
+ - name: Assert recreation would do nothing (expected changed=false) - CHECK_MODE
+ assert:
+ that:
+ - not create_ngw.changed
+ - '"create_time" in create_ngw'
+ - '"nat_gateway_addresses" in create_ngw'
+ - '"nat_gateway_id" in create_ngw'
+ - create_ngw.nat_gateway_addresses[0].allocation_id == allocation_id
+ - create_ngw.nat_gateway_id.startswith("nat-")
+ - '"state" in create_ngw'
+ - create_ngw.state == 'available'
+ - '"subnet_id" in create_ngw'
+ - create_ngw.subnet_id == subnet_id
+ - '"tags" in create_ngw'
+ - '"vpc_id" in create_ngw'
+ - create_ngw.vpc_id == vpc_id
+
+ - name: Trying this again for idempotency - create new NAT gateway with eip allocation-id
+ ec2_vpc_nat_gateway:
+ subnet_id: '{{ subnet_id }}'
+ allocation_id: '{{ allocation_id }}'
+ wait: yes
+ register: create_ngw
+
+ - name: Assert recreation did nothing (expected changed=false)
+ assert:
+ that:
+ - not create_ngw.changed
+ - '"create_time" in create_ngw'
+ - '"nat_gateway_addresses" in create_ngw'
+ - '"nat_gateway_id" in create_ngw'
+ - create_ngw.nat_gateway_addresses[0].allocation_id == allocation_id
+ - create_ngw.nat_gateway_id.startswith("nat-")
+ - '"state" in create_ngw'
+ - create_ngw.state == 'available'
+ - '"subnet_id" in create_ngw'
+ - create_ngw.subnet_id == subnet_id
+ - '"tags" in create_ngw'
+ - '"vpc_id" in create_ngw'
+ - create_ngw.vpc_id == vpc_id
+
+
+ # ============================================================
+ - name: Create new NAT gateway only if one does not exist already - CHECK_MODE
+ ec2_vpc_nat_gateway:
+ if_exist_do_not_create: yes
+ subnet_id: '{{ subnet_id }}'
+ wait: yes
+ register: create_ngw
+ check_mode: yes
+
+ - name: Assert recreation would do nothing (expected changed=false) - CHECK_MODE
+ assert:
+ that:
+ - not create_ngw.changed
+ - '"create_time" in create_ngw'
+ - '"nat_gateway_addresses" in create_ngw'
+ - '"nat_gateway_id" in create_ngw'
+ - create_ngw.nat_gateway_addresses[0].allocation_id == allocation_id
+ - create_ngw.nat_gateway_id.startswith("nat-")
+ - '"state" in create_ngw'
+ - create_ngw.state == 'available'
+ - '"subnet_id" in create_ngw'
+ - create_ngw.subnet_id == subnet_id
+ - '"tags" in create_ngw'
+ - '"vpc_id" in create_ngw'
+ - create_ngw.vpc_id == vpc_id
+
+ - name: Create new NAT gateway only if one does not exist already
+ ec2_vpc_nat_gateway:
+ if_exist_do_not_create: yes
+ subnet_id: '{{ subnet_id }}'
+ wait: yes
+ register: create_ngw
+
+ - name: Assert recreation did nothing (expected changed=false)
+ assert:
+ that:
+ - not create_ngw.changed
+ - '"create_time" in create_ngw'
+ - '"nat_gateway_addresses" in create_ngw'
+ - '"nat_gateway_id" in create_ngw'
+ - create_ngw.nat_gateway_addresses[0].allocation_id == allocation_id
+ - create_ngw.nat_gateway_id.startswith("nat-")
+ - '"state" in create_ngw'
+ - create_ngw.state == 'available'
+ - '"subnet_id" in create_ngw'
+ - create_ngw.subnet_id == subnet_id
+ - '"tags" in create_ngw'
+ - '"vpc_id" in create_ngw'
+ - create_ngw.vpc_id == vpc_id
+
+
+ # ============================================================
+ - name: Allocate a new EIP
+ ec2_eip:
+ in_vpc: true
+ reuse_existing_ip_allowed: true
+ tag_name: FREE
+ register: eip_result
+
+ - name: Assert success
+ assert:
+ that:
+ - eip_result is successful
+ - '"allocation_id" in eip_result'
+ - eip_result.allocation_id.startswith("eipalloc-")
+ - '"public_ip" in eip_result'
+
+ - name: 'Set fact: EIP allocation ID and EIP public IP'
+ set_fact:
+ second_eip_address: '{{ eip_result.public_ip }}'
+ second_allocation_id: '{{ eip_result.allocation_id }}'
+
+
+ # ============================================================
+ - name: Create new NAT gateway with eip address - CHECK_MODE
+ ec2_vpc_nat_gateway:
+ subnet_id: '{{ subnet_id }}'
+ eip_address: '{{ second_eip_address }}'
+ wait: yes
+ register: create_ngw
+ check_mode: yes
+
+ - name: Assert creation would happen (expected changed=true) - CHECK_MODE
+ assert:
+ that:
+ - create_ngw.changed
+
+ - name: Create new NAT gateway with eip address
+ ec2_vpc_nat_gateway:
+ subnet_id: '{{ subnet_id }}'
+ eip_address: '{{ second_eip_address }}'
+ wait: yes
+ register: create_ngw
+
+ - name: Assert creation happened (expected changed=true)
+ assert:
+ that:
+ - create_ngw.changed
+ - '"create_time" in create_ngw'
+ - '"nat_gateway_addresses" in create_ngw'
+ - '"nat_gateway_id" in create_ngw'
+ - create_ngw.nat_gateway_addresses[0].allocation_id == second_allocation_id
+ - create_ngw.nat_gateway_id.startswith("nat-")
+ - '"state" in create_ngw'
+ - create_ngw.state == 'available'
+ - '"subnet_id" in create_ngw'
+ - create_ngw.subnet_id == subnet_id
+ - '"tags" in create_ngw'
+ - '"vpc_id" in create_ngw'
+ - create_ngw.vpc_id == vpc_id
+
+
+ # ============================================================
+ - name: Trying this again for idempotency - create new NAT gateway with eip address - CHECK_MODE
+ ec2_vpc_nat_gateway:
+ subnet_id: '{{ subnet_id }}'
+ eip_address: '{{ second_eip_address }}'
+ wait: yes
+ register: create_ngw
+ check_mode: yes
+
+ - name: Assert recreation would do nothing (expected changed=false) - CHECK_MODE
+ assert:
+ that:
+ - not create_ngw.changed
+ - '"create_time" in create_ngw'
+ - '"nat_gateway_addresses" in create_ngw'
+ - '"nat_gateway_id" in create_ngw'
+ - create_ngw.nat_gateway_addresses[0].allocation_id == second_allocation_id
+ - create_ngw.nat_gateway_id.startswith("nat-")
+ - '"state" in create_ngw'
+ - create_ngw.state == 'available'
+ - '"subnet_id" in create_ngw'
+ - create_ngw.subnet_id == subnet_id
+ - '"tags" in create_ngw'
+ - '"vpc_id" in create_ngw'
+ - create_ngw.vpc_id == vpc_id
+
+ - name: Trying this again for idempotency - create new NAT gateway with eip address
+ ec2_vpc_nat_gateway:
+ subnet_id: '{{ subnet_id }}'
+ eip_address: '{{ second_eip_address }}'
+ wait: yes
+ register: create_ngw
+
+ - name: Assert recreation did nothing (expected changed=false)
+ assert:
+ that:
+ - not create_ngw.changed
+ - '"create_time" in create_ngw'
+ - '"nat_gateway_addresses" in create_ngw'
+ - '"nat_gateway_id" in create_ngw'
+ - create_ngw.nat_gateway_addresses[0].allocation_id == second_allocation_id
+ - create_ngw.nat_gateway_id.startswith("nat-")
+ - '"state" in create_ngw'
+ - create_ngw.state == 'available'
+ - '"subnet_id" in create_ngw'
+ - create_ngw.subnet_id == subnet_id
+ - '"tags" in create_ngw'
+ - '"vpc_id" in create_ngw'
+ - create_ngw.vpc_id == vpc_id
+
+
+ # ============================================================
+ - name: Fetch NAT gateway by ID (list)
+ ec2_vpc_nat_gateway_info:
+ nat_gateway_ids:
+ - '{{ nat_gateway_id }}'
+ register: ngw_info
+
+ - name: Check NAT gateway exists
+ assert:
+ that:
+ - ngw_info is successful
+ - ngw_info.result | length == 1
+ - '"create_time" in first_ngw'
+ - '"nat_gateway_addresses" in first_ngw'
+ - '"nat_gateway_id" in first_ngw'
+ - first_ngw.nat_gateway_id == nat_gateway_id
+ - '"state" in first_ngw'
+ - first_ngw.state == 'available'
+ - '"subnet_id" in first_ngw'
+ - first_ngw.subnet_id == subnet_id
+ - '"tags" in first_ngw'
+ - '"vpc_id" in first_ngw'
+ - first_ngw.vpc_id == vpc_id
+ vars:
+ first_ngw: '{{ ngw_info.result[0] }}'
+
+
+ # ============================================================
+ - name: Delete NAT gateway - CHECK_MODE
+ ec2_vpc_nat_gateway:
+ nat_gateway_id: '{{ nat_gateway_id }}'
+ state: absent
+ wait: yes
+ register: delete_nat_gateway
+ check_mode: yes
+
+ - name: Assert state=absent (expected changed=true) - CHECK_MODE
+ assert:
+ that:
+ - delete_nat_gateway.changed
+
+ - name: Delete NAT gateway
+ ec2_vpc_nat_gateway:
+ nat_gateway_id: '{{ nat_gateway_id }}'
+ state: absent
+ wait: yes
+ register: delete_nat_gateway
+
+ - name: Assert state=absent (expected changed=true)
+ assert:
+ that:
+ - delete_nat_gateway.changed
+ - '"delete_time" in delete_nat_gateway'
+ - '"nat_gateway_addresses" in delete_nat_gateway'
+ - '"nat_gateway_id" in delete_nat_gateway'
+ - delete_nat_gateway.nat_gateway_id == nat_gateway_id
+ - '"state" in delete_nat_gateway'
+ - delete_nat_gateway.state in ['deleted', 'deleting']
+ - '"subnet_id" in delete_nat_gateway'
+ - delete_nat_gateway.subnet_id == subnet_id
+ - '"tags" in delete_nat_gateway'
+ - '"vpc_id" in delete_nat_gateway'
+ - delete_nat_gateway.vpc_id == vpc_id
+
+
+ # ============================================================
+ - name: Create new NAT gateway with eip allocation-id and tags - CHECK_MODE
+ ec2_vpc_nat_gateway:
+ subnet_id: '{{ subnet_id }}'
+ allocation_id: '{{ allocation_id }}'
+ tags:
+ tag_one: '{{ resource_prefix }} One'
+ Tag Two: two {{ resource_prefix }}
+ wait: yes
+ register: create_ngw
+ check_mode: yes
+
+ - name: Assert creation would happen (expected changed=true) - CHECK_MODE
+ assert:
+ that:
+ - create_ngw.changed
+
+ - name: Create new NAT gateway with eip allocation-id and tags
+ ec2_vpc_nat_gateway:
+ subnet_id: '{{ subnet_id }}'
+ allocation_id: '{{ allocation_id }}'
+ tags:
+ tag_one: '{{ resource_prefix }} One'
+ Tag Two: two {{ resource_prefix }}
+ wait: yes
+ register: create_ngw
+
+ - name: Assert creation happened (expected changed=true)
+ assert:
+ that:
+ - create_ngw.changed
+ - '"create_time" in create_ngw'
+ - create_ngw.nat_gateway_addresses[0].allocation_id == allocation_id
+ - '"nat_gateway_id" in create_ngw'
+ - create_ngw.nat_gateway_id.startswith("nat-")
+ - '"state" in create_ngw'
+ - create_ngw.state == 'available'
+ - '"subnet_id" in create_ngw'
+ - create_ngw.subnet_id == subnet_id
+ - '"tags" in create_ngw'
+ - create_ngw.tags | length == 2
+ - create_ngw.tags["tag_one"] == '{{ resource_prefix }} One'
+ - create_ngw.tags["Tag Two"] == 'two {{ resource_prefix }}'
+ - '"vpc_id" in create_ngw'
+ - create_ngw.vpc_id == vpc_id
+
+ - name: 'Set facts: NAT gateway ID'
+ set_fact:
+ ngw_id: '{{ create_ngw.nat_gateway_id }}'
+
+
+ # ============================================================
+ - name: Update the tags (no change) - CHECK_MODE
+ ec2_vpc_nat_gateway:
+ subnet_id: '{{ subnet_id }}'
+ allocation_id: '{{ allocation_id }}'
+ tags:
+ tag_one: '{{ resource_prefix }} One'
+ Tag Two: two {{ resource_prefix }}
+ wait: yes
+ register: update_tags_ngw
+ check_mode: yes
+
+ - name: Assert tag update would do nothing (expected changed=false) - CHECK_MODE
+ assert:
+ that:
+ - not update_tags_ngw.changed
+ - '"nat_gateway_id" in update_tags_ngw'
+ - update_tags_ngw.nat_gateway_id == ngw_id
+ - '"subnet_id" in update_tags_ngw'
+ - update_tags_ngw.subnet_id == subnet_id
+ - '"tags" in update_tags_ngw'
+ - update_tags_ngw.tags | length == 2
+ - update_tags_ngw.tags["tag_one"] == '{{ resource_prefix }} One'
+ - update_tags_ngw.tags["Tag Two"] == 'two {{ resource_prefix }}'
+ - '"vpc_id" in update_tags_ngw'
+ - update_tags_ngw.vpc_id == vpc_id
+
+ - name: Update the tags (no change)
+ ec2_vpc_nat_gateway:
+ subnet_id: '{{ subnet_id }}'
+ allocation_id: '{{ allocation_id }}'
+ tags:
+ tag_one: '{{ resource_prefix }} One'
+ Tag Two: two {{ resource_prefix }}
+ wait: yes
+ register: update_tags_ngw
+
+ - name: Assert tag update did nothing (expected changed=false)
+ assert:
+ that:
+ - not update_tags_ngw.changed
+ - '"nat_gateway_id" in update_tags_ngw'
+ - update_tags_ngw.nat_gateway_id == ngw_id
+ - '"subnet_id" in update_tags_ngw'
+ - update_tags_ngw.subnet_id == subnet_id
+ - '"tags" in update_tags_ngw'
+ - update_tags_ngw.tags | length == 2
+ - update_tags_ngw.tags["tag_one"] == '{{ resource_prefix }} One'
+ - update_tags_ngw.tags["Tag Two"] == 'two {{ resource_prefix }}'
+ - '"vpc_id" in update_tags_ngw'
+ - update_tags_ngw.vpc_id == vpc_id
+
+
+ # ============================================================
+ - name: Gather information about a filtered list of NAT Gateways using tags and state - CHECK_MODE
+ ec2_vpc_nat_gateway_info:
+ filters:
+ tag:Tag Two: two {{ resource_prefix }}
+ state: [available]
+ register: ngw_info
+ check_mode: yes
+
+ - name: Assert success - CHECK_MODE
+ assert:
+ that:
+ - ngw_info is successful
+ - ngw_info.result | length == 1
+ - '"create_time" in second_ngw'
+ - '"nat_gateway_addresses" in second_ngw'
+ - '"nat_gateway_id" in second_ngw'
+ - second_ngw.nat_gateway_id == ngw_id
+ - '"state" in second_ngw'
+ - second_ngw.state == 'available'
+ - '"subnet_id" in second_ngw'
+ - second_ngw.subnet_id == subnet_id
+ - '"tags" in second_ngw'
+ - second_ngw.tags | length == 2
+ - '"tag_one" in second_ngw.tags'
+ - '"Tag Two" in second_ngw.tags'
+ - second_ngw.tags["tag_one"] == '{{ resource_prefix }} One'
+ - second_ngw.tags["Tag Two"] == 'two {{ resource_prefix }}'
+ - '"vpc_id" in second_ngw'
+ - second_ngw.vpc_id == vpc_id
+ vars:
+ second_ngw: '{{ ngw_info.result[0] }}'
+
+ - name: Gather information about a filtered list of NAT Gateways using tags and state
+ ec2_vpc_nat_gateway_info:
+ filters:
+ tag:Tag Two: two {{ resource_prefix }}
+ state: [available]
+ register: ngw_info
+
+ - name: Assert success
+ assert:
+ that:
+ - ngw_info is successful
+ - ngw_info.result | length == 1
+ - '"create_time" in second_ngw'
+ - '"nat_gateway_addresses" in second_ngw'
+ - '"nat_gateway_id" in second_ngw'
+ - second_ngw.nat_gateway_id == ngw_id
+ - '"state" in second_ngw'
+ - second_ngw.state == 'available'
+ - '"subnet_id" in second_ngw'
+ - second_ngw.subnet_id == subnet_id
+ - '"tags" in second_ngw'
+ - second_ngw.tags | length == 2
+ - '"tag_one" in second_ngw.tags'
+ - '"Tag Two" in second_ngw.tags'
+ - second_ngw.tags["tag_one"] == '{{ resource_prefix }} One'
+ - second_ngw.tags["Tag Two"] == 'two {{ resource_prefix }}'
+ - '"vpc_id" in second_ngw'
+ - second_ngw.vpc_id == vpc_id
+ vars:
+ second_ngw: '{{ ngw_info.result[0] }}'
+
+
+ # ============================================================
+ - name: Update the tags (remove and add) - CHECK_MODE
+ ec2_vpc_nat_gateway:
+ subnet_id: '{{ subnet_id }}'
+ allocation_id: '{{ allocation_id }}'
+ tags:
+ tag_three: '{{ resource_prefix }} Three'
+ Tag Two: two {{ resource_prefix }}
+ wait: yes
+ register: update_tags_ngw
+ check_mode: yes
+
+ - name: Assert tag update would happen (expected changed=true) - CHECK_MODE
+ assert:
+ that:
+ - update_tags_ngw.changed
+ - '"nat_gateway_id" in update_tags_ngw'
+ - update_tags_ngw.nat_gateway_id == ngw_id
+ - '"subnet_id" in update_tags_ngw'
+ - update_tags_ngw.subnet_id == subnet_id
+ - '"tags" in update_tags_ngw'
+ - '"vpc_id" in update_tags_ngw'
+ - update_tags_ngw.vpc_id == vpc_id
+
+ - name: Update the tags (remove and add)
+ ec2_vpc_nat_gateway:
+ subnet_id: '{{ subnet_id }}'
+ allocation_id: '{{ allocation_id }}'
+ tags:
+ tag_three: '{{ resource_prefix }} Three'
+ Tag Two: two {{ resource_prefix }}
+ wait: yes
+ register: update_tags_ngw
+
+ - name: Assert tag update happened (expected changed=true)
+ assert:
+ that:
+ - update_tags_ngw.changed
+ - '"nat_gateway_id" in update_tags_ngw'
+ - update_tags_ngw.nat_gateway_id == ngw_id
+ - '"subnet_id" in update_tags_ngw'
+ - update_tags_ngw.subnet_id == subnet_id
+ - '"tags" in update_tags_ngw'
+ - update_tags_ngw.tags | length == 2
+ - update_tags_ngw.tags["tag_three"] == '{{ resource_prefix }} Three'
+ - update_tags_ngw.tags["Tag Two"] == 'two {{ resource_prefix }}'
+ - '"vpc_id" in update_tags_ngw'
+ - update_tags_ngw.vpc_id == vpc_id
+
+
+ # ============================================================
+ - name: Gather information about a filtered list of NAT Gateways using tags and state (no match) - CHECK_MODE
+ ec2_vpc_nat_gateway_info:
+ filters:
+ tag:tag_one: '{{ resource_prefix }} One'
+ state: [available]
+ register: ngw_info
+ check_mode: yes
+
+ - name: Assert success - CHECK_MODE
+ assert:
+ that:
+ - ngw_info is successful
+ - ngw_info.result | length == 0
+
+ - name: Gather information about a filtered list of NAT Gateways using tags and
+ state (no match)
+ ec2_vpc_nat_gateway_info:
+ filters:
+ tag:tag_one: '{{ resource_prefix }} One'
+ state: [available]
+ register: ngw_info
+
+ - name: Assert success
+ assert:
+ that:
+ - ngw_info is successful
+ - ngw_info.result | length == 0
+
+
+ # ============================================================
+ - name: Update the tags add without purge - CHECK_MODE
+ ec2_vpc_nat_gateway:
+ if_exist_do_not_create: yes
+ subnet_id: '{{ subnet_id }}'
+ allocation_id: '{{ allocation_id }}'
+ purge_tags: no
+ tags:
+ tag_one: '{{ resource_prefix }} One'
+ wait: yes
+ register: update_tags_ngw
+ check_mode: yes
+
+ - name: Assert tags would be added - CHECK_MODE
+ assert:
+ that:
+ - update_tags_ngw.changed
+ - '"nat_gateway_id" in update_tags_ngw'
+ - update_tags_ngw.nat_gateway_id == ngw_id
+ - '"subnet_id" in update_tags_ngw'
+ - update_tags_ngw.subnet_id == subnet_id
+ - '"tags" in update_tags_ngw'
+ - '"vpc_id" in update_tags_ngw'
+ - update_tags_ngw.vpc_id == vpc_id
+
+ - name: Update the tags add without purge
+ ec2_vpc_nat_gateway:
+ if_exist_do_not_create: yes
+ subnet_id: '{{ subnet_id }}'
+ allocation_id: '{{ allocation_id }}'
+ purge_tags: no
+ tags:
+ tag_one: '{{ resource_prefix }} One'
+ wait: yes
+ register: update_tags_ngw
+
+ - name: Assert tags were added
+ assert:
+ that:
+ - update_tags_ngw.changed
+ - '"nat_gateway_id" in update_tags_ngw'
+ - update_tags_ngw.nat_gateway_id == ngw_id
+ - '"subnet_id" in update_tags_ngw'
+ - update_tags_ngw.subnet_id == subnet_id
+ - '"tags" in update_tags_ngw'
+ - update_tags_ngw.tags | length == 3
+ - update_tags_ngw.tags["tag_one"] == '{{ resource_prefix }} One'
+ - update_tags_ngw.tags["tag_three"] == '{{ resource_prefix }} Three'
+ - update_tags_ngw.tags["Tag Two"] == 'two {{ resource_prefix }}'
+ - '"vpc_id" in update_tags_ngw'
+ - update_tags_ngw.vpc_id == vpc_id
+
+
+ # ============================================================
+ - name: Remove all tags - CHECK_MODE
+ ec2_vpc_nat_gateway:
+ subnet_id: '{{ subnet_id }}'
+ allocation_id: '{{ allocation_id }}'
+ tags: {}
+ register: delete_tags_ngw
+ check_mode: yes
+
+ - name: Assert tags would be removed - CHECK_MODE
+ assert:
+ that:
+ - delete_tags_ngw.changed
+ - '"nat_gateway_id" in delete_tags_ngw'
+ - delete_tags_ngw.nat_gateway_id == ngw_id
+ - '"subnet_id" in delete_tags_ngw'
+ - delete_tags_ngw.subnet_id == subnet_id
+ - '"tags" in delete_tags_ngw'
+ - '"vpc_id" in delete_tags_ngw'
+ - delete_tags_ngw.vpc_id == vpc_id
+
+ - name: Remove all tags
+ ec2_vpc_nat_gateway:
+ subnet_id: '{{ subnet_id }}'
+ allocation_id: '{{ allocation_id }}'
+ tags: {}
+ register: delete_tags_ngw
+
+ - name: Assert tags were removed
+ assert:
+ that:
+ - delete_tags_ngw.changed
+ - '"nat_gateway_id" in delete_tags_ngw'
+ - delete_tags_ngw.nat_gateway_id == ngw_id
+ - '"subnet_id" in delete_tags_ngw'
+ - delete_tags_ngw.subnet_id == subnet_id
+ - '"tags" in delete_tags_ngw'
+ - delete_tags_ngw.tags | length == 0
+ - '"vpc_id" in delete_tags_ngw'
+ - delete_tags_ngw.vpc_id == vpc_id
+
+
+ # ============================================================
+ - name: Update with CamelCase tags - CHECK_MODE
+ ec2_vpc_nat_gateway:
+ if_exist_do_not_create: yes
+ subnet_id: '{{ subnet_id }}'
+ allocation_id: '{{ allocation_id }}'
+ purge_tags: no
+ tags:
+ lowercase spaced: "hello cruel world"
+ Title Case: "Hello Cruel World"
+ CamelCase: "SimpleCamelCase"
+ snake_case: "simple_snake_case"
+ wait: yes
+ register: update_tags_ngw
+ check_mode: yes
+
+ - name: Assert tags would be added - CHECK_MODE
+ assert:
+ that:
+ - update_tags_ngw.changed
+ - '"nat_gateway_id" in update_tags_ngw'
+ - update_tags_ngw.nat_gateway_id == ngw_id
+ - '"subnet_id" in update_tags_ngw'
+ - update_tags_ngw.subnet_id == subnet_id
+ - '"tags" in update_tags_ngw'
+ - '"vpc_id" in update_tags_ngw'
+ - update_tags_ngw.vpc_id == vpc_id
+
+ - name: Update with CamelCase tags
+ ec2_vpc_nat_gateway:
+ if_exist_do_not_create: yes
+ subnet_id: '{{ subnet_id }}'
+ allocation_id: '{{ allocation_id }}'
+ purge_tags: no
+ tags:
+ lowercase spaced: "hello cruel world"
+ Title Case: "Hello Cruel World"
+ CamelCase: "SimpleCamelCase"
+ snake_case: "simple_snake_case"
+ wait: yes
+ register: update_tags_ngw
+
+ - name: Assert tags were added
+ assert:
+ that:
+ - update_tags_ngw.changed
+ - '"nat_gateway_id" in update_tags_ngw'
+ - update_tags_ngw.nat_gateway_id == ngw_id
+ - '"subnet_id" in update_tags_ngw'
+ - update_tags_ngw.subnet_id == subnet_id
+ - '"tags" in update_tags_ngw'
+ - update_tags_ngw.tags | length == 4
+ - update_tags_ngw.tags["lowercase spaced"] == 'hello cruel world'
+ - update_tags_ngw.tags["Title Case"] == 'Hello Cruel World'
+ - update_tags_ngw.tags["CamelCase"] == 'SimpleCamelCase'
+ - update_tags_ngw.tags["snake_case"] == 'simple_snake_case'
+ - '"vpc_id" in update_tags_ngw'
+ - update_tags_ngw.vpc_id == vpc_id
+
+
+ # ============================================================
+ always:
+ - name: Get NAT gateways
+ ec2_vpc_nat_gateway_info:
+ filters:
+ vpc-id: '{{ vpc_id }}'
+ state: [available]
+ register: existing_ngws
+ ignore_errors: true
+
+ - name: Tidy up NAT gateway
+ ec2_vpc_nat_gateway:
+ subnet_id: '{{ item.subnet_id }}'
+ nat_gateway_id: '{{ item.nat_gateway_id }}'
+ release_eip: yes
+ state: absent
+ wait: yes
+ with_items: '{{ existing_ngws.result }}'
+ ignore_errors: true
+
+ - name: Delete IGW
+ ec2_vpc_igw:
+ vpc_id: '{{ vpc_id }}'
+ state: absent
+ ignore_errors: true
+
+ - name: Remove subnet
+ ec2_vpc_subnet:
+ state: absent
+ cidr: '{{ subnet_cidr }}'
+ vpc_id: '{{ vpc_id }}'
+ ignore_errors: true
+
+ - name: Ensure EIP is actually released
+ ec2_eip:
+ state: absent
+ device_id: '{{ item.nat_gateway_addresses[0].network_interface_id }}'
+ in_vpc: yes
+ with_items: '{{ existing_ngws.result }}'
+ ignore_errors: yes
+
+ - name: Delete VPC
+ ec2_vpc_net:
+ name: '{{ vpc_name }}'
+ cidr_block: '{{ vpc_cidr }}'
+ state: absent
+ purge_cidrs: yes
+ ignore_errors: yes
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_net/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_net/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_net/aliases 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_net/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -1,3 +1,2 @@
ec2_vpc_net_info
cloud/aws
-shippable/aws/group1
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_net/tasks/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_net/tasks/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_net/tasks/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_net/tasks/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -28,24 +28,6 @@
# ============================================================
- - name: attempt to create a VPC without providing connnection information
- module_defaults: { group/aws: {} }
- ec2_vpc_net:
- state: present
- cidr_block: "{{ vpc_cidr }}"
- name: "{{ resource_prefix }}"
- region: us-east-1
- ignore_errors: yes
- register: result
-
- - name: assert connection failure
- assert:
- that:
- - result is failed
- - '"Unable to locate credentials" in result.msg'
-
- # ============================================================
-
- name: Fetch existing VPC info
ec2_vpc_net_info:
filters:
@@ -174,7 +156,7 @@
- name: Test that our new VPC shows up in the results
assert:
that:
- - vpc_1 in ( vpc_info | community.general.json_query("vpcs[].vpc_id") | list )
+ - vpc_1 in ( vpc_info.vpcs | map(attribute="vpc_id") | list )
- name: VPC info (Simple tag filter)
ec2_vpc_net_info:
@@ -223,7 +205,6 @@
- name: Assert no changes made
assert:
that:
- - '"Only one IPv6 CIDR is permitted per VPC, {{ result.vpc.id }} already has CIDR {{ vpc_1_ipv6_cidr }}" in result.warnings'
- result is not changed
- vpc_info.vpcs | length == 1
@@ -789,17 +770,17 @@
# - result.vpc.id == vpc_1
# - vpc_info.vpcs | length == 1
# - vpc_info.vpcs[0].cidr_block == vpc_cidr
- # - vpc_cidr in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- # - vpc_cidr_a not in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- # - vpc_cidr_b not in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
+ # - vpc_cidr in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
+ # - vpc_cidr_a not in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
+ # - vpc_cidr_b not in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
# - vpc_info.vpcs[0].cidr_block_association_set | length == 1
# - vpc_info.vpcs[0].cidr_block_association_set[0].association_id.startswith("vpc-cidr-assoc-")
# - vpc_info.vpcs[0].cidr_block_association_set[1].association_id.startswith("vpc-cidr-assoc-")
# - vpc_info.vpcs[0].cidr_block_association_set[0].cidr_block_state.state in ["associated", "associating"]
# - vpc_info.vpcs[0].cidr_block_association_set[1].cidr_block_state.state in ["associated", "associating"]
- # - vpc_cidr in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- # - vpc_cidr_a not in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- # - vpc_cidr_b not in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
+ # - vpc_cidr in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
+ # - vpc_cidr_a not in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
+ # - vpc_cidr_b not in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
- name: modify CIDR
ec2_vpc_net:
@@ -828,17 +809,17 @@
- result.vpc.cidr_block_association_set[1].association_id.startswith("vpc-cidr-assoc-")
- result.vpc.cidr_block_association_set[0].cidr_block_state.state in ["associated", "associating"]
- result.vpc.cidr_block_association_set[1].cidr_block_state.state in ["associated", "associating"]
- - vpc_cidr in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_a in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_b not in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
+ - vpc_cidr in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_a in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_b not in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
- vpc_info.vpcs[0].cidr_block_association_set | length == 2
- vpc_info.vpcs[0].cidr_block_association_set[0].association_id.startswith("vpc-cidr-assoc-")
- vpc_info.vpcs[0].cidr_block_association_set[1].association_id.startswith("vpc-cidr-assoc-")
- vpc_info.vpcs[0].cidr_block_association_set[0].cidr_block_state.state in ["associated", "associating"]
- vpc_info.vpcs[0].cidr_block_association_set[1].cidr_block_state.state in ["associated", "associating"]
- - vpc_cidr in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_a in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_b not in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
+ - vpc_cidr in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_a in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_b not in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
- name: modify CIDR (no change)
ec2_vpc_net:
@@ -867,17 +848,17 @@
- result.vpc.cidr_block_association_set[1].association_id.startswith("vpc-cidr-assoc-")
- result.vpc.cidr_block_association_set[0].cidr_block_state.state in ["associated", "associating"]
- result.vpc.cidr_block_association_set[1].cidr_block_state.state in ["associated", "associating"]
- - vpc_cidr in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_a in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_b not in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
+ - vpc_cidr in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_a in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_b not in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
- vpc_info.vpcs[0].cidr_block_association_set | length == 2
- vpc_info.vpcs[0].cidr_block_association_set[0].association_id.startswith("vpc-cidr-assoc-")
- vpc_info.vpcs[0].cidr_block_association_set[1].association_id.startswith("vpc-cidr-assoc-")
- vpc_info.vpcs[0].cidr_block_association_set[0].cidr_block_state.state in ["associated", "associating"]
- vpc_info.vpcs[0].cidr_block_association_set[1].cidr_block_state.state in ["associated", "associating"]
- - vpc_cidr in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_a in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_b not in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
+ - vpc_cidr in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_a in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_b not in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
# #62678
#- name: modify CIDR - no purge (check mode)
@@ -901,17 +882,17 @@
# - result is changed
# - vpc_info.vpcs | length == 1
# - vpc_info.vpcs[0].cidr_block == vpc_cidr
- # - vpc_cidr in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- # - vpc_cidr_a in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- # - vpc_cidr_b not in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
+ # - vpc_cidr in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
+ # - vpc_cidr_a in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
+ # - vpc_cidr_b not in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
# - vpc_info.vpcs[0].cidr_block_association_set | length == 2
# - vpc_info.vpcs[0].cidr_block_association_set[0].association_id.startswith("vpc-cidr-assoc-")
# - vpc_info.vpcs[0].cidr_block_association_set[1].association_id.startswith("vpc-cidr-assoc-")
# - vpc_info.vpcs[0].cidr_block_association_set[0].cidr_block_state.state in ["associated", "associating"]
# - vpc_info.vpcs[0].cidr_block_association_set[1].cidr_block_state.state in ["associated", "associating"]
- # - vpc_cidr in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- # - vpc_cidr_a in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- # - vpc_cidr_b not in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
+ # - vpc_cidr in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
+ # - vpc_cidr_a in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
+ # - vpc_cidr_b not in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
- name: modify CIDR - no purge
ec2_vpc_net:
@@ -942,9 +923,9 @@
- result.vpc.cidr_block_association_set[0].cidr_block_state.state in ["associated", "associating"]
- result.vpc.cidr_block_association_set[1].cidr_block_state.state in ["associated", "associating"]
- result.vpc.cidr_block_association_set[2].cidr_block_state.state in ["associated", "associating"]
- - vpc_cidr in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_a in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_b in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
+ - vpc_cidr in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_a in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_b in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
- vpc_info.vpcs[0].cidr_block_association_set | length == 3
- vpc_info.vpcs[0].cidr_block_association_set[0].association_id.startswith("vpc-cidr-assoc-")
- vpc_info.vpcs[0].cidr_block_association_set[1].association_id.startswith("vpc-cidr-assoc-")
@@ -952,9 +933,9 @@
- vpc_info.vpcs[0].cidr_block_association_set[0].cidr_block_state.state in ["associated", "associating"]
- vpc_info.vpcs[0].cidr_block_association_set[1].cidr_block_state.state in ["associated", "associating"]
- vpc_info.vpcs[0].cidr_block_association_set[2].cidr_block_state.state in ["associated", "associating"]
- - vpc_cidr in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_a in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_b in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
+ - vpc_cidr in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_a in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_b in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
- name: modify CIDR - no purge (no change)
ec2_vpc_net:
@@ -984,9 +965,9 @@
- result.vpc.cidr_block_association_set[0].cidr_block_state.state in ["associated", "associating"]
- result.vpc.cidr_block_association_set[1].cidr_block_state.state in ["associated", "associating"]
- result.vpc.cidr_block_association_set[2].cidr_block_state.state in ["associated", "associating"]
- - vpc_cidr in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_a in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_b in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
+ - vpc_cidr in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_a in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_b in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
- vpc_info.vpcs[0].cidr_block_association_set | length == 3
- vpc_info.vpcs[0].cidr_block_association_set[0].association_id.startswith("vpc-cidr-assoc-")
- vpc_info.vpcs[0].cidr_block_association_set[1].association_id.startswith("vpc-cidr-assoc-")
@@ -994,9 +975,9 @@
- vpc_info.vpcs[0].cidr_block_association_set[0].cidr_block_state.state in ["associated", "associating"]
- vpc_info.vpcs[0].cidr_block_association_set[1].cidr_block_state.state in ["associated", "associating"]
- vpc_info.vpcs[0].cidr_block_association_set[2].cidr_block_state.state in ["associated", "associating"]
- - vpc_cidr in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_a in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_b in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
+ - vpc_cidr in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_a in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_b in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
- name: modify CIDR - no purge (no change - list all - check mode)
ec2_vpc_net:
@@ -1027,9 +1008,9 @@
- result.vpc.cidr_block_association_set[0].cidr_block_state.state in ["associated", "associating"]
- result.vpc.cidr_block_association_set[1].cidr_block_state.state in ["associated", "associating"]
- result.vpc.cidr_block_association_set[2].cidr_block_state.state in ["associated", "associating"]
- - vpc_cidr in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_a in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_b in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
+ - vpc_cidr in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_a in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_b in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
- vpc_info.vpcs[0].cidr_block_association_set | length == 3
- vpc_info.vpcs[0].cidr_block_association_set[0].association_id.startswith("vpc-cidr-assoc-")
- vpc_info.vpcs[0].cidr_block_association_set[1].association_id.startswith("vpc-cidr-assoc-")
@@ -1037,9 +1018,9 @@
- vpc_info.vpcs[0].cidr_block_association_set[0].cidr_block_state.state in ["associated", "associating"]
- vpc_info.vpcs[0].cidr_block_association_set[1].cidr_block_state.state in ["associated", "associating"]
- vpc_info.vpcs[0].cidr_block_association_set[2].cidr_block_state.state in ["associated", "associating"]
- - vpc_cidr in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_a in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_b in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
+ - vpc_cidr in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_a in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_b in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
- name: modify CIDR - no purge (no change - list all)
ec2_vpc_net:
@@ -1070,9 +1051,9 @@
- result.vpc.cidr_block_association_set[0].cidr_block_state.state in ["associated", "associating"]
- result.vpc.cidr_block_association_set[1].cidr_block_state.state in ["associated", "associating"]
- result.vpc.cidr_block_association_set[2].cidr_block_state.state in ["associated", "associating"]
- - vpc_cidr in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_a in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_b in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
+ - vpc_cidr in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_a in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_b in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
- vpc_info.vpcs[0].cidr_block_association_set | length == 3
- vpc_info.vpcs[0].cidr_block_association_set[0].association_id.startswith("vpc-cidr-assoc-")
- vpc_info.vpcs[0].cidr_block_association_set[1].association_id.startswith("vpc-cidr-assoc-")
@@ -1080,9 +1061,9 @@
- vpc_info.vpcs[0].cidr_block_association_set[0].cidr_block_state.state in ["associated", "associating"]
- vpc_info.vpcs[0].cidr_block_association_set[1].cidr_block_state.state in ["associated", "associating"]
- vpc_info.vpcs[0].cidr_block_association_set[2].cidr_block_state.state in ["associated", "associating"]
- - vpc_cidr in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_a in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_b in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
+ - vpc_cidr in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_a in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_b in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
- name: modify CIDR - no purge (no change - different order - check mode)
ec2_vpc_net:
@@ -1113,9 +1094,9 @@
- result.vpc.cidr_block_association_set[0].cidr_block_state.state in ["associated", "associating"]
- result.vpc.cidr_block_association_set[1].cidr_block_state.state in ["associated", "associating"]
- result.vpc.cidr_block_association_set[2].cidr_block_state.state in ["associated", "associating"]
- - vpc_cidr in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_a in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_b in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
+ - vpc_cidr in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_a in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_b in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
- vpc_info.vpcs[0].cidr_block_association_set | length == 3
- vpc_info.vpcs[0].cidr_block_association_set[0].association_id.startswith("vpc-cidr-assoc-")
- vpc_info.vpcs[0].cidr_block_association_set[1].association_id.startswith("vpc-cidr-assoc-")
@@ -1123,9 +1104,9 @@
- vpc_info.vpcs[0].cidr_block_association_set[0].cidr_block_state.state in ["associated", "associating"]
- vpc_info.vpcs[0].cidr_block_association_set[1].cidr_block_state.state in ["associated", "associating"]
- vpc_info.vpcs[0].cidr_block_association_set[2].cidr_block_state.state in ["associated", "associating"]
- - vpc_cidr in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_a in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_b in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
+ - vpc_cidr in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_a in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_b in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
- name: modify CIDR - no purge (no change - different order)
ec2_vpc_net:
@@ -1156,9 +1137,9 @@
- result.vpc.cidr_block_association_set[0].cidr_block_state.state in ["associated", "associating"]
- result.vpc.cidr_block_association_set[1].cidr_block_state.state in ["associated", "associating"]
- result.vpc.cidr_block_association_set[2].cidr_block_state.state in ["associated", "associating"]
- - vpc_cidr in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_a in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_b in (result.vpc | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
+ - vpc_cidr in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_a in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_b in (result.vpc.cidr_block_association_set | map(attribute="cidr_block") | list)
- vpc_info.vpcs[0].cidr_block_association_set | length == 3
- vpc_info.vpcs[0].cidr_block_association_set[0].association_id.startswith("vpc-cidr-assoc-")
- vpc_info.vpcs[0].cidr_block_association_set[1].association_id.startswith("vpc-cidr-assoc-")
@@ -1166,9 +1147,9 @@
- vpc_info.vpcs[0].cidr_block_association_set[0].cidr_block_state.state in ["associated", "associating"]
- vpc_info.vpcs[0].cidr_block_association_set[1].cidr_block_state.state in ["associated", "associating"]
- vpc_info.vpcs[0].cidr_block_association_set[2].cidr_block_state.state in ["associated", "associating"]
- - vpc_cidr in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_a in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- - vpc_cidr_b in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
+ - vpc_cidr in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_a in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
+ - vpc_cidr_b in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
# #62678
#- name: modify CIDR - purge (check mode)
@@ -1200,9 +1181,9 @@
# - vpc_info.vpcs[0].cidr_block_association_set[0].cidr_block_state.state in ["associated", "associating"]
# - vpc_info.vpcs[0].cidr_block_association_set[1].cidr_block_state.state in ["associated", "associating"]
# - vpc_info.vpcs[0].cidr_block_association_set[2].cidr_block_state.state in ["associated", "associating"]
- # - vpc_cidr in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- # - vpc_cidr_a in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
- # - vpc_cidr_b in (vpc_info.vpcs[0] | community.general.json_query("cidr_block_association_set[*].cidr_block") | list)
+ # - vpc_cidr in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
+ # - vpc_cidr_a in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
+ # - vpc_cidr_b in (vpc_info.vpcs[0].cidr_block_association_set | map(attribute="cidr_block") | list)
- name: modify CIDR - purge
ec2_vpc_net:
@@ -1219,8 +1200,6 @@
register: vpc_info
- name: assert the CIDRs changed
- vars:
- cidr_query: 'cidr_block_association_set[?cidr_block_state.state == `associated`].cidr_block'
assert:
that:
- result is successful
@@ -1229,14 +1208,14 @@
- vpc_info.vpcs | length == 1
- result.vpc.cidr_block == vpc_cidr
- vpc_info.vpcs[0].cidr_block == vpc_cidr
- - result.vpc | community.general.json_query(cidr_query) | list | length == 2
- - vpc_cidr in (result.vpc | community.general.json_query(cidr_query) | list)
- - vpc_cidr_a not in (result.vpc | community.general.json_query(cidr_query) | list)
- - vpc_cidr_b in (result.vpc | community.general.json_query(cidr_query) | list)
- - vpc_info.vpcs[0] | community.general.json_query(cidr_query) | list | length == 2
- - vpc_cidr in (vpc_info.vpcs[0] | community.general.json_query(cidr_query) | list)
- - vpc_cidr_a not in (vpc_info.vpcs[0] | community.general.json_query(cidr_query) | list)
- - vpc_cidr_b in (vpc_info.vpcs[0] | community.general.json_query(cidr_query) | list)
+ - result.vpc.cidr_block_association_set | selectattr('cidr_block_state.state', 'equalto', 'associated') | map(attribute='cidr_block') | list | length == 2
+ - vpc_cidr in (result.vpc.cidr_block_association_set | selectattr('cidr_block_state.state', 'equalto', 'associated') | map(attribute='cidr_block') | list)
+ - vpc_cidr_a not in (result.vpc.cidr_block_association_set | selectattr('cidr_block_state.state', 'equalto', 'associated') | map(attribute='cidr_block'))
+ - vpc_cidr_b in (result.vpc.cidr_block_association_set | selectattr('cidr_block_state.state', 'equalto', 'associated') | map(attribute='cidr_block'))
+ - vpc_info.vpcs[0].cidr_block_association_set | selectattr('cidr_block_state.state', 'equalto', 'associated') | map(attribute='cidr_block') | list | length == 2
+ - vpc_cidr in (vpc_info.vpcs[0].cidr_block_association_set | selectattr('cidr_block_state.state', 'equalto', 'associated') | map(attribute='cidr_block') | list)
+ - vpc_cidr_a not in (vpc_info.vpcs[0].cidr_block_association_set | selectattr('cidr_block_state.state', 'equalto', 'associated') | map(attribute='cidr_block') | list)
+ - vpc_cidr_b in (vpc_info.vpcs[0].cidr_block_association_set | selectattr('cidr_block_state.state', 'equalto', 'associated') | map(attribute='cidr_block') | list)
- name: modify CIDR - purge (no change)
ec2_vpc_net:
@@ -1253,8 +1232,6 @@
register: vpc_info
- name: assert the CIDRs didn't change
- vars:
- cidr_query: 'cidr_block_association_set[?cidr_block_state.state == `associated`].cidr_block'
assert:
that:
- result is successful
@@ -1263,14 +1240,14 @@
- vpc_info.vpcs | length == 1
- result.vpc.cidr_block == vpc_cidr
- vpc_info.vpcs[0].cidr_block == vpc_cidr
- - result.vpc | community.general.json_query(cidr_query) | list | length == 2
- - vpc_cidr in (result.vpc | community.general.json_query(cidr_query) | list)
- - vpc_cidr_a not in (result.vpc | community.general.json_query(cidr_query) | list)
- - vpc_cidr_b in (result.vpc | community.general.json_query(cidr_query) | list)
- - vpc_info.vpcs[0] | community.general.json_query(cidr_query) | list | length == 2
- - vpc_cidr in (vpc_info.vpcs[0] | community.general.json_query(cidr_query) | list)
- - vpc_cidr_a not in (vpc_info.vpcs[0] | community.general.json_query(cidr_query) | list)
- - vpc_cidr_b in (vpc_info.vpcs[0] | community.general.json_query(cidr_query) | list)
+ - result.vpc.cidr_block_association_set | selectattr('cidr_block_state.state', 'equalto', 'associated') | map(attribute='cidr_block') | list | length == 2
+ - vpc_cidr in (result.vpc.cidr_block_association_set | selectattr('cidr_block_state.state', 'equalto', 'associated') | map(attribute='cidr_block') | list)
+ - vpc_cidr_a not in (result.vpc.cidr_block_association_set | selectattr('cidr_block_state.state', 'equalto', 'associated') | map(attribute='cidr_block') | list)
+ - vpc_cidr_b in (result.vpc.cidr_block_association_set | selectattr('cidr_block_state.state', 'equalto', 'associated') | map(attribute='cidr_block') | list)
+ - vpc_info.vpcs[0].cidr_block_association_set | selectattr('cidr_block_state.state', 'equalto', 'associated') | map(attribute='cidr_block') | list | length == 2
+ - vpc_cidr in (vpc_info.vpcs[0].cidr_block_association_set | selectattr('cidr_block_state.state', 'equalto', 'associated') | map(attribute='cidr_block') | list)
+ - vpc_cidr_a not in (vpc_info.vpcs[0].cidr_block_association_set | selectattr('cidr_block_state.state', 'equalto', 'associated') | map(attribute='cidr_block') | list)
+ - vpc_cidr_b in (vpc_info.vpcs[0].cidr_block_association_set | selectattr('cidr_block_state.state', 'equalto', 'associated') | map(attribute='cidr_block') | list)
# ============================================================
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_route_table/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_route_table/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_route_table/aliases 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_route_table/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,3 @@
+cloud/aws
+
+ec2_vpc_route_table_info
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_route_table/defaults/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_route_table/defaults/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_route_table/defaults/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_route_table/defaults/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,3 @@
+---
+availability_zone_a: '{{ ec2_availability_zone_names[0] }}'
+availability_zone_b: '{{ ec2_availability_zone_names[1] }}'
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_route_table/meta/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_route_table/meta/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_route_table/meta/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_route_table/meta/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,4 @@
+dependencies:
+- prepare_tests
+- setup_ec2
+- setup_ec2_facts
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_route_table/tasks/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_route_table/tasks/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_route_table/tasks/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_route_table/tasks/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,707 @@
+- name: ec2_vpc_route_table integration tests
+ module_defaults:
+ group/aws:
+ aws_access_key: '{{ aws_access_key }}'
+ aws_secret_key: '{{ aws_secret_key }}'
+ security_token: '{{ security_token | default(omit) }}'
+ region: '{{ aws_region }}'
+ block:
+
+ - name: create VPC
+ ec2_vpc_net:
+ cidr_block: 10.228.228.0/22
+ name: '{{ resource_prefix }}_vpc'
+ state: present
+ register: vpc
+ - name: create subnets
+ ec2_vpc_subnet:
+ cidr: '{{ item.cidr }}'
+ az: '{{ item.zone }}'
+ vpc_id: '{{ vpc.vpc.id }}'
+ state: present
+ tags:
+ Public: '{{ item.public|string }}'
+ Name: "{{ (item.public|bool)|ternary('public', 'private') }}-{{ item.zone }}"
+ with_items:
+ - cidr: 10.228.228.0/24
+ zone: '{{ availability_zone_a }}'
+ public: 'True'
+ - cidr: 10.228.229.0/24
+ zone: '{{ availability_zone_b }}'
+ public: 'True'
+ - cidr: 10.228.230.0/24
+ zone: '{{ availability_zone_a }}'
+ public: 'False'
+ - cidr: 10.228.231.0/24
+ zone: '{{ availability_zone_b }}'
+ public: 'False'
+ register: subnets
+ - ec2_vpc_subnet_info:
+ filters:
+ vpc-id: '{{ vpc.vpc.id }}'
+ register: vpc_subnets
+ - set_fact:
+ public_subnets: "{{ (vpc_subnets.subnets| selectattr('tags.Public', 'equalto',\
+ \ 'True')| map(attribute='id')| list) }}"
+ public_cidrs: "{{ (vpc_subnets.subnets| selectattr('tags.Public', 'equalto',\
+ \ 'True')| map(attribute='cidr_block')| list) }}"
+ private_subnets: "{{ (vpc_subnets.subnets| selectattr('tags.Public', 'equalto',\
+ \ 'False')| map(attribute='id')| list) }}"
+ - name: create IGW
+ ec2_vpc_igw:
+ vpc_id: '{{ vpc.vpc.id }}'
+ - name: create NAT GW
+ ec2_vpc_nat_gateway:
+ if_exist_do_not_create: yes
+ wait: yes
+ subnet_id: '{{ subnets.results[0].subnet.id }}'
+ register: nat_gateway
+ - name: CHECK MODE - route table should be created
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ tags:
+ Public: 'true'
+ Name: Public route table
+ check_mode: true
+ register: check_mode_results
+ - name: assert that the public route table would be created
+ assert:
+ that:
+ - check_mode_results.changed
+
+ - name: create public route table
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ tags:
+ Public: 'true'
+ Name: Public route table
+ register: create_public_table
+ - name: assert that public route table has an id
+ assert:
+ that:
+ - create_public_table.changed
+ - create_public_table.route_table.id.startswith('rtb-')
+ - "'Public' in create_public_table.route_table.tags and create_public_table.route_table.tags['Public']\
+ \ == 'true'"
+ - create_public_table.route_table.routes|length == 1
+ - create_public_table.route_table.associations|length == 0
+ - create_public_table.route_table.vpc_id == "{{ vpc.vpc.id }}"
+ - create_public_table.route_table.propagating_vgws|length == 0
+ - create_public_table.route_table.routes|length == 1
+
+ - name: CHECK MODE - route table should already exist
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ tags:
+ Public: 'true'
+ Name: Public route table
+ check_mode: true
+ register: check_mode_results
+ - name: assert the table already exists
+ assert:
+ that:
+ - not check_mode_results.changed
+
+ - name: recreate public route table
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ tags:
+ Public: 'true'
+ Name: Public route table
+ register: recreate_public_route_table
+ - name: assert that public route table did not change
+ assert:
+ that:
+ - not recreate_public_route_table.changed
+ - create_public_table.route_table.id.startswith('rtb-')
+ - "'Public' in create_public_table.route_table.tags and create_public_table.route_table.tags['Public']\
+ \ == 'true'"
+ - create_public_table.route_table.routes|length == 1
+ - create_public_table.route_table.associations|length == 0
+ - create_public_table.route_table.vpc_id == "{{ vpc.vpc.id }}"
+ - create_public_table.route_table.propagating_vgws|length == 0
+ - create_public_table.route_table.routes|length == 1
+
+ - name: CHECK MODE - add route to public route table
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ tags:
+ Public: 'true'
+ Name: Public route table
+ routes:
+ - dest: 0.0.0.0/0
+ gateway_id: igw
+ check_mode: true
+ register: check_mode_results
+ - name: assert a route would be added
+ assert:
+ that:
+ - check_mode_results.changed
+
+ - name: add a route to public route table
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ tags:
+ Public: 'true'
+ Name: Public route table
+ routes:
+ - dest: 0.0.0.0/0
+ gateway_id: igw
+ register: add_routes
+ - name: assert route table contains new route
+ assert:
+ that:
+ - add_routes.changed
+ - add_routes.route_table.routes|length == 2
+ - add_routes.route_table.id.startswith('rtb-')
+ - "'Public' in add_routes.route_table.tags and add_routes.route_table.tags['Public']\
+ \ == 'true'"
+ - add_routes.route_table.routes|length == 2
+ - add_routes.route_table.associations|length == 0
+ - add_routes.route_table.vpc_id == "{{ vpc.vpc.id }}"
+ - add_routes.route_table.propagating_vgws|length == 0
+
+ - name: CHECK MODE - re-add route to public route table
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ tags:
+ Public: 'true'
+ Name: Public route table
+ routes:
+ - dest: 0.0.0.0/0
+ gateway_id: igw
+ check_mode: true
+ register: check_mode_results
+ - name: assert a route would not be added
+ assert:
+ that:
+ - check_mode_results is not changed
+
+ - name: re-add a route to public route table
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ tags:
+ Public: 'true'
+ Name: Public route table
+ routes:
+ - dest: 0.0.0.0/0
+ gateway_id: igw
+ register: add_routes
+ - name: assert route table contains route
+ assert:
+ that:
+ - add_routes is not changed
+ - add_routes.route_table.routes|length == 2
+
+ - name: CHECK MODE - add subnets to public route table
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ tags:
+ Public: 'true'
+ Name: Public route table
+ routes:
+ - dest: 0.0.0.0/0
+ gateway_id: igw
+ subnets: '{{ public_subnets }}'
+ check_mode: true
+ register: check_mode_results
+ - name: assert the subnets would be added to the route table
+ assert:
+ that:
+ - check_mode_results.changed
+
+ - name: add subnets to public route table
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ tags:
+ Public: 'true'
+ Name: Public route table
+ routes:
+ - dest: 0.0.0.0/0
+ gateway_id: igw
+ subnets: '{{ public_subnets }}'
+ register: add_subnets
+ - name: assert route table contains subnets
+ assert:
+ that:
+ - add_subnets.changed
+ - add_subnets.route_table.associations|length == 2
+
+ - name: add a route to public route table
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ tags:
+ Public: 'true'
+ Name: Public route table
+ routes:
+ - dest: 0.0.0.0/0
+ gateway_id: igw
+ register: add_routes
+ - name: CHECK MODE - no routes but purge_routes set to false
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ tags:
+ Public: 'true'
+ Name: Public route table
+ purge_routes: no
+ subnets: '{{ public_subnets }}'
+ check_mode: true
+ register: check_mode_results
+ - name: assert no routes would be removed
+ assert:
+ that:
+ - not check_mode_results.changed
+
+ - name: rerun with purge_routes set to false
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ tags:
+ Public: 'true'
+ Name: Public route table
+ purge_routes: no
+ subnets: '{{ public_subnets }}'
+ register: no_purge_routes
+ - name: assert route table still has routes
+ assert:
+ that:
+ - not no_purge_routes.changed
+ - no_purge_routes.route_table.routes|length == 2
+ - no_purge_routes.route_table.associations|length == 2
+
+ - name: rerun with purge_subnets set to false
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ tags:
+ Public: 'true'
+ Name: Public route table
+ purge_subnets: no
+ routes:
+ - dest: 0.0.0.0/0
+ gateway_id: igw
+ register: no_purge_subnets
+ - name: assert route table still has subnets
+ assert:
+ that:
+ - not no_purge_subnets.changed
+ - no_purge_subnets.route_table.routes|length == 2
+ - no_purge_subnets.route_table.associations|length == 2
+
+ - name: rerun with purge_tags not set (implicitly false)
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ routes:
+ - dest: 0.0.0.0/0
+ gateway_id: igw
+ lookup: id
+ route_table_id: '{{ create_public_table.route_table.id }}'
+ subnets: '{{ public_subnets }}'
+ register: no_purge_tags
+ - name: assert route table still has tags
+ assert:
+ that:
+ - not no_purge_tags.changed
+ - "'Public' in no_purge_tags.route_table.tags and no_purge_tags.route_table.tags['Public']\
+ \ == 'true'"
+
+ - name: CHECK MODE - purge subnets
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ routes:
+ - dest: 0.0.0.0/0
+ gateway_id: igw
+ subnets: []
+ tags:
+ Public: 'true'
+ Name: Public route table
+ check_mode: true
+ register: check_mode_results
+ - name: assert subnets would be removed
+ assert:
+ that:
+ - check_mode_results.changed
+
+ - name: purge subnets
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ routes:
+ - dest: 0.0.0.0/0
+ gateway_id: igw
+ subnets: []
+ tags:
+ Public: 'true'
+ Name: Public route table
+ register: purge_subnets
+ - name: assert purge subnets worked
+ assert:
+ that:
+ - purge_subnets.changed
+ - purge_subnets.route_table.associations|length == 0
+ - purge_subnets.route_table.id == create_public_table.route_table.id
+
+ - name: CHECK MODE - purge routes
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ tags:
+ Public: 'true'
+ Name: Public route table
+ routes: []
+ check_mode: true
+ register: check_mode_results
+ - name: assert routes would be removed
+ assert:
+ that:
+ - check_mode_results.changed
+
+ - name: add subnets by cidr to public route table
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ routes:
+ - dest: 0.0.0.0/0
+ gateway_id: igw
+ subnets: '{{ public_cidrs }}'
+ lookup: id
+ route_table_id: '{{ create_public_table.route_table.id }}'
+ register: add_subnets_cidr
+ - name: assert route table contains subnets added by cidr
+ assert:
+ that:
+ - add_subnets_cidr.changed
+ - add_subnets_cidr.route_table.associations|length == 2
+
+ - name: purge subnets added by cidr
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ routes:
+ - dest: 0.0.0.0/0
+ gateway_id: igw
+ subnets: []
+ lookup: id
+ route_table_id: '{{ create_public_table.route_table.id }}'
+ register: purge_subnets_cidr
+ - name: assert purge subnets added by cidr worked
+ assert:
+ that:
+ - purge_subnets_cidr.changed
+ - purge_subnets_cidr.route_table.associations|length == 0
+
+ - name: add subnets by name to public route table
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ routes:
+ - dest: 0.0.0.0/0
+ gateway_id: igw
+ subnets: '{{ public_subnets }}'
+ lookup: id
+ route_table_id: '{{ create_public_table.route_table.id }}'
+ register: add_subnets_name
+ - name: assert route table contains subnets added by name
+ assert:
+ that:
+ - add_subnets_name.changed
+ - add_subnets_name.route_table.associations|length == 2
+
+ - name: purge subnets added by name
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ routes:
+ - dest: 0.0.0.0/0
+ gateway_id: igw
+ subnets: []
+ lookup: id
+ route_table_id: '{{ create_public_table.route_table.id }}'
+ register: purge_subnets_name
+ - name: assert purge subnets added by name worked
+ assert:
+ that:
+ - purge_subnets_name.changed
+ - purge_subnets_name.route_table.associations|length == 0
+
+ - name: purge routes
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ tags:
+ Public: 'true'
+ Name: Public route table
+ routes: []
+ register: purge_routes
+ - name: assert purge routes worked
+ assert:
+ that:
+ - purge_routes.changed
+ - purge_routes.route_table.routes|length == 1
+ - purge_routes.route_table.id == create_public_table.route_table.id
+
+ - name: CHECK MODE - update tags
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ route_table_id: '{{ create_public_table.route_table.id }}'
+ lookup: id
+ purge_tags: yes
+ tags:
+ Name: Public route table
+ Updated: new_tag
+ check_mode: true
+ register: check_mode_results
+ - name: assert tags would be changed
+ assert:
+ that:
+ - check_mode_results.changed
+
+ - name: update tags
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ route_table_id: '{{ create_public_table.route_table.id }}'
+ lookup: id
+ purge_tags: yes
+ tags:
+ Name: Public route table
+ Updated: new_tag
+ register: update_tags
+ - name: assert update tags worked
+ assert:
+ that:
+ - update_tags.changed
+ - "'Updated' in update_tags.route_table.tags and update_tags.route_table.tags['Updated']\
+ \ == 'new_tag'"
+ - "'Public' not in update_tags.route_table.tags"
+
+ - name: create NAT GW
+ ec2_vpc_nat_gateway:
+ if_exist_do_not_create: yes
+ wait: yes
+ subnet_id: '{{ subnets.results[0].subnet.id }}'
+ register: nat_gateway
+ - name: CHECK MODE - create private route table
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ tags:
+ Public: 'false'
+ Name: Private route table
+ routes:
+ - gateway_id: '{{ nat_gateway.nat_gateway_id }}'
+ dest: 0.0.0.0/0
+ subnets: '{{ private_subnets }}'
+ check_mode: true
+ register: check_mode_results
+ - name: assert the route table would be created
+ assert:
+ that:
+ - check_mode_results.changed
+
+ - name: create private route table
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ tags:
+ Public: 'false'
+ Name: Private route table
+ routes:
+ - gateway_id: '{{ nat_gateway.nat_gateway_id }}'
+ dest: 0.0.0.0/0
+ subnets: '{{ private_subnets }}'
+ register: create_private_table
+ - name: assert creating private route table worked
+ assert:
+ that:
+ - create_private_table.changed
+ - create_private_table.route_table.id != create_public_table.route_table.id
+ - "'Public' in create_private_table.route_table.tags"
+
+ - name: CHECK MODE - destroy public route table by tags
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ state: absent
+ tags:
+ Updated: new_tag
+ Name: Public route table
+ check_mode: true
+ register: check_mode_results
+ - name: assert the route table would be deleted
+ assert:
+ that: check_mode_results.changed
+ - name: destroy public route table by tags
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ state: absent
+ tags:
+ Updated: new_tag
+ Name: Public route table
+ register: destroy_table
+ - name: assert destroy table worked
+ assert:
+ that:
+ - destroy_table.changed
+
+ - name: CHECK MODE - redestroy public route table
+ ec2_vpc_route_table:
+ route_table_id: '{{ create_public_table.route_table.id }}'
+ lookup: id
+ state: absent
+ check_mode: true
+ register: check_mode_results
+ - name: assert the public route table does not exist
+ assert:
+ that:
+ - not check_mode_results.changed
+
+ - name: redestroy public route table
+ ec2_vpc_route_table:
+ route_table_id: '{{ create_public_table.route_table.id }}'
+ lookup: id
+ state: absent
+ register: redestroy_table
+ - name: assert redestroy table worked
+ assert:
+ that:
+ - not redestroy_table.changed
+
+ - name: destroy NAT GW
+ ec2_vpc_nat_gateway:
+ state: absent
+ wait: yes
+ release_eip: yes
+ subnet_id: '{{ subnets.results[0].subnet.id }}'
+ nat_gateway_id: '{{ nat_gateway.nat_gateway_id }}'
+ register: nat_gateway
+ - name: show route table info, get table using route-table-id
+ ec2_vpc_route_table_info:
+ filters:
+ route-table-id: '{{ create_private_table.route_table.id }}'
+ register: route_table_info
+ - name: assert route_table_info has correct attributes
+ assert:
+ that:
+ - '"route_tables" in route_table_info'
+ - route_table_info.route_tables | length == 1
+ - '"id" in route_table_info.route_tables[0]'
+ - '"routes" in route_table_info.route_tables[0]'
+ - '"associations" in route_table_info.route_tables[0]'
+ - '"tags" in route_table_info.route_tables[0]'
+ - '"vpc_id" in route_table_info.route_tables[0]'
+ - route_table_info.route_tables[0].id == create_private_table.route_table.id
+ - '"propagating_vgws" in route_table_info.route_tables[0]'
+
+ - name: show route table info, get table using tags
+ ec2_vpc_route_table_info:
+ filters:
+ tag:Public: 'false'
+ tag:Name: Private route table
+ vpc-id: '{{ vpc.vpc.id }}'
+ register: route_table_info
+ - name: assert route_table_info has correct tags
+ assert:
+ that:
+ - route_table_info.route_tables | length == 1
+ - '"tags" in route_table_info.route_tables[0]'
+ - '"Public" in route_table_info.route_tables[0].tags and route_table_info.route_tables[0].tags["Public"]
+ == "false"'
+ - '"Name" in route_table_info.route_tables[0].tags and route_table_info.route_tables[0].tags["Name"]
+ == "Private route table"'
+
+ - name: create NAT GW
+ ec2_vpc_nat_gateway:
+ if_exist_do_not_create: yes
+ wait: yes
+ subnet_id: '{{ subnets.results[0].subnet.id }}'
+ register: nat_gateway
+ - name: show route table info
+ ec2_vpc_route_table_info:
+ filters:
+ route-table-id: '{{ create_private_table.route_table.id }}'
+ - name: recreate private route table with new NAT GW
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ tags:
+ Public: 'false'
+ Name: Private route table
+ routes:
+ - nat_gateway_id: '{{ nat_gateway.nat_gateway_id }}'
+ dest: 0.0.0.0/0
+ subnets: '{{ private_subnets }}'
+ register: recreate_private_table
+ - name: assert creating private route table worked
+ assert:
+ that:
+ - recreate_private_table.changed
+ - recreate_private_table.route_table.id != create_public_table.route_table.id
+
+ - name: create a VPC endpoint to test ec2_vpc_route_table ignores it
+ ec2_vpc_endpoint:
+ state: present
+ vpc_id: '{{ vpc.vpc.id }}'
+ service: com.amazonaws.{{ aws_region }}.s3
+ route_table_ids:
+ - '{{ recreate_private_table.route_table.route_table_id }}'
+ register: vpc_endpoint
+ - name: purge routes
+ ec2_vpc_route_table:
+ vpc_id: '{{ vpc.vpc.id }}'
+ tags:
+ Public: 'false'
+ Name: Private route table
+ routes:
+ - nat_gateway_id: '{{ nat_gateway.nat_gateway_id }}'
+ dest: 0.0.0.0/0
+ subnets: '{{ private_subnets }}'
+ purge_routes: true
+ register: result
+  - name: Get endpoint info to verify that it wasn't purged from the route table
+ ec2_vpc_endpoint_info:
+ query: endpoints
+ vpc_endpoint_ids:
+ - '{{ vpc_endpoint.result.vpc_endpoint_id }}'
+ register: endpoint_details
+ - name: assert the route table is associated with the VPC endpoint
+ assert:
+ that:
+ - endpoint_details.vpc_endpoints[0].route_table_ids[0] == recreate_private_table.route_table.route_table_id
+
+ always:
+ #############################################################################
+ # TEAR DOWN STARTS HERE
+ #############################################################################
+ - name: remove the VPC endpoint
+ ec2_vpc_endpoint:
+ state: absent
+ vpc_endpoint_id: '{{ vpc_endpoint.result.vpc_endpoint_id }}'
+ when: vpc_endpoint is defined
+ ignore_errors: yes
+ - name: destroy route tables
+ ec2_vpc_route_table:
+ route_table_id: '{{ item.route_table.id }}'
+ lookup: id
+ state: absent
+ with_items:
+ - '{{ create_public_table|default() }}'
+ - '{{ create_private_table|default() }}'
+ when: item and not item.failed
+ ignore_errors: yes
+ - name: destroy NAT GW
+ ec2_vpc_nat_gateway:
+ state: absent
+ wait: yes
+ release_eip: yes
+ subnet_id: '{{ subnets.results[0].subnet.id }}'
+ nat_gateway_id: '{{ nat_gateway.nat_gateway_id }}'
+ ignore_errors: yes
+ - name: destroy IGW
+ ec2_vpc_igw:
+ vpc_id: '{{ vpc.vpc.id }}'
+ state: absent
+ ignore_errors: yes
+ - name: destroy subnets
+ ec2_vpc_subnet:
+ cidr: '{{ item.cidr }}'
+ vpc_id: '{{ vpc.vpc.id }}'
+ state: absent
+ with_items:
+ - cidr: 10.228.228.0/24
+ - cidr: 10.228.229.0/24
+ - cidr: 10.228.230.0/24
+ - cidr: 10.228.231.0/24
+ ignore_errors: yes
+ - name: destroy VPC
+ ec2_vpc_net:
+ cidr_block: 10.228.228.0/22
+ name: '{{ resource_prefix }}_vpc'
+ state: absent
+ ignore_errors: yes
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_subnet/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_subnet/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_subnet/aliases 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_subnet/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -1,3 +1,2 @@
cloud/aws
-shippable/aws/group2
-ec2_vpc_subnet_info
\ No newline at end of file
+ec2_vpc_subnet_info
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_subnet/defaults/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_subnet/defaults/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_subnet/defaults/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_subnet/defaults/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -1,4 +1,6 @@
---
+availability_zone: '{{ ec2_availability_zone_names[0] }}'
+
# defaults file for ec2_vpc_subnet
ec2_vpc_subnet_name: '{{resource_prefix}}'
ec2_vpc_subnet_description: 'Created by ansible integration tests'
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_subnet/meta/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_subnet/meta/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_subnet/meta/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_subnet/meta/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -1,3 +1,4 @@
dependencies:
- prepare_tests
- setup_ec2
+ - setup_ec2_facts
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_subnet/tasks/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_subnet/tasks/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_subnet/tasks/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/ec2_vpc_subnet/tasks/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -6,15 +6,6 @@
security_token: "{{ security_token | default(omit) }}"
region: "{{ aws_region }}"
block:
-
- - name: list available AZs
- aws_az_info:
- register: region_azs
-
- - name: pick an AZ for testing
- set_fact:
- subnet_az: "{{ region_azs.availability_zones[0].zone_name }}"
-
# ============================================================
- name: create a VPC
ec2_vpc_net:
@@ -48,7 +39,7 @@
- name: create subnet (expected changed=true) (CHECK MODE)
ec2_vpc_subnet:
cidr: "{{ subnet_cidr }}"
- az: "{{ subnet_az }}"
+ az: "{{ availability_zone }}"
vpc_id: "{{ vpc_result.vpc.id }}"
tags:
Name: '{{ec2_vpc_subnet_name}}'
@@ -65,7 +56,7 @@
- name: create subnet (expected changed=true)
ec2_vpc_subnet:
cidr: "{{ subnet_cidr }}"
- az: "{{ subnet_az }}"
+ az: "{{ availability_zone }}"
vpc_id: "{{ vpc_result.vpc.id }}"
tags:
Name: '{{ec2_vpc_subnet_name}}'
@@ -93,7 +84,7 @@
- '"assign_ipv6_address_on_creation" in subnet_info'
- 'subnet_info.assign_ipv6_address_on_creation == False'
- '"availability_zone" in subnet_info'
- - 'subnet_info.availability_zone == subnet_az'
+ - 'subnet_info.availability_zone == availability_zone'
- '"available_ip_address_count" in subnet_info'
- '"cidr_block" in subnet_info'
- 'subnet_info.cidr_block == subnet_cidr'
@@ -117,7 +108,7 @@
- name: recreate subnet (expected changed=false) (CHECK MODE)
ec2_vpc_subnet:
cidr: "{{ subnet_cidr }}"
- az: "{{ subnet_az }}"
+ az: "{{ availability_zone }}"
vpc_id: "{{ vpc_result.vpc.id }}"
tags:
Name: '{{ec2_vpc_subnet_name}}'
@@ -134,7 +125,7 @@
- name: recreate subnet (expected changed=false)
ec2_vpc_subnet:
cidr: "{{ subnet_cidr }}"
- az: "{{ subnet_az }}"
+ az: "{{ availability_zone }}"
vpc_id: "{{ vpc_result.vpc.id }}"
tags:
Name: '{{ec2_vpc_subnet_name}}'
@@ -152,7 +143,7 @@
- name: update subnet so instances launched in it are assigned an IP (CHECK MODE)
ec2_vpc_subnet:
cidr: "{{ subnet_cidr }}"
- az: "{{ subnet_az }}"
+ az: "{{ availability_zone }}"
vpc_id: "{{ vpc_result.vpc.id }}"
tags:
Name: '{{ec2_vpc_subnet_name}}'
@@ -170,7 +161,7 @@
- name: update subnet so instances launched in it are assigned an IP
ec2_vpc_subnet:
cidr: "{{ subnet_cidr }}"
- az: "{{ subnet_az }}"
+ az: "{{ availability_zone }}"
vpc_id: "{{ vpc_result.vpc.id }}"
tags:
Name: '{{ec2_vpc_subnet_name}}'
@@ -189,7 +180,7 @@
- name: add invalid ipv6 block to subnet (expected failed)
ec2_vpc_subnet:
cidr: "{{ subnet_cidr }}"
- az: "{{ subnet_az }}"
+ az: "{{ availability_zone }}"
vpc_id: "{{ vpc_result.vpc.id }}"
ipv6_cidr: 2001:db8::/64
tags:
@@ -209,7 +200,7 @@
- name: add a tag (expected changed=true) (CHECK MODE)
ec2_vpc_subnet:
cidr: "{{ subnet_cidr }}"
- az: "{{ subnet_az }}"
+ az: "{{ availability_zone }}"
vpc_id: "{{ vpc_result.vpc.id }}"
tags:
Name: '{{ec2_vpc_subnet_name}}'
@@ -227,7 +218,7 @@
- name: add a tag (expected changed=true)
ec2_vpc_subnet:
cidr: "{{ subnet_cidr }}"
- az: "{{ subnet_az }}"
+ az: "{{ availability_zone }}"
vpc_id: "{{ vpc_result.vpc.id }}"
tags:
Name: '{{ec2_vpc_subnet_name}}'
@@ -262,7 +253,7 @@
- name: remove tags with default purge_tags=true (expected changed=true) (CHECK MODE)
ec2_vpc_subnet:
cidr: "{{ subnet_cidr }}"
- az: "{{ subnet_az }}"
+ az: "{{ availability_zone }}"
vpc_id: "{{ vpc_result.vpc.id }}"
tags:
AnotherTag: SomeValue
@@ -278,7 +269,7 @@
- name: remove tags with default purge_tags=true (expected changed=true)
ec2_vpc_subnet:
cidr: "{{ subnet_cidr }}"
- az: "{{ subnet_az }}"
+ az: "{{ availability_zone }}"
vpc_id: "{{ vpc_result.vpc.id }}"
tags:
AnotherTag: SomeValue
@@ -310,7 +301,7 @@
- name: change tags with purge_tags=false (expected changed=true) (CHECK MODE)
ec2_vpc_subnet:
cidr: "{{ subnet_cidr }}"
- az: "{{ subnet_az }}"
+ az: "{{ availability_zone }}"
vpc_id: "{{ vpc_result.vpc.id }}"
tags:
Name: '{{ec2_vpc_subnet_name}}'
@@ -328,7 +319,7 @@
- name: change tags with purge_tags=false (expected changed=true)
ec2_vpc_subnet:
cidr: "{{ subnet_cidr }}"
- az: "{{ subnet_az }}"
+ az: "{{ availability_zone }}"
vpc_id: "{{ vpc_result.vpc.id }}"
tags:
Name: '{{ec2_vpc_subnet_name}}'
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/aliases 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,4 @@
+# 20+ minutes
+slow
+
+cloud/aws
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/defaults/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/defaults/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/defaults/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/defaults/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,163 @@
+---
+# defaults file for elb_classic_lb
+elb_name: 'ansible-test-{{ tiny_prefix }}'
+
+vpc_cidr: '10.{{ 256 | random(seed=resource_prefix) }}.0.0/16'
+subnet_cidr_1: '10.{{ 256 | random(seed=resource_prefix) }}.1.0/24'
+subnet_cidr_2: '10.{{ 256 | random(seed=resource_prefix) }}.2.0/24'
+subnet_cidr_3: '10.{{ 256 | random(seed=resource_prefix) }}.3.0/24'
+subnet_cidr_4: '10.{{ 256 | random(seed=resource_prefix) }}.4.0/24'
+
+default_tags:
+ snake_case_key: snake_case_value
+ camelCaseKey: camelCaseValue
+ PascalCaseKey: PascalCaseValue
+ "key with spaces": value with spaces
+ "Upper With Spaces": Upper With Spaces
+
+partial_tags:
+ snake_case_key: snake_case_value
+ camelCaseKey: camelCaseValue
+
+updated_tags:
+ updated_snake_case_key: updated_snake_case_value
+ updatedCamelCaseKey: updatedCamelCaseValue
+ UpdatedPascalCaseKey: UpdatedPascalCaseValue
+ "updated key with spaces": updated value with spaces
+ "updated Upper With Spaces": Updated Upper With Spaces
+
+default_listeners:
+ - protocol: http
+ load_balancer_port: 80
+ instance_port: 80
+ - protocol: http
+ load_balancer_port: 8080
+ instance_port: 8080
+ instance_protocol: http
+default_listener_tuples:
+ - [80, 80, "HTTP", "HTTP"]
+ - [8080, 8080, "HTTP", "HTTP"]
+
+purged_listeners:
+ - protocol: http
+ load_balancer_port: 8080
+ instance_port: 8080
+ instance_protocol: http
+purged_listener_tuples:
+ - [8080, 8080, "HTTP", "HTTP"]
+
+updated_listeners:
+ - protocol: http
+ load_balancer_port: 80
+ instance_port: 8181
+ - protocol: http
+ load_balancer_port: 8080
+ instance_port: 8080
+ instance_protocol: http
+updated_listener_tuples:
+ - [80, 8181, "HTTP", "HTTP"]
+ - [8080, 8080, "HTTP", "HTTP"]
+
+unproxied_listener:
+ - protocol: http
+ load_balancer_port: 80
+ instance_port: 8181
+ proxy_protocol: False
+unproxied_listener_tuples:
+ - [80, 8181, "HTTP", "HTTP"]
+
+proxied_listener:
+ - protocol: http
+ load_balancer_port: 80
+ instance_port: 8181
+ proxy_protocol: True
+proxied_listener_tuples:
+ - [80, 8181, "HTTP", "HTTP"]
+
+ssh_listeners:
+ - protocol: tcp
+ load_balancer_port: 22
+ instance_port: 22
+ instance_protocol: tcp
+ssh_listener_tuples:
+ - [22, 22, "TCP", "TCP"]
+
+default_health_check:
+ ping_protocol: http
+ ping_port: 80
+ ping_path: "/index.html"
+ response_timeout: 5
+ interval: 30
+ unhealthy_threshold: 2
+ healthy_threshold: 10
+default_health_check_target: "HTTP:80/index.html"
+
+updated_health_check:
+ ping_protocol: http
+ ping_port: 8181
+ ping_path: "/healthz"
+ response_timeout: 15
+ interval: 42
+ unhealthy_threshold: 7
+ healthy_threshold: 6
+updated_health_check_target: "HTTP:8181/healthz"
+
+nonhttp_health_check:
+ ping_protocol: tcp
+ ping_port: 8282
+ response_timeout: 16
+ interval: 43
+ unhealthy_threshold: 8
+ healthy_threshold: 2
+nonhttp_health_check_target: "TCP:8282"
+
+ssh_health_check:
+ ping_protocol: tcp
+ ping_port: 22
+ response_timeout: 5
+ interval: 10
+ unhealthy_threshold: 2
+ healthy_threshold: 2
+ssh_health_check_target: "TCP:22"
+
+default_idle_timeout: 25
+updated_idle_timeout: 50
+default_drain_timeout: 15
+updated_drain_timeout: 25
+
+app_stickiness:
+ type: application
+ cookie: MyCookie
+ enabled: true
+
+updated_app_stickiness:
+ type: application
+ cookie: AnotherCookie
+
+lb_stickiness:
+ type: loadbalancer
+
+updated_lb_stickiness:
+ type: loadbalancer
+ expiration: 600
+
+# Amazon's SDKs don't provide the list of account IDs; Amazon only provides a
+# web page. If you want to run the tests outside the US regions, you'll need
+# to update this.
+# https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html
+access_log_account_id_map:
+ us-east-1: '127311923021'
+ us-east-2: '033677994240'
+ us-west-1: '027434742980'
+ us-west-2: '797873946194'
+ us-gov-west-1: '048591011584'
+ us-gov-east-1: '190560391635'
+
+access_log_account_id: '{{ access_log_account_id_map[aws_region] }}'
+
+s3_logging_bucket_a: 'ansible-test-{{ tiny_prefix }}-a'
+s3_logging_bucket_b: 'ansible-test-{{ tiny_prefix }}-b'
+default_logging_prefix: 'logs'
+updated_logging_prefix: 'mylogs'
+default_logging_interval: 5
+updated_logging_interval: 60
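The CIDR defaults at the top of this file derive a stable "random" second octet from `resource_prefix` via `256 | random(seed=resource_prefix)`, so every subnet lands inside the VPC's /16 and reruns of the same test job reuse the same ranges. A rough Python sketch of that derivation (Ansible's actual filter internals may differ; the prefix value here is illustrative):

```python
import random

def seeded_octet(seed: str, end: int = 256) -> int:
    # Analogue of `256 | random(seed=...)`: seeding the RNG makes the
    # octet deterministic per resource_prefix, avoiding CIDR collisions
    # between concurrent test runs while staying stable across reruns.
    return random.Random(seed).randrange(0, end)

prefix = "ansible-test-example"        # stand-in for resource_prefix
octet = seeded_octet(prefix)
vpc_cidr = f"10.{octet}.0.0/16"
subnet_cidrs = [f"10.{octet}.{i}.0/24" for i in range(1, 5)]
```

Because the seed is the whole derivation input, `seeded_octet(prefix)` returns the same value every call, which is exactly what lets the teardown tasks reconstruct the same CIDRs for deletion.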
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/meta/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/meta/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/meta/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/meta/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,3 @@
+dependencies:
+ - prepare_tests
+ - setup_ec2
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/basic_internal.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/basic_internal.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/basic_internal.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/basic_internal.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,292 @@
+---
+- block:
+  # For creation, test some basic behaviour
+ - module_defaults:
+ elb_classic_lb:
+ # zones: ['{{ availability_zone_a }}', '{{ availability_zone_b }}']
+ listeners: '{{ default_listeners }}'
+ wait: true
+ scheme: 'internal'
+ subnets: ['{{ subnet_a }}', '{{ subnet_b }}']
+ block:
+ # ============================================================
+ # create test elb with listeners, certificate, and health check
+
+ - name: Create internal ELB (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ register: result
+ check_mode: true
+
+ - assert:
+ that:
+ - result is changed
+ - result.elb.status == "created"
+
+ - name: Create ELB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ register: result
+
+ - assert:
+ that:
+ - result is changed
+ - result.elb.status == "created"
+ - availability_zone_a in result.elb.zones
+ - availability_zone_b in result.elb.zones
+ - subnet_a in result.elb.subnets
+ - subnet_b in result.elb.subnets
+
+ - name: Create internal ELB idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ register: result
+ check_mode: true
+
+ - assert:
+ that:
+ - result is not changed
+ - result.elb.status == "exists"
+
+ - name: Create internal ELB idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ register: result
+
+ - assert:
+ that:
+ - result is not changed
+ - result.elb.status == "exists"
+ - availability_zone_a in result.elb.zones
+ - availability_zone_b in result.elb.zones
+ - subnet_a in result.elb.subnets
+ - subnet_b in result.elb.subnets
+
+ - ec2_eni_info:
+ filters:
+ description: 'ELB {{ elb_name }}'
+ register: info
+
+ - assert:
+ that:
+ - info.network_interfaces | length > 0
+
+ - elb_classic_lb_info:
+ names: ['{{ elb_name }}']
+ register: info
+
+ - assert:
+ that:
+ - info.elbs | length > 0
+
+ # ============================================================
+      # Now that we're outside of creation, we drop the defaults
+ # ============================================================
+
+ - name: Add a subnet - no purge (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ subnets: ['{{ subnet_c }}']
+ register: result
+ check_mode: true
+
+ - assert:
+ that:
+ - result is changed
+ - result.elb.status == "exists"
+
+ - name: Add a subnet - no purge
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ subnets: ['{{ subnet_c }}']
+ register: result
+
+ - assert:
+ that:
+ - result is changed
+ - result.elb.status == "exists"
+ - availability_zone_a in result.elb.zones
+ - availability_zone_b in result.elb.zones
+ - availability_zone_c in result.elb.zones
+ - subnet_a in result.elb.subnets
+ - subnet_b in result.elb.subnets
+ - subnet_c in result.elb.subnets
+
+ - name: Add a subnet - no purge - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ subnets: ['{{ subnet_c }}']
+ register: result
+ check_mode: true
+
+ - assert:
+ that:
+ - result is not changed
+ - result.elb.status == "exists"
+
+ - name: Add a subnet - no purge - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ subnets: ['{{ subnet_c }}']
+ register: result
+
+ - assert:
+ that:
+ - result is not changed
+ - result.elb.status == "exists"
+ - availability_zone_a in result.elb.zones
+ - availability_zone_b in result.elb.zones
+ - availability_zone_c in result.elb.zones
+ - subnet_a in result.elb.subnets
+ - subnet_b in result.elb.subnets
+ - subnet_c in result.elb.subnets
+
+      # While purging, try adding a subnet from the same AZ as one we're
+      # purging. This matters because an ELB can't have two subnets from the
+      # same AZ attached at the same time.
+ - name: Add a subnet - purge (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ subnets: ['{{ subnet_c }}', '{{ subnet_a2 }}']
+ purge_subnets: true
+ register: result
+ check_mode: true
+
+ - assert:
+ that:
+ - result is changed
+ - result.elb.status == "exists"
+
+ - name: Add a subnet - purge
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ subnets: ['{{ subnet_c }}', '{{ subnet_a2 }}']
+ purge_subnets: true
+ register: result
+
+ - assert:
+ that:
+ - result is changed
+ - result.elb.status == "exists"
+ - availability_zone_a in result.elb.zones
+ - availability_zone_b not in result.elb.zones
+ - availability_zone_c in result.elb.zones
+ - subnet_a not in result.elb.subnets
+ - subnet_b not in result.elb.subnets
+ - subnet_c in result.elb.subnets
+ - subnet_a2 in result.elb.subnets
+
+ - name: Add a subnet - purge - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ subnets: ['{{ subnet_c }}', '{{ subnet_a2 }}']
+ purge_subnets: true
+ register: result
+ check_mode: true
+
+ - assert:
+ that:
+ - result is not changed
+ - result.elb.status == "exists"
+
+ - name: Add a subnet - purge - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ subnets: ['{{ subnet_c }}', '{{ subnet_a2 }}']
+ purge_subnets: true
+ register: result
+
+ - assert:
+ that:
+ - result is not changed
+ - result.elb.status == "exists"
+ - availability_zone_a in result.elb.zones
+ - availability_zone_b not in result.elb.zones
+ - availability_zone_c in result.elb.zones
+ - subnet_a not in result.elb.subnets
+ - subnet_b not in result.elb.subnets
+ - subnet_c in result.elb.subnets
+ - subnet_a2 in result.elb.subnets
+
+ # ============================================================
+
+ - name: remove the test load balancer completely (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: absent
+ wait: true
+ register: result
+ check_mode: true
+
+ - name: assert the load balancer would be removed
+ assert:
+ that:
+ - result is changed
+ - 'result.elb.name == "{{ elb_name }}"'
+ - 'result.elb.status == "deleted"'
+
+ - name: remove the test load balancer completely
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: absent
+ wait: true
+ register: result
+
+ - name: assert the load balancer was removed
+ assert:
+ that:
+ - result is changed
+ - 'result.elb.name == "{{ elb_name }}"'
+ - 'result.elb.status == "deleted"'
+
+ - name: remove the test load balancer completely (idempotency) (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: absent
+ wait: true
+ register: result
+ check_mode: true
+
+ - name: assert the load balancer is gone
+ assert:
+ that:
+ - result is not changed
+ - 'result.elb.name == "{{ elb_name }}"'
+ - 'result.elb.status == "gone"'
+
+ - name: remove the test load balancer completely (idempotency)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: absent
+ wait: true
+ register: result
+
+ - name: assert the load balancer is gone
+ assert:
+ that:
+ - result is not changed
+ - 'result.elb.name == "{{ elb_name }}"'
+ - 'result.elb.status == "gone"'
+
+ always:
+
+ # ============================================================
+ - name: remove the test load balancer
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: absent
+ wait: true
+ register: result
+ ignore_errors: true
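Every mutation in this file follows the same four-step idempotency pattern: a check-mode run predicts `changed`, the real run applies it, then check-mode and real reruns both report no change. A toy Python sketch of that contract against a hypothetical "ensure subnets present" operation:

```python
def ensure_subnets(state: set, wanted: set, check_mode: bool = False) -> bool:
    """Toy ensure-present operation: report whether it would change state,
    and apply the change only when not in check mode."""
    changed = not wanted.issubset(state)
    if changed and not check_mode:
        state |= wanted        # mutate in place, like the real module
    return changed

lb_subnets = {"subnet-a", "subnet-b"}
# 1. check_mode predicts the change without applying it
assert ensure_subnets(lb_subnets, {"subnet-c"}, check_mode=True)
assert "subnet-c" not in lb_subnets
# 2. the real run applies it
assert ensure_subnets(lb_subnets, {"subnet-c"})
# 3 and 4. both check_mode and real reruns are now no-ops
assert not ensure_subnets(lb_subnets, {"subnet-c"}, check_mode=True)
assert not ensure_subnets(lb_subnets, {"subnet-c"})
```

Running check mode before and after the real change is what catches both false positives (check mode claiming no change was needed) and false negatives (a "successful" change that didn't converge).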
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/basic_public.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/basic_public.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/basic_public.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/basic_public.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,273 @@
+---
+- block:
+  # For creation, test some basic behaviour
+ - module_defaults:
+ elb_classic_lb:
+ zones: ['{{ availability_zone_a }}', '{{ availability_zone_b }}']
+ listeners: '{{ default_listeners }}'
+ wait: true
+ scheme: 'internet-facing'
+ # subnets: ['{{ subnet_a }}', '{{ subnet_b }}']
+ block:
+ # ============================================================
+ # create test elb with listeners, certificate, and health check
+
+ - name: Create public ELB (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ register: result
+ check_mode: true
+
+ - assert:
+ that:
+ - result is changed
+ - result.elb.status == "created"
+
+ - name: Create public ELB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ register: result
+
+ - assert:
+ that:
+ - result is changed
+ - result.elb.status == "created"
+ - availability_zone_a in result.elb.zones
+ - availability_zone_b in result.elb.zones
+
+ - name: Create public ELB idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ register: result
+ check_mode: true
+
+ - assert:
+ that:
+ - result is not changed
+ - result.elb.status == "exists"
+
+ - name: Create public ELB idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ register: result
+
+ - assert:
+ that:
+ - result is not changed
+ - result.elb.status == "exists"
+ - availability_zone_a in result.elb.zones
+ - availability_zone_b in result.elb.zones
+
+ - ec2_eni_info:
+ filters:
+ description: 'ELB {{ elb_name }}'
+ register: info
+
+ - assert:
+ that:
+ - info.network_interfaces | length > 0
+
+ - elb_classic_lb_info:
+ names: ['{{ elb_name }}']
+ register: info
+
+ - assert:
+ that:
+ - info.elbs | length > 0
+
+ # ============================================================
+      # Now that we're outside of creation, we drop the defaults
+ # ============================================================
+
+ - name: Add a zone - no purge (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ zones: ['{{ availability_zone_c }}']
+ register: result
+ check_mode: true
+
+ - assert:
+ that:
+ - result is changed
+ - result.elb.status == "exists"
+
+ - name: Add a zone - no purge
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ zones: ['{{ availability_zone_c }}']
+ register: result
+
+ - assert:
+ that:
+ - result is changed
+ - result.elb.status == "exists"
+ - availability_zone_a in result.elb.zones
+ - availability_zone_b in result.elb.zones
+ - availability_zone_c in result.elb.zones
+
+ - name: Add a zone - no purge - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ zones: ['{{ availability_zone_c }}']
+ register: result
+ check_mode: true
+
+ - assert:
+ that:
+ - result is not changed
+ - result.elb.status == "exists"
+
+ - name: Add a zone - no purge - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ zones: ['{{ availability_zone_c }}']
+ register: result
+
+ - assert:
+ that:
+ - result is not changed
+ - result.elb.status == "exists"
+ - availability_zone_a in result.elb.zones
+ - availability_zone_b in result.elb.zones
+ - availability_zone_c in result.elb.zones
+
+ # ============================================================
+
+ - name: Remove a zone - purge (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ zones: ['{{ availability_zone_c }}']
+ purge_zones: true
+ register: result
+ check_mode: true
+
+ - assert:
+ that:
+ - result is changed
+ - result.elb.status == "exists"
+
+ - name: Remove a zone - purge
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ zones: ['{{ availability_zone_c }}']
+ purge_zones: true
+ register: result
+
+ - assert:
+ that:
+ - result is changed
+ - result.elb.status == "exists"
+ - availability_zone_a not in result.elb.zones
+ - availability_zone_b not in result.elb.zones
+ - availability_zone_c in result.elb.zones
+
+ - name: Remove a zone - purge - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ zones: ['{{ availability_zone_c }}']
+ purge_zones: true
+ register: result
+ check_mode: true
+
+ - assert:
+ that:
+ - result is not changed
+ - result.elb.status == "exists"
+
+ - name: Remove a zone - purge - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ zones: ['{{ availability_zone_c }}']
+ purge_zones: true
+ register: result
+
+ - assert:
+ that:
+ - result is not changed
+ - result.elb.status == "exists"
+ - availability_zone_a not in result.elb.zones
+ - availability_zone_b not in result.elb.zones
+ - availability_zone_c in result.elb.zones
+
+ # ============================================================
+
+ - name: remove the test load balancer completely (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: absent
+ wait: true
+ register: result
+ check_mode: true
+
+ - name: assert the load balancer would be removed
+ assert:
+ that:
+ - result is changed
+ - 'result.elb.name == "{{ elb_name }}"'
+ - 'result.elb.status == "deleted"'
+
+ - name: remove the test load balancer completely
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: absent
+ wait: true
+ register: result
+
+ - name: assert the load balancer was removed
+ assert:
+ that:
+ - result is changed
+ - 'result.elb.name == "{{ elb_name }}"'
+ - 'result.elb.status == "deleted"'
+
+ - name: remove the test load balancer completely (idempotency) (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: absent
+ wait: true
+ register: result
+ check_mode: true
+
+ - name: assert the load balancer is gone
+ assert:
+ that:
+ - result is not changed
+ - 'result.elb.name == "{{ elb_name }}"'
+ - 'result.elb.status == "gone"'
+
+ - name: remove the test load balancer completely (idempotency)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: absent
+ wait: true
+ register: result
+
+ - name: assert the load balancer is gone
+ assert:
+ that:
+ - result is not changed
+ - 'result.elb.name == "{{ elb_name }}"'
+ - 'result.elb.status == "gone"'
+
+ always:
+
+ # ============================================================
+ - name: remove the test load balancer
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: absent
+ wait: true
+ register: result
+ ignore_errors: true
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/cleanup_instances.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/cleanup_instances.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/cleanup_instances.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/cleanup_instances.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,9 @@
+---
+- name: Delete instance
+ ec2_instance:
+ instance_ids:
+ - '{{ instance_a }}'
+ - '{{ instance_b }}'
+ state: absent
+ wait: true
+ ignore_errors: true
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/cleanup_s3.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/cleanup_s3.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/cleanup_s3.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/cleanup_s3.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,32 @@
+---
+- name: Create empty temporary directory
+ tempfile:
+ state: directory
+ register: tmpdir
+ ignore_errors: true
+
+- name: Empty S3 buckets before deletion
+ s3_sync:
+ bucket: '{{ item }}'
+ delete: true
+ file_root: '{{ tmpdir.path }}'
+ ignore_errors: true
+ loop:
+ - '{{ s3_logging_bucket_a }}'
+ - '{{ s3_logging_bucket_b }}'
+
+- name: Delete S3 bucket for access logs
+ s3_bucket:
+ name: '{{ item }}'
+ state: absent
+ register: logging_bucket
+ ignore_errors: true
+ loop:
+ - '{{ s3_logging_bucket_a }}'
+ - '{{ s3_logging_bucket_b }}'
+
+- name: Remove temporary directory
+ file:
+ state: absent
+ path: "{{ tmpdir.path }}"
+ ignore_errors: yes
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/cleanup_vpc.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/cleanup_vpc.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/cleanup_vpc.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/cleanup_vpc.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,29 @@
+---
+- name: delete security groups
+ ec2_group:
+ name: '{{ item }}'
+ state: absent
+ ignore_errors: true
+ loop:
+ - '{{ resource_prefix }}-a'
+ - '{{ resource_prefix }}-b'
+ - '{{ resource_prefix }}-c'
+
+- name: delete subnets
+ ec2_vpc_subnet:
+ vpc_id: '{{ setup_vpc.vpc.id }}'
+ cidr: '{{ item }}'
+ state: absent
+ ignore_errors: true
+ loop:
+ - '{{ subnet_cidr_1 }}'
+ - '{{ subnet_cidr_2 }}'
+ - '{{ subnet_cidr_3 }}'
+ - '{{ subnet_cidr_4 }}'
+
+- name: delete VPC
+ ec2_vpc_net:
+ cidr_block: '{{ vpc_cidr }}'
+ state: absent
+ name: '{{ resource_prefix }}'
+ ignore_errors: true
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/describe_region.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/describe_region.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/describe_region.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/describe_region.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,10 @@
+---
+- name: list available AZs
+ aws_az_info:
+ register: region_azs
+
+- name: pick AZs for testing
+ set_fact:
+ availability_zone_a: "{{ region_azs.availability_zones[0].zone_name }}"
+ availability_zone_b: "{{ region_azs.availability_zones[1].zone_name }}"
+ availability_zone_c: "{{ region_azs.availability_zones[2].zone_name }}"
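The `set_fact` above implicitly assumes the region exposes at least three availability zones; indexing `availability_zones[2]` fails otherwise. The selection logic, with illustrative AZ data, amounts to:

```python
# Shape of aws_az_info's registered result (zone names are examples)
region_azs = [
    {"zone_name": "us-east-1a"},
    {"zone_name": "us-east-1b"},
    {"zone_name": "us-east-1c"},
    {"zone_name": "us-east-1d"},
]

# The ELB tests juggle three distinct AZs (add, purge, re-add),
# so fail early if the region can't provide them.
assert len(region_azs) >= 3, "these tests need at least three AZs"
zone_a, zone_b, zone_c = (az["zone_name"] for az in region_azs[:3])
```
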
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,54 @@
+---
+# __Test Info__
+# Create a self-signed cert and upload it to AWS
+# http://www.akadia.com/services/ssh_test_certificate.html
+# http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/ssl-server-cert.html
+
+# __Test Outline__
+#
+# __elb_classic_lb__
+# create test elb with listeners and certificate
+# change AZs
+# change listeners
+# remove listeners
+# remove elb
+
+- module_defaults:
+ group/aws:
+ region: "{{ aws_region }}"
+ aws_access_key: "{{ aws_access_key }}"
+ aws_secret_key: "{{ aws_secret_key }}"
+ security_token: "{{ security_token | default(omit) }}"
+ collections:
+ - community.aws
+ - amazon.aws
+ block:
+
+ - include_tasks: missing_params.yml
+
+ - include_tasks: describe_region.yml
+ - include_tasks: setup_vpc.yml
+ - include_tasks: setup_instances.yml
+ - include_tasks: setup_s3.yml
+
+ - include_tasks: basic_public.yml
+ - include_tasks: basic_internal.yml
+ - include_tasks: schema_change.yml
+
+ - include_tasks: simple_changes.yml
+
+ always:
+
+ # ============================================================
+ # ELB should already be gone, but double-check
+ - name: remove the test load balancer
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: absent
+ wait: true
+ register: result
+ ignore_errors: true
+
+ - include_tasks: cleanup_s3.yml
+ - include_tasks: cleanup_instances.yml
+ - include_tasks: cleanup_vpc.yml
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/missing_params.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/missing_params.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/missing_params.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/missing_params.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,203 @@
+---
+# Test behaviour when mandatory params aren't passed
+- block:
+ # ============================================================
+
+ - name: test with no name
+ elb_classic_lb:
+ state: present
+ register: result
+ ignore_errors: true
+
+ - name: assert failure when called with no parameters
+ assert:
+ that:
+ - 'result.failed'
+ - '"missing required arguments" in result.msg'
+ - '"name" in result.msg'
+
+ - name: test with only name (state missing)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ register: result
+ ignore_errors: true
+
+ - name: assert failure when called with only name
+ assert:
+ that:
+ - 'result.failed'
+ - '"missing required arguments" in result.msg'
+ - '"state" in result.msg'
+
+ - elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ scheme: 'internal'
+ listeners:
+ - load_balancer_port: 80
+ instance_port: 80
+ protocol: http
+ register: result
+ ignore_errors: true
+
+ - name: assert failure when neither subnets nor AZs are provided on creation
+ assert:
+ that:
+ - 'result.failed'
+ - '"subnets" in result.msg'
+ - '"zones" in result.msg'
+
+ - elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ scheme: 'internal'
+ subnets: ['subnet-123456789']
+ register: result
+ ignore_errors: true
+
+ - name: assert failure when listeners not provided on creation
+ assert:
+ that:
+ - 'result.failed'
+ - '"listeners" in result.msg'
+
+ - elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ scheme: 'internal'
+ subnets: ['subnet-123456789']
+ listeners:
+ - load_balancer_port: 80
+ instance_port: 80
+ protocol: junk
+ register: result
+ ignore_errors: true
+
+ - name: assert failure when a listener contains an invalid protocol
+ assert:
+ that:
+ - 'result.failed'
+ - '"protocol" in result.msg'
+ - '"junk" in result.msg'
+
+ - elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ scheme: 'internal'
+ subnets: ['subnet-123456789']
+ listeners:
+ - load_balancer_port: 80
+ instance_port: 80
+ protocol: http
+ instance_protocol: junk
+ register: result
+ ignore_errors: true
+
+ - name: assert failure when a listener contains an invalid instance_protocol
+ assert:
+ that:
+ - 'result.failed'
+ - '"protocol" in result.msg'
+ - '"junk" in result.msg'
+
+ - elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ scheme: 'internal'
+ subnets: ['subnet-123456789']
+ listeners:
+ - load_balancer_port: 80
+ instance_port: 80
+ protocol: http
+ health_check:
+ ping_protocol: junk
+ ping_port: 80
+ interval: 5
+ timeout: 5
+ unhealthy_threshold: 5
+ healthy_threshold: 5
+ register: result
+ ignore_errors: true
+
+ - name: assert failure when healthcheck ping_protocol is invalid
+ assert:
+ that:
+ - 'result.failed'
+ - '"protocol" in result.msg'
+ - '"junk" in result.msg'
+
+ - elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ scheme: 'internal'
+ subnets: ['subnet-123456789']
+ listeners:
+ - load_balancer_port: 80
+ instance_port: 80
+ protocol: http
+ health_check:
+ ping_protocol: http
+ ping_port: 80
+ interval: 5
+ timeout: 5
+ unhealthy_threshold: 5
+ healthy_threshold: 5
+ register: result
+ ignore_errors: true
+
+ - name: assert failure when an HTTP healthcheck is missing a ping_path
+ assert:
+ that:
+ - 'result.failed'
+ - '"ping_path" in result.msg'
+
+ - elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ scheme: 'internal'
+ subnets: ['subnet-123456789']
+ listeners:
+ - load_balancer_port: 80
+ instance_port: 80
+ protocol: http
+ stickiness:
+ type: application
+ register: result
+ ignore_errors: true
+
+ - name: assert failure when an application stickiness policy is missing a cookie name
+ assert:
+ that:
+ - 'result.failed'
+ - '"cookie" in result.msg'
+
+ - elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ scheme: 'internal'
+ subnets: ['subnet-123456789']
+ listeners:
+ - load_balancer_port: 80
+ instance_port: 80
+ protocol: http
+ access_logs:
+ interval: 60
+ register: result
+ ignore_errors: true
+
+ - name: assert failure when access log is missing a bucket
+ assert:
+ that:
+ - 'result.failed'
+ - '"s3_location" in result.msg'
+
+ always:
+
+ # ============================================================
+ - name: remove the test load balancer
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: absent
+ wait: true
+ register: result
+ ignore_errors: true
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/schema_change.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/schema_change.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/schema_change.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/schema_change.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,189 @@
+---
+- block:
+ # For creation, test some basic behaviour
+ - module_defaults:
+ elb_classic_lb:
+ zones: ['{{ availability_zone_a }}', '{{ availability_zone_b }}']
+ listeners: '{{ default_listeners }}'
+ wait: true
+ scheme: 'internet-facing'
+ # subnets: ['{{ subnet_a }}', '{{ subnet_b }}']
+ block:
+ - name: Create ELB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ register: result
+
+ - assert:
+ that:
+ - result is changed
+ - result.elb.status == 'created'
+ - result.elb.scheme == 'internet-facing'
+
+ - module_defaults:
+ elb_classic_lb:
+ # zones: ['{{ availability_zone_a }}', '{{ availability_zone_b }}']
+ listeners: '{{ default_listeners }}'
+ wait: true
+ scheme: 'internal'
+ subnets: ['{{ subnet_a }}', '{{ subnet_b }}']
+ block:
+
+ - name: Change scheme to internal (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ register: result
+ check_mode: true
+
+ - assert:
+ that:
+ - result is changed
+
+ - name: Change scheme to internal
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ register: result
+
+ - assert:
+ that:
+ - result is changed
+ - result.elb.scheme == 'internal'
+
+ - name: Change scheme to internal - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ register: result
+ check_mode: true
+
+ - assert:
+ that:
+ - result is not changed
+
+ - name: Change scheme to internal - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ register: result
+
+ - assert:
+ that:
+ - result is not changed
+ - result.elb.scheme == 'internal'
+
+ - name: No scheme specified (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ scheme: '{{ omit }}'
+ register: result
+ check_mode: true
+
+ - assert:
+ that:
+ - result is not changed
+
+ - name: No scheme specified
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ scheme: '{{ omit }}'
+ register: result
+
+ - assert:
+ that:
+ - result is not changed
+ - result.elb.scheme == 'internal'
+
+ # Change the scheme back to internet-facing and re-test the same basic behaviour
+ - module_defaults:
+ elb_classic_lb:
+ zones: ['{{ availability_zone_a }}', '{{ availability_zone_b }}']
+ listeners: '{{ default_listeners }}'
+ health_check: '{{ default_health_check }}'
+ wait: true
+ scheme: 'internet-facing'
+ # subnets: ['{{ subnet_a }}', '{{ subnet_b }}']
+ block:
+
+ - name: Change scheme to internet-facing (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ register: result
+ check_mode: true
+
+ - assert:
+ that:
+ - result is changed
+
+ - name: Change scheme to internet-facing
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ register: result
+
+ - assert:
+ that:
+ - result is changed
+ - result.elb.scheme == 'internet-facing'
+
+ - name: Change scheme to internet-facing - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ register: result
+ check_mode: true
+
+ - assert:
+ that:
+ - result is not changed
+
+ - name: Change scheme to internet-facing - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ register: result
+
+ - assert:
+ that:
+ - result is not changed
+ - result.elb.scheme == 'internet-facing'
+
+ - name: No scheme specified (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ scheme: '{{ omit }}'
+ register: result
+ check_mode: true
+
+ - assert:
+ that:
+ - result is not changed
+
+ - name: No scheme specified
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ scheme: '{{ omit }}'
+ register: result
+
+ - assert:
+ that:
+ - result is not changed
+ - result.elb.scheme == 'internet-facing'
+
+ always:
+
+ # ============================================================
+ - name: remove the test load balancer
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: absent
+ wait: true
+ register: result
+ ignore_errors: true
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/setup_instances.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/setup_instances.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/setup_instances.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/setup_instances.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,33 @@
+---
+- name: Get a list of images
+ ec2_ami_info:
+ filters:
+ owner-alias: amazon
+ name: "amzn2-ami-minimal-hvm-*"
+ description: "Amazon Linux 2 AMI *"
+ register: images_info
+
+- name: Create instance a
+ ec2_instance:
+ name: "ansible-test-{{ tiny_prefix }}-elb-a"
+ image_id: "{{ images_info.images | sort(attribute='creation_date') | reverse | first | json_query('image_id') }}"
+ vpc_subnet_id: "{{ subnet_a }}"
+ instance_type: t2.micro
+ wait: false
+ security_group: "{{ sg_a }}"
+ register: ec2_instance_a
+
+- name: Create instance b
+ ec2_instance:
+ name: "ansible-test-{{ tiny_prefix }}-elb-b"
+ image_id: "{{ images_info.images | sort(attribute='creation_date') | reverse | first | json_query('image_id') }}"
+ vpc_subnet_id: "{{ subnet_b }}"
+ instance_type: t2.micro
+ wait: false
+ security_group: "{{ sg_b }}"
+ register: ec2_instance_b
+
+- name: store the Instance IDs
+ set_fact:
+ instance_a: "{{ ec2_instance_a.instance_ids[0] }}"
+ instance_b: "{{ ec2_instance_b.instance_ids[0] }}"
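+
+# The repeated filter chain above always resolves to the newest matching AMI;
+# a sketch of computing it once instead (the `latest_ami_id` variable name is
+# an assumption, not part of this test suite):
+#
+# ```yaml
+# - name: pick the newest matching AMI once
+#   set_fact:
+#     latest_ami_id: "{{ (images_info.images | sort(attribute='creation_date') | last).image_id }}"
+# ```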
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/setup_s3.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/setup_s3.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/setup_s3.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/setup_s3.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,26 @@
+---
+- name: Create S3 bucket for access logs
+ vars:
+ s3_logging_bucket: '{{ s3_logging_bucket_a }}'
+ s3_bucket:
+ name: '{{ s3_logging_bucket_a }}'
+ state: present
+ policy: "{{ lookup('template','s3_policy.j2') }}"
+ register: logging_bucket
+
+- assert:
+ that:
+ - logging_bucket is changed
+
+- name: Create S3 bucket for access logs
+ vars:
+ s3_logging_bucket: '{{ s3_logging_bucket_b }}'
+ s3_bucket:
+ name: '{{ s3_logging_bucket_b }}'
+ state: present
+ policy: "{{ lookup('template','s3_policy.j2') }}"
+ register: logging_bucket
+
+- assert:
+ that:
+ - logging_bucket is changed
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/setup_vpc.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/setup_vpc.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/setup_vpc.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/setup_vpc.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,103 @@
+---
+# SETUP: vpc, subnet, security group
+- name: create a VPC to work in
+ ec2_vpc_net:
+ cidr_block: '{{ vpc_cidr }}'
+ state: present
+ name: '{{ resource_prefix }}'
+ resource_tags:
+ Name: '{{ resource_prefix }}'
+ register: setup_vpc
+
+- name: create a subnet
+ ec2_vpc_subnet:
+ az: '{{ availability_zone_a }}'
+ tags: '{{ resource_prefix }}'
+ vpc_id: '{{ setup_vpc.vpc.id }}'
+ cidr: '{{ subnet_cidr_1 }}'
+ state: present
+ resource_tags:
+ Name: '{{ resource_prefix }}-a'
+ register: setup_subnet_1
+
+- name: create a subnet
+ ec2_vpc_subnet:
+ az: '{{ availability_zone_b }}'
+ tags: '{{ resource_prefix }}'
+ vpc_id: '{{ setup_vpc.vpc.id }}'
+ cidr: '{{ subnet_cidr_2 }}'
+ state: present
+ resource_tags:
+ Name: '{{ resource_prefix }}-b'
+ register: setup_subnet_2
+
+- name: create a subnet
+ ec2_vpc_subnet:
+ az: '{{ availability_zone_c }}'
+ tags: '{{ resource_prefix }}'
+ vpc_id: '{{ setup_vpc.vpc.id }}'
+ cidr: '{{ subnet_cidr_3 }}'
+ state: present
+ resource_tags:
+ Name: '{{ resource_prefix }}-c'
+ register: setup_subnet_3
+
+- name: create a subnet
+ ec2_vpc_subnet:
+ az: '{{ availability_zone_a }}'
+ tags: '{{ resource_prefix }}'
+ vpc_id: '{{ setup_vpc.vpc.id }}'
+ cidr: '{{ subnet_cidr_4 }}'
+ state: present
+ resource_tags:
+ Name: '{{ resource_prefix }}-a2'
+ register: setup_subnet_4
+
+- name: create a security group
+ ec2_group:
+ name: '{{ resource_prefix }}-a'
+ description: 'created by Ansible integration tests'
+ state: present
+ vpc_id: '{{ setup_vpc.vpc.id }}'
+ rules:
+ - proto: tcp
+ from_port: 22
+ to_port: 22
+ cidr_ip: '{{ vpc_cidr }}'
+ register: setup_sg_1
+
+- name: create a security group
+ ec2_group:
+ name: '{{ resource_prefix }}-b'
+ description: 'created by Ansible integration tests'
+ state: present
+ vpc_id: '{{ setup_vpc.vpc.id }}'
+ rules:
+ - proto: tcp
+ from_port: 22
+ to_port: 22
+ cidr_ip: '{{ vpc_cidr }}'
+ register: setup_sg_2
+
+- name: create a security group
+ ec2_group:
+ name: '{{ resource_prefix }}-c'
+ description: 'created by Ansible integration tests'
+ state: present
+ vpc_id: '{{ setup_vpc.vpc.id }}'
+ rules:
+ - proto: tcp
+ from_port: 22
+ to_port: 22
+ cidr_ip: '{{ vpc_cidr }}'
+ register: setup_sg_3
+
+- name: store the IDs
+ set_fact:
+ subnet_a: "{{ setup_subnet_1.subnet.id }}"
+ subnet_b: "{{ setup_subnet_2.subnet.id }}"
+ subnet_c: "{{ setup_subnet_3.subnet.id }}"
+ subnet_a2: "{{ setup_subnet_4.subnet.id }}"
+ sg_a: "{{ setup_sg_1.group_id }}"
+ sg_b: "{{ setup_sg_2.group_id }}"
+ sg_c: "{{ setup_sg_3.group_id }}"
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_changes.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_changes.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_changes.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_changes.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,79 @@
+---
+- block:
+ ## Set up an ELB for testing changes one at a time
+ - name: Create ELB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ # zones: ['{{ availability_zone_a }}', '{{ availability_zone_b }}']
+ listeners: '{{ default_listeners }}'
+ health_check: '{{ default_health_check }}'
+ wait: true
+ scheme: 'internal'
+ subnets: ['{{ subnet_a }}', '{{ subnet_b }}']
+ security_group_ids: ['{{ sg_a }}']
+ tags: '{{ default_tags }}'
+ cross_az_load_balancing: True
+ idle_timeout: '{{ default_idle_timeout }}'
+ connection_draining_timeout: '{{ default_drain_timeout }}'
+ access_logs:
+ interval: '{{ default_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_a }}'
+ s3_prefix: '{{ default_logging_prefix }}'
+ enabled: true
+ register: result
+
+ - name: Verify that simple parameters were set
+ assert:
+ that:
+ - result is changed
+ - result.elb.status == "created"
+ - availability_zone_a in result.elb.zones
+ - availability_zone_b in result.elb.zones
+ - subnet_a in result.elb.subnets
+ - subnet_b in result.elb.subnets
+ - default_listener_tuples[0] in result.elb.listeners
+ - default_listener_tuples[1] in result.elb.listeners
+ - sg_a in result.elb.security_group_ids
+ - sg_b not in result.elb.security_group_ids
+ - sg_c not in result.elb.security_group_ids
+ - result.elb.health_check.healthy_threshold == default_health_check['healthy_threshold']
+ - result.elb.health_check.interval == default_health_check['interval']
+ - result.elb.health_check.target == default_health_check_target
+ - result.elb.health_check.timeout == default_health_check['response_timeout']
+ - result.elb.health_check.unhealthy_threshold == default_health_check['unhealthy_threshold']
+ - result.elb.tags == default_tags
+ - result.elb.cross_az_load_balancing == 'yes'
+ - result.elb.idle_timeout == default_idle_timeout
+ - result.elb.connection_draining_timeout == default_drain_timeout
+ - result.elb.proxy_policy == None
+ - result.load_balancer.load_balancer_attributes.access_log.emit_interval == default_logging_interval
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_name == s3_logging_bucket_a
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_prefix == default_logging_prefix
+ - result.load_balancer.load_balancer_attributes.access_log.enabled == True
+
+ ## AZ / Subnet changes are tested in the public/internal tests
+ ## because they depend on the scheme of the LB
+
+ - include_tasks: 'simple_securitygroups.yml'
+ - include_tasks: 'simple_listeners.yml'
+ - include_tasks: 'simple_healthcheck.yml'
+ - include_tasks: 'simple_tags.yml'
+ - include_tasks: 'simple_cross_az.yml'
+ - include_tasks: 'simple_idle_timeout.yml'
+ - include_tasks: 'simple_draining_timeout.yml'
+ - include_tasks: 'simple_proxy_policy.yml'
+ - include_tasks: 'simple_stickiness.yml'
+ - include_tasks: 'simple_instances.yml'
+ - include_tasks: 'simple_logging.yml'
+
+ always:
+
+ # ============================================================
+ - name: remove the test load balancer
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: absent
+ wait: true
+ register: result
+ ignore_errors: true
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_cross_az.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_cross_az.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_cross_az.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_cross_az.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,100 @@
+---
+# ===========================================================
+
+- name: disable cross-az balancing on ELB (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ cross_az_load_balancing: False
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: disable cross-az balancing on ELB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ cross_az_load_balancing: False
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - result.elb.cross_az_load_balancing == 'no'
+
+- name: disable cross-az balancing on ELB - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ cross_az_load_balancing: False
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: disable cross-az balancing on ELB - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ cross_az_load_balancing: False
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - result.elb.cross_az_load_balancing == 'no'
+
+# ===========================================================
+
+- name: re-enable cross-az balancing on ELB (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ cross_az_load_balancing: True
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: re-enable cross-az balancing on ELB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ cross_az_load_balancing: True
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - result.elb.cross_az_load_balancing == 'yes'
+
+- name: re-enable cross-az balancing on ELB - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ cross_az_load_balancing: True
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: re-enable cross-az balancing on ELB - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ cross_az_load_balancing: True
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - result.elb.cross_az_load_balancing == 'yes'
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_draining_timeout.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_draining_timeout.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_draining_timeout.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_draining_timeout.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,148 @@
+---
+# ===========================================================
+
+- name: disable connection draining on ELB (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ connection_draining_timeout: 0
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: disable connection draining on ELB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ connection_draining_timeout: 0
+ register: result
+
+- assert:
+ that:
+ - result is changed
+
+- name: disable connection draining on ELB - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ connection_draining_timeout: 0
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: disable connection draining on ELB - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ connection_draining_timeout: 0
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+
+# ===========================================================
+
+- name: re-enable connection draining on ELB (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ connection_draining_timeout: '{{ default_drain_timeout }}'
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: re-enable connection draining on ELB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ connection_draining_timeout: '{{ default_drain_timeout }}'
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - result.elb.connection_draining_timeout == default_drain_timeout
+
+- name: re-enable connection draining on ELB - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ connection_draining_timeout: '{{ default_drain_timeout }}'
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: re-enable connection draining on ELB - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ connection_draining_timeout: '{{ default_drain_timeout }}'
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - result.elb.connection_draining_timeout == default_drain_timeout
+
+# ===========================================================
+
+- name: update connection draining timeout on ELB (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ connection_draining_timeout: '{{ updated_drain_timeout }}'
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: update connection draining timeout on ELB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ connection_draining_timeout: '{{ updated_drain_timeout }}'
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - result.elb.connection_draining_timeout == updated_drain_timeout
+
+- name: update connection draining timeout on ELB - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ connection_draining_timeout: '{{ updated_drain_timeout }}'
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: update connection draining timeout on ELB - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ connection_draining_timeout: '{{ updated_drain_timeout }}'
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - result.elb.connection_draining_timeout == updated_drain_timeout
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_healthcheck.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_healthcheck.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_healthcheck.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_healthcheck.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,116 @@
+---
+# Note: AWS doesn't support disabling health checks
+# ==============================================================
+- name: Non-HTTP Healthcheck (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ health_check: '{{ nonhttp_health_check }}'
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Non-HTTP Healthcheck
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ health_check: '{{ nonhttp_health_check }}'
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - result.elb.health_check.healthy_threshold == nonhttp_health_check['healthy_threshold']
+ - result.elb.health_check.interval == nonhttp_health_check['interval']
+ - result.elb.health_check.target == nonhttp_health_check_target
+ - result.elb.health_check.timeout == nonhttp_health_check['response_timeout']
+ - result.elb.health_check.unhealthy_threshold == nonhttp_health_check['unhealthy_threshold']
+
+- name: Non-HTTP Healthcheck - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ health_check: '{{ nonhttp_health_check }}'
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Non-HTTP Healthcheck - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ health_check: '{{ nonhttp_health_check }}'
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - result.elb.health_check.healthy_threshold == nonhttp_health_check['healthy_threshold']
+ - result.elb.health_check.interval == nonhttp_health_check['interval']
+ - result.elb.health_check.target == nonhttp_health_check_target
+ - result.elb.health_check.timeout == nonhttp_health_check['response_timeout']
+ - result.elb.health_check.unhealthy_threshold == nonhttp_health_check['unhealthy_threshold']
+
+# ==============================================================
+
+- name: Update Healthcheck (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ health_check: '{{ updated_health_check }}'
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Update Healthcheck
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ health_check: '{{ updated_health_check }}'
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - result.elb.health_check.healthy_threshold == updated_health_check['healthy_threshold']
+ - result.elb.health_check.interval == updated_health_check['interval']
+ - result.elb.health_check.target == updated_health_check_target
+ - result.elb.health_check.timeout == updated_health_check['response_timeout']
+ - result.elb.health_check.unhealthy_threshold == updated_health_check['unhealthy_threshold']
+
+- name: Update Healthcheck - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ health_check: '{{ updated_health_check }}'
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Update Healthcheck - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ health_check: '{{ updated_health_check }}'
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - result.elb.health_check.healthy_threshold == updated_health_check['healthy_threshold']
+ - result.elb.health_check.interval == updated_health_check['interval']
+ - result.elb.health_check.target == updated_health_check_target
+ - result.elb.health_check.timeout == updated_health_check['response_timeout']
+ - result.elb.health_check.unhealthy_threshold == updated_health_check['unhealthy_threshold']
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_idle_timeout.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_idle_timeout.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_idle_timeout.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_idle_timeout.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,50 @@
+---
+# ===========================================================
+
+- name: update idle connection timeout on ELB (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ idle_timeout: "{{ updated_idle_timeout }}"
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: update idle connection timeout on ELB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ idle_timeout: "{{ updated_idle_timeout }}"
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - result.elb.idle_timeout == updated_idle_timeout
+
+- name: update idle connection timeout on ELB - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ idle_timeout: "{{ updated_idle_timeout }}"
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: update idle connection timeout on ELB - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ idle_timeout: "{{ updated_idle_timeout }}"
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - result.elb.idle_timeout == updated_idle_timeout
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_instances.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_instances.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_instances.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_instances.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,411 @@
+---
+- name: Add SSH listener and health check to ELB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ listeners: "{{ ssh_listeners }}"
+ health_check: "{{ ssh_health_check }}"
+ purge_listeners: false
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - ssh_listener_tuples[0] in result.elb.listeners
+
+# Make sure that the instances are 'OK'
+
+- name: Wait for instance a
+ ec2_instance:
+ name: "ansible-test-{{ tiny_prefix }}-elb-a"
+ vpc_subnet_id: "{{ subnet_a }}"
+ instance_type: t2.micro
+ wait: true
+ security_group: "{{ sg_a }}"
+ register: ec2_instance_a
+
+- name: Wait for instance b
+ ec2_instance:
+ name: "ansible-test-{{ tiny_prefix }}-elb-b"
+ vpc_subnet_id: "{{ subnet_b }}"
+ instance_type: t2.micro
+ wait: true
+ security_group: "{{ sg_b }}"
+ register: ec2_instance_b
+
+- assert:
+ that:
+ - ec2_instance_a is successful
+ - ec2_instance_b is successful
+
+# ==============================================================
+
+- name: Add an instance to the LB (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ instance_ids:
+ - '{{ instance_a }}'
+ wait: true
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Add an instance to the LB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ instance_ids:
+ - '{{ instance_a }}'
+ wait: true
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - instance_a in result.elb.instances
+ - instance_b not in result.elb.instances
+
+- name: Add an instance to the LB - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ instance_ids:
+ - '{{ instance_a }}'
+ wait: true
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Add an instance to the LB - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ instance_ids:
+ - '{{ instance_a }}'
+ wait: true
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - instance_a in result.elb.instances
+ - instance_b not in result.elb.instances
+
+# ==============================================================
+
+- name: Add second instance to the LB without purge (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ instance_ids:
+ - '{{ instance_b }}'
+ purge_instance_ids: false
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Add second instance to the LB without purge
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ instance_ids:
+ - '{{ instance_b }}'
+ purge_instance_ids: false
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - instance_a in result.elb.instances
+ - instance_b in result.elb.instances
+
+- name: Add second instance to the LB without purge - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ instance_ids:
+ - '{{ instance_b }}'
+ purge_instance_ids: false
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Add second instance to the LB without purge - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ instance_ids:
+ - '{{ instance_b }}'
+ purge_instance_ids: false
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - instance_a in result.elb.instances
+ - instance_b in result.elb.instances
+
+# ==============================================================
+
+- name: Both instances with purge - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ instance_ids:
+ - '{{ instance_a }}'
+ - '{{ instance_b }}'
+ purge_instance_ids: true
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Both instances with purge - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ instance_ids:
+ - '{{ instance_a }}'
+ - '{{ instance_b }}'
+ purge_instance_ids: true
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - instance_a in result.elb.instances
+ - instance_b in result.elb.instances
+
+- name: Both instances with purge - different order - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ instance_ids:
+ - '{{ instance_b }}'
+ - '{{ instance_a }}'
+ purge_instance_ids: true
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Both instances with purge - different order - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ instance_ids:
+ - '{{ instance_b }}'
+ - '{{ instance_a }}'
+ purge_instance_ids: true
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - instance_a in result.elb.instances
+ - instance_b in result.elb.instances
+
+# ==============================================================
+
+- name: Remove first instance from LB (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ instance_ids:
+ - '{{ instance_b }}'
+ purge_instance_ids: true
+ wait: true
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Remove first instance from LB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ instance_ids:
+ - '{{ instance_b }}'
+ purge_instance_ids: true
+ wait: true
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - instance_a not in result.elb.instances
+ - instance_b in result.elb.instances
+
+- name: Remove first instance from LB - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ instance_ids:
+ - '{{ instance_b }}'
+ purge_instance_ids: true
+ wait: true
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Remove first instance from LB - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ instance_ids:
+ - '{{ instance_b }}'
+ purge_instance_ids: true
+ wait: true
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - instance_a not in result.elb.instances
+ - instance_b in result.elb.instances
+
+# ==============================================================
+
+- name: Switch instances in LB (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ instance_ids:
+ - '{{ instance_a }}'
+ purge_instance_ids: true
+ wait: true
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Switch instances in LB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ instance_ids:
+ - '{{ instance_a }}'
+ purge_instance_ids: true
+ wait: true
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - instance_a in result.elb.instances
+ - instance_b not in result.elb.instances
+
+- name: Switch instances in LB - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ instance_ids:
+ - '{{ instance_a }}'
+ purge_instance_ids: true
+ wait: true
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Switch instances in LB - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ instance_ids:
+ - '{{ instance_a }}'
+ purge_instance_ids: true
+ wait: true
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - instance_a in result.elb.instances
+ - instance_b not in result.elb.instances
+
+# ==============================================================
+
+- name: Switch instances in LB - no wait (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ instance_ids:
+ - '{{ instance_b }}'
+ purge_instance_ids: true
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Switch instances in LB - no wait
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ instance_ids:
+ - '{{ instance_b }}'
+ purge_instance_ids: true
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - instance_a not in result.elb.instances
+ - instance_b in result.elb.instances
+
+- name: Switch instances in LB - no wait - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ instance_ids:
+ - '{{ instance_b }}'
+ purge_instance_ids: true
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Switch instances in LB - no wait - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ instance_ids:
+ - '{{ instance_b }}'
+ purge_instance_ids: true
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - instance_a not in result.elb.instances
+ - instance_b in result.elb.instances
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_listeners.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_listeners.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_listeners.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_listeners.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,196 @@
+---
+# ===========================================================
+# remove a listener (no purge)
+# remove a listener (purge)
+# add a listener
+# update a listener (same port)
+# ===========================================================
+# Test passing only one of the listeners
+# Without purge
+- name: Test partial listener to ELB (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ listeners: "{{ purged_listeners }}"
+ purge_listeners: false
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Test partial listener to ELB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ listeners: "{{ purged_listeners }}"
+ purge_listeners: false
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - default_listener_tuples[0] in result.elb.listeners
+ - default_listener_tuples[1] in result.elb.listeners
+
+# With purge
+- name: Test partial listener with purge to ELB (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ listeners: "{{ purged_listeners }}"
+ purge_listeners: true
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Test partial listener with purge to ELB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ listeners: "{{ purged_listeners }}"
+ purge_listeners: true
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - purged_listener_tuples[0] in result.elb.listeners
+
+- name: Test partial listener with purge to ELB - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ listeners: "{{ purged_listeners }}"
+ purge_listeners: true
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Test partial listener with purge to ELB - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ listeners: "{{ purged_listeners }}"
+ purge_listeners: true
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - purged_listener_tuples[0] in result.elb.listeners
+
+# ===========================================================
+# Test re-adding a listener
+- name: Test re-adding listener to ELB (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ listeners: "{{ default_listeners }}"
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Test re-adding listener to ELB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ listeners: "{{ default_listeners }}"
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - default_listener_tuples[0] in result.elb.listeners
+ - default_listener_tuples[1] in result.elb.listeners
+
+- name: Test re-adding listener to ELB - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ listeners: "{{ default_listeners }}"
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Test re-adding listener to ELB - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ listeners: "{{ default_listeners }}"
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - default_listener_tuples[0] in result.elb.listeners
+ - default_listener_tuples[1] in result.elb.listeners
+
+# ===========================================================
+# Test passing an updated listener
+- name: Test updated listener to ELB (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ listeners: "{{ updated_listeners }}"
+ purge_listeners: false
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Test updated listener to ELB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ listeners: "{{ updated_listeners }}"
+ purge_listeners: false
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - updated_listener_tuples[0] in result.elb.listeners
+ - updated_listener_tuples[1] in result.elb.listeners
+
+- name: Test updated listener to ELB - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ listeners: "{{ updated_listeners }}"
+ purge_listeners: false
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Test updated listener to ELB - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ listeners: "{{ updated_listeners }}"
+ purge_listeners: false
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - updated_listener_tuples[0] in result.elb.listeners
+ - updated_listener_tuples[1] in result.elb.listeners
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_logging.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_logging.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_logging.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_logging.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,587 @@
+---
+# ===========================================================
+
+- name: S3 logging for ELB - implied enabled (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ interval: '{{ default_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_a }}'
+ s3_prefix: '{{ default_logging_prefix }}'
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: S3 logging for ELB - implied enabled
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ interval: '{{ default_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_a }}'
+ s3_prefix: '{{ default_logging_prefix }}'
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - result.load_balancer.load_balancer_attributes.access_log.emit_interval == default_logging_interval
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_name == s3_logging_bucket_a
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_prefix == default_logging_prefix
+ - result.load_balancer.load_balancer_attributes.access_log.enabled == True
+
+# ===========================================================
+
+- name: Disable S3 logging for ELB (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: false
+ interval: '{{ default_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_a }}'
+ s3_prefix: '{{ default_logging_prefix }}'
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Disable S3 logging for ELB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: false
+ interval: '{{ default_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_a }}'
+ s3_prefix: '{{ default_logging_prefix }}'
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - result.load_balancer.load_balancer_attributes.access_log.enabled == False
+
+- name: Disable S3 logging for ELB - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: false
+ interval: '{{ default_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_a }}'
+ s3_prefix: '{{ default_logging_prefix }}'
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Disable S3 logging for ELB - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: false
+ interval: '{{ default_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_a }}'
+ s3_prefix: '{{ default_logging_prefix }}'
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - result.load_balancer.load_balancer_attributes.access_log.enabled == False
+
+# ===========================================================
+
+- name: Disable S3 logging for ELB - ignore extras (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: false
+ interval: '{{ updated_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_b }}'
+ s3_prefix: '{{ updated_logging_prefix }}'
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Disable S3 logging for ELB - ignore extras
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: false
+ interval: '{{ default_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_a }}'
+ s3_prefix: '{{ default_logging_prefix }}'
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - result.load_balancer.load_balancer_attributes.access_log.enabled == False
+
+- name: Disable S3 logging for ELB - no extras (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: false
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Disable S3 logging for ELB - no extras
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: false
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - result.load_balancer.load_balancer_attributes.access_log.enabled == False
+
+# ===========================================================
+
+- name: Re-enable S3 logging for ELB (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: true
+ interval: '{{ default_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_a }}'
+ s3_prefix: '{{ default_logging_prefix }}'
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Re-enable S3 logging for ELB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: true
+ interval: '{{ default_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_a }}'
+ s3_prefix: '{{ default_logging_prefix }}'
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - result.load_balancer.load_balancer_attributes.access_log.emit_interval == default_logging_interval
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_name == s3_logging_bucket_a
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_prefix == default_logging_prefix
+ - result.load_balancer.load_balancer_attributes.access_log.enabled == True
+
+- name: Re-enable S3 logging for ELB - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: true
+ interval: '{{ default_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_a }}'
+ s3_prefix: '{{ default_logging_prefix }}'
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Re-enable S3 logging for ELB - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: true
+ interval: '{{ default_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_a }}'
+ s3_prefix: '{{ default_logging_prefix }}'
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - result.load_balancer.load_balancer_attributes.access_log.emit_interval == default_logging_interval
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_name == s3_logging_bucket_a
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_prefix == default_logging_prefix
+ - result.load_balancer.load_balancer_attributes.access_log.enabled == True
+
+# ===========================================================
+
+- name: Update ELB Log delivery interval for ELB (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: true
+ interval: '{{ updated_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_a }}'
+ s3_prefix: '{{ default_logging_prefix }}'
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Update ELB Log delivery interval for ELB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: true
+ interval: '{{ updated_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_a }}'
+ s3_prefix: '{{ default_logging_prefix }}'
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - result.load_balancer.load_balancer_attributes.access_log.emit_interval == updated_logging_interval
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_name == s3_logging_bucket_a
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_prefix == default_logging_prefix
+ - result.load_balancer.load_balancer_attributes.access_log.enabled == True
+
+- name: Update ELB Log delivery interval for ELB - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: true
+ interval: '{{ updated_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_a }}'
+ s3_prefix: '{{ default_logging_prefix }}'
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Update ELB Log delivery interval for ELB - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: true
+ interval: '{{ updated_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_a }}'
+ s3_prefix: '{{ default_logging_prefix }}'
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - result.load_balancer.load_balancer_attributes.access_log.emit_interval == updated_logging_interval
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_name == s3_logging_bucket_a
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_prefix == default_logging_prefix
+ - result.load_balancer.load_balancer_attributes.access_log.enabled == True
+
+# ===========================================================
+
+- name: Update S3 Logging Location for ELB (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: true
+ interval: '{{ updated_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_b }}'
+ s3_prefix: '{{ default_logging_prefix }}'
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Update S3 Logging Location for ELB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: true
+ interval: '{{ updated_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_b }}'
+ s3_prefix: '{{ default_logging_prefix }}'
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - result.load_balancer.load_balancer_attributes.access_log.emit_interval == updated_logging_interval
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_name == s3_logging_bucket_b
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_prefix == default_logging_prefix
+ - result.load_balancer.load_balancer_attributes.access_log.enabled == True
+
+- name: Update S3 Logging Location for ELB - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: true
+ interval: '{{ updated_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_b }}'
+ s3_prefix: '{{ default_logging_prefix }}'
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Update S3 Logging Location for ELB - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: true
+ interval: '{{ updated_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_b }}'
+ s3_prefix: '{{ default_logging_prefix }}'
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - result.load_balancer.load_balancer_attributes.access_log.emit_interval == updated_logging_interval
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_name == s3_logging_bucket_b
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_prefix == default_logging_prefix
+ - result.load_balancer.load_balancer_attributes.access_log.enabled == True
+
+# ===========================================================
+
+- name: Update S3 Logging Prefix for ELB (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: true
+ interval: '{{ updated_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_b }}'
+ s3_prefix: '{{ updated_logging_prefix }}'
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Update S3 Logging Prefix for ELB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: true
+ interval: '{{ updated_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_b }}'
+ s3_prefix: '{{ updated_logging_prefix }}'
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - result.load_balancer.load_balancer_attributes.access_log.emit_interval == updated_logging_interval
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_name == s3_logging_bucket_b
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_prefix == updated_logging_prefix
+ - result.load_balancer.load_balancer_attributes.access_log.enabled == True
+
+- name: Update S3 Logging Prefix for ELB - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: true
+ interval: '{{ updated_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_b }}'
+ s3_prefix: '{{ updated_logging_prefix }}'
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Update S3 Logging Prefix for ELB - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: true
+ interval: '{{ updated_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_b }}'
+ s3_prefix: '{{ updated_logging_prefix }}'
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - result.load_balancer.load_balancer_attributes.access_log.emit_interval == updated_logging_interval
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_name == s3_logging_bucket_b
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_prefix == updated_logging_prefix
+ - result.load_balancer.load_balancer_attributes.access_log.enabled == True
+
+# ===========================================================
+
+- name: Empty S3 Logging Prefix for ELB (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: true
+ interval: '{{ updated_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_b }}'
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Empty S3 Logging Prefix for ELB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: true
+ interval: '{{ updated_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_b }}'
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - result.load_balancer.load_balancer_attributes.access_log.emit_interval == updated_logging_interval
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_name == s3_logging_bucket_b
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_prefix == ''
+ - result.load_balancer.load_balancer_attributes.access_log.enabled == True
+
+- name: Empty S3 Logging Prefix for ELB - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: true
+ interval: '{{ updated_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_b }}'
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Empty S3 Logging Prefix for ELB - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: true
+ interval: '{{ updated_logging_interval }}'
+ s3_location: '{{ s3_logging_bucket_b }}'
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - result.load_balancer.load_balancer_attributes.access_log.emit_interval == updated_logging_interval
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_name == s3_logging_bucket_b
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_prefix == ''
+ - result.load_balancer.load_balancer_attributes.access_log.enabled == True
+
+- name: Empty string S3 Logging Prefix for ELB - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: true
+ interval: '{{ updated_logging_interval }}'
+ s3_prefix: ''
+ s3_location: '{{ s3_logging_bucket_b }}'
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Empty string S3 Logging Prefix for ELB - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: true
+ interval: '{{ updated_logging_interval }}'
+ s3_prefix: ''
+ s3_location: '{{ s3_logging_bucket_b }}'
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - result.load_balancer.load_balancer_attributes.access_log.emit_interval == updated_logging_interval
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_name == s3_logging_bucket_b
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_prefix == ''
+ - result.load_balancer.load_balancer_attributes.access_log.enabled == True
+
+# ===========================================================
+
+- name: Update S3 Logging interval for ELB - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: true
+ s3_location: '{{ s3_logging_bucket_b }}'
+ s3_prefix: ''
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Update S3 Logging interval for ELB - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ access_logs:
+ enabled: true
+ s3_location: '{{ s3_logging_bucket_b }}'
+ s3_prefix: ''
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - result.load_balancer.load_balancer_attributes.access_log.emit_interval == 60
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_name == s3_logging_bucket_b
+ - result.load_balancer.load_balancer_attributes.access_log.s3_bucket_prefix == ''
+ - result.load_balancer.load_balancer_attributes.access_log.enabled == True
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_proxy_policy.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_proxy_policy.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_proxy_policy.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_proxy_policy.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,141 @@
+---
+# ===========================================================
+- name: Enable proxy protocol on a listener (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ listeners: "{{ proxied_listener }}"
+ purge_listeners: false
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Enable proxy protocol on a listener
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ listeners: "{{ proxied_listener }}"
+ purge_listeners: false
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - result.elb.proxy_policy == "ProxyProtocol-policy"
+ - result.load_balancer.backend_server_descriptions | length == 1
+ - result.load_balancer.backend_server_descriptions[0].policy_names == ["ProxyProtocol-policy"]
+
+- name: Enable proxy protocol on a listener - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ listeners: "{{ proxied_listener }}"
+ purge_listeners: false
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Enable proxy protocol on a listener - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ listeners: "{{ proxied_listener }}"
+ purge_listeners: false
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - result.elb.proxy_policy == "ProxyProtocol-policy"
+ - result.load_balancer.backend_server_descriptions | length == 1
+ - result.load_balancer.backend_server_descriptions[0].policy_names == ["ProxyProtocol-policy"]
+
+# ===========================================================
+
+- name: Disable proxy protocol on a listener (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ listeners: "{{ unproxied_listener }}"
+ purge_listeners: false
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Disable proxy protocol on a listener
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ listeners: "{{ unproxied_listener }}"
+ purge_listeners: false
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - result.load_balancer.backend_server_descriptions | length == 0
+
+- name: Disable proxy protocol on a listener - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ listeners: "{{ unproxied_listener }}"
+ purge_listeners: false
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Disable proxy protocol on a listener - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ listeners: "{{ unproxied_listener }}"
+ purge_listeners: false
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - result.load_balancer.backend_server_descriptions | length == 0
+
+# ===========================================================
+
+- name: Re-enable proxy protocol on a listener (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ listeners: "{{ proxied_listener }}"
+ purge_listeners: false
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Re-enable proxy protocol on a listener
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ listeners: "{{ proxied_listener }}"
+ purge_listeners: false
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - result.elb.proxy_policy == "ProxyProtocol-policy"
+ - result.load_balancer.backend_server_descriptions | length == 1
+ - result.load_balancer.backend_server_descriptions[0].policy_names == ["ProxyProtocol-policy"]
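+# The `proxied_listener` and `unproxied_listener` variables referenced above are
+# defined in the target's defaults, which are outside this hunk. A hypothetical
+# sketch of their shape (ports and protocol are illustrative assumptions, not
+# the real defaults):
+#
+#   proxied_listener:
+#     - protocol: tcp
+#       load_balancer_port: 80
+#       instance_port: 80
+#       proxy_protocol: true   # attaches the ProxyProtocol-policy backend policy
+#
+#   unproxied_listener:
+#     - protocol: tcp
+#       load_balancer_port: 80
+#       instance_port: 80
+#       proxy_protocol: false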
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_securitygroups.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_securitygroups.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_securitygroups.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_securitygroups.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,106 @@
+---
+- name: Assign Security Groups to ELB (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ security_group_ids: ['{{ sg_b }}']
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Assign Security Groups to ELB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ security_group_ids: ['{{ sg_b }}']
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - sg_a not in result.elb.security_group_ids
+ - sg_b in result.elb.security_group_ids
+ - sg_c not in result.elb.security_group_ids
+
+- name: Assign Security Groups to ELB - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ security_group_ids: ['{{ sg_b }}']
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Assign Security Groups to ELB - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ security_group_ids: ['{{ sg_b }}']
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - sg_a not in result.elb.security_group_ids
+ - sg_b in result.elb.security_group_ids
+ - sg_c not in result.elb.security_group_ids
+
+#=====================================================================
+
+- name: Assign Security Groups to ELB by name (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ security_group_names: ['{{ resource_prefix }}-a', '{{ resource_prefix }}-c']
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Assign Security Groups to ELB by name
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ security_group_names: ['{{ resource_prefix }}-a', '{{ resource_prefix }}-c']
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - sg_a in result.elb.security_group_ids
+ - sg_b not in result.elb.security_group_ids
+ - sg_c in result.elb.security_group_ids
+
+- name: Assign Security Groups to ELB by name - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ security_group_names: ['{{ resource_prefix }}-a', '{{ resource_prefix }}-c']
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Assign Security Groups to ELB by name - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ security_group_names: ['{{ resource_prefix }}-a', '{{ resource_prefix }}-c']
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - sg_a in result.elb.security_group_ids
+ - sg_b not in result.elb.security_group_ids
+ - sg_c in result.elb.security_group_ids
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_stickiness.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_stickiness.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_stickiness.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_stickiness.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,390 @@
+---
+# ==============================================================
+- name: App Cookie Stickiness (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness: "{{ app_stickiness }}"
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: App Cookie Stickiness
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness: "{{ app_stickiness }}"
+ register: result
+
+- assert:
+ that:
+ - result is changed
+
+- name: App Cookie Stickiness - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness: "{{ app_stickiness }}"
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: App Cookie Stickiness - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness: "{{ app_stickiness }}"
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+
+# ==============================================================
+- name: Update App Cookie Stickiness (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness: "{{ updated_app_stickiness }}"
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Update App Cookie Stickiness
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness: "{{ updated_app_stickiness }}"
+ register: result
+
+- assert:
+ that:
+ - result is changed
+
+- name: Update App Cookie Stickiness - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness: "{{ updated_app_stickiness }}"
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Update App Cookie Stickiness - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness: "{{ updated_app_stickiness }}"
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+
+
+# ==============================================================
+
+- name: Disable Stickiness (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness:
+ enabled: false
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Disable Stickiness
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness:
+ enabled: false
+ register: result
+
+- assert:
+ that:
+ - result is changed
+
+- name: Disable Stickiness - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness:
+ enabled: false
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Disable Stickiness - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness:
+ enabled: false
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+
+# ==============================================================
+
+- name: Re-enable App Stickiness (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness: "{{ app_stickiness }}"
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Re-enable App Stickiness
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness: "{{ app_stickiness }}"
+ register: result
+
+- assert:
+ that:
+ - result is changed
+
+- name: Re-enable App Stickiness (check_mode) - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness: "{{ app_stickiness }}"
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Re-enable App Stickiness - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness: "{{ app_stickiness }}"
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+
+# ==============================================================
+- name: LB Stickiness (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness: "{{ lb_stickiness }}"
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: LB Stickiness
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness: "{{ lb_stickiness }}"
+ register: result
+
+- assert:
+ that:
+ - result is changed
+
+- name: LB Stickiness - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness: "{{ lb_stickiness }}"
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: LB Stickiness - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness: "{{ lb_stickiness }}"
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+
+# ==============================================================
+- name: Update LB Stickiness (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness: "{{ updated_lb_stickiness }}"
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Update LB Stickiness
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness: "{{ updated_lb_stickiness }}"
+ register: result
+
+- assert:
+ that:
+ - result is changed
+
+- name: Update LB Stickiness - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness: "{{ updated_lb_stickiness }}"
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Update LB Stickiness - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness: "{{ updated_lb_stickiness }}"
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+
+
+# ==============================================================
+
+- name: Disable Stickiness (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness:
+ enabled: false
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Disable Stickiness
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness:
+ enabled: false
+ register: result
+
+- assert:
+ that:
+ - result is changed
+
+- name: Disable Stickiness - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness:
+ enabled: false
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Disable Stickiness - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness:
+ enabled: false
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+
+# ==============================================================
+
+- name: Re-enable LB Stickiness (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness: "{{ updated_lb_stickiness }}"
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Re-enable LB Stickiness
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness: "{{ updated_lb_stickiness }}"
+ register: result
+
+- assert:
+ that:
+ - result is changed
+
+- name: Re-enable LB Stickiness - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness: "{{ updated_lb_stickiness }}"
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Re-enable LB Stickiness - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ stickiness: "{{ updated_lb_stickiness }}"
+ register: result
+
+- assert:
+ that:
+ - result is not changed
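+# The `app_stickiness` and `lb_stickiness` variables used throughout this file
+# live in the target's defaults, outside this hunk. An illustrative sketch of
+# the shapes `elb_classic_lb` accepts (cookie name and expiration are assumed
+# values, not the real defaults):
+#
+#   app_stickiness:
+#     type: application
+#     enabled: true
+#     cookie: MyCookie
+#
+#   lb_stickiness:
+#     type: loadbalancer
+#     enabled: true
+#     expiration: 300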
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_tags.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_tags.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_tags.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/tasks/simple_tags.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,141 @@
+---
+# ===========================================================
+# partial tags (no purge)
+# update tags (no purge)
+# update tags (with purge)
+# ===========================================================
+- name: Pass partial tags to ELB (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ tags: "{{ partial_tags }}"
+ purge_tags: false
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Pass partial tags to ELB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ tags: "{{ partial_tags }}"
+ purge_tags: false
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - result.elb.tags == default_tags
+
+# ===========================================================
+
+- name: Add tags to ELB (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ tags: "{{ updated_tags }}"
+ purge_tags: false
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Add tags to ELB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ tags: "{{ updated_tags }}"
+ purge_tags: false
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - result.elb.tags == ( default_tags | combine(updated_tags) )
+
+- name: Add tags to ELB - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ tags: "{{ updated_tags }}"
+ purge_tags: false
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Add tags to ELB - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ tags: "{{ updated_tags }}"
+ purge_tags: false
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - result.elb.tags == ( default_tags | combine(updated_tags) )
+
+# ===========================================================
+
+- name: Purge tags from ELB (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ tags: "{{ updated_tags }}"
+ purge_tags: true
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is changed
+
+- name: Purge tags from ELB
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ tags: "{{ updated_tags }}"
+ purge_tags: true
+ register: result
+
+- assert:
+ that:
+ - result is changed
+ - result.elb.tags == updated_tags
+
+- name: Purge tags from ELB - idempotency (check_mode)
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ tags: "{{ updated_tags }}"
+ purge_tags: true
+ register: result
+ check_mode: true
+
+- assert:
+ that:
+ - result is not changed
+
+- name: Purge tags from ELB - idempotency
+ elb_classic_lb:
+ name: "{{ elb_name }}"
+ state: present
+ tags: "{{ updated_tags }}"
+ purge_tags: true
+ register: result
+
+- assert:
+ that:
+ - result is not changed
+ - result.elb.tags == updated_tags
+
+# ===========================================================
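+# The tag assertions above rely on Jinja2's `combine` filter, which merges
+# dictionaries with the right-hand operand's keys winning on conflict. A
+# minimal standalone illustration with hypothetical values:
+#
+#   - vars:
+#       default_tags: {Env: test, Owner: ci}
+#       updated_tags: {Owner: qa, Tier: web}
+#     assert:
+#       that:
+#         - (default_tags | combine(updated_tags)) == {'Env': 'test', 'Owner': 'qa', 'Tier': 'web'}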
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/templates/s3_policy.j2 ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/templates/s3_policy.j2
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/templates/s3_policy.j2 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/templates/s3_policy.j2 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,15 @@
+{
+ "Version": "2012-10-17",
+ "Id": "ELB-Logging-Policy",
+ "Statement": [
+ {
+ "Sid": "ELB-Logging",
+ "Effect": "Allow",
+ "Principal": {
+ "AWS": "arn:aws:iam::{{ access_log_account_id }}:root"
+ },
+ "Action": "s3:PutObject",
+ "Resource": "arn:aws:s3:::{{ s3_logging_bucket }}/*"
+ }
+ ]
+}
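+# Classic ELB access logging requires the bucket policy to grant s3:PutObject
+# to the region-specific ELB account (`access_log_account_id`). A sketch of how
+# this template might be wired into the test setup -- the task name and
+# template path are assumptions, not part of this diff:
+#
+#   - name: Attach ELB logging policy to the bucket
+#     amazon.aws.s3_bucket:
+#       name: "{{ s3_logging_bucket }}"
+#       state: present
+#       policy: "{{ lookup('template', 's3_policy.j2') }}"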
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/vars/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/vars/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/vars/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/elb_classic_lb/vars/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,2 @@
+---
+# vars file for test_ec2_elb_lb
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/aliases 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -1,2 +1 @@
cloud/aws
-shippable/aws/group1
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/populate_cache.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/populate_cache.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/populate_cache.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/populate_cache.yml 2021-11-12 18:13:53.000000000 +0000
@@ -3,8 +3,6 @@
connection: local
gather_facts: no
environment: "{{ ansible_test.environment }}"
- collections:
- - community.general
tasks:
- module_defaults:
@@ -30,16 +28,12 @@
# Create new host, add it to inventory and then terminate it without updating the cache
- name: create a new host
- ec2:
- image: '{{ image_id }}'
- exact_count: 1
- count_tag:
- Name: '{{ resource_prefix }}'
- instance_tags:
- Name: '{{ resource_prefix }}'
+ ec2_instance:
+ image_id: '{{ image_id }}'
+ name: '{{ resource_prefix }}'
instance_type: t2.micro
- wait: yes
- group_id: '{{ sg_id }}'
+ wait: no
+ security_groups: '{{ sg_id }}'
vpc_subnet_id: '{{ subnet_id }}'
register: setup_instance
@@ -48,14 +42,12 @@
always:
- name: remove setup ec2 instance
- ec2:
+ ec2_instance:
instance_type: t2.micro
instance_ids: '{{ setup_instance.instance_ids }}'
state: absent
- wait: yes
- instance_tags:
- Name: '{{ resource_prefix }}'
- group_id: '{{ sg_id }}'
+ name: '{{ resource_prefix }}'
+ security_groups: '{{ sg_id }}'
vpc_subnet_id: '{{ subnet_id }}'
ignore_errors: yes
when: setup_instance is defined
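# The hunks in this and the following inventory playbooks apply the same
# `ec2` -> `ec2_instance` migration repeatedly. The option mapping, as read
# from the diffs themselves (placeholder IDs below are illustrative):
#
#   image: ...                         ->  image_id: ...
#   exact_count + count_tag: {Name: X} ->  name: X   (sets the Name tag and matches on it)
#   instance_tags: {...}               ->  tags: {...}
#   group_id: ...                      ->  security_groups: ...
#
#   - name: create a new host (migration sketch; values are placeholders)
#     ec2_instance:
#       image_id: ami-0123456789abcdef0
#       name: example-host
#       instance_type: t2.micro
#       security_groups: sg-0123456789abcdef0
#       vpc_subnet_id: subnet-0123456789abcdef0
#       wait: no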
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/setup.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/setup.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/setup.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/setup.yml 2021-11-12 18:13:53.000000000 +0000
@@ -5,7 +5,7 @@
owner-id: '125523088429'
virtualization-type: hvm
root-device-type: ebs
- name: 'Fedora-Atomic-27*'
+ name: 'Fedora-Cloud-Base-34-1.2.x86_64*'
register: fedora_images
- set_fact:
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory_with_concatenation.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory_with_concatenation.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory_with_concatenation.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory_with_concatenation.yml 2021-11-12 18:13:53.000000000 +0000
@@ -18,18 +18,15 @@
# Create new host, refresh inventory
- name: create a new host
- ec2:
- image: '{{ image_id }}'
- exact_count: 1
- count_tag:
- Name: '{{ resource_prefix }}'
- instance_tags:
- Name: '{{ resource_prefix }}'
+ ec2_instance:
+ image_id: '{{ image_id }}'
+ name: '{{ resource_prefix }}'
+ tags:
OtherTag: value
instance_type: t2.micro
- wait: yes
- group_id: '{{ sg_id }}'
+ security_groups: '{{ sg_id }}'
vpc_subnet_id: '{{ subnet_id }}'
+ wait: no
register: setup_instance
- meta: refresh_inventory
@@ -46,14 +43,12 @@
always:
- name: remove setup ec2 instance
- ec2:
+ ec2_instance:
instance_type: t2.micro
instance_ids: '{{ setup_instance.instance_ids }}'
state: absent
- wait: yes
- instance_tags:
- Name: '{{ resource_prefix }}'
- group_id: "{{ sg_id }}"
+ name: '{{ resource_prefix }}'
+ security_groups: "{{ sg_id }}"
vpc_subnet_id: "{{ subnet_id }}"
ignore_errors: yes
when: setup_instance is defined
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory_with_constructed.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory_with_constructed.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory_with_constructed.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory_with_constructed.yml 2021-11-12 18:13:53.000000000 +0000
@@ -20,19 +20,16 @@
# Create new host, refresh inventory
- name: create a new host
- ec2:
- image: '{{ image_id }}'
- exact_count: 1
- count_tag:
- Name: '{{ resource_prefix }}'
- instance_tags:
- Name: '{{ resource_prefix }}'
+ ec2_instance:
+ image_id: '{{ image_id }}'
+ name: '{{ resource_prefix }}'
+ tags:
tag1: value1
tag2: value2
instance_type: t2.micro
- wait: yes
- group_id: '{{ sg_id }}'
+ security_groups: '{{ sg_id }}'
vpc_subnet_id: '{{ subnet_id }}'
+ wait: no
register: setup_instance
- meta: refresh_inventory
@@ -59,14 +56,12 @@
always:
- name: remove setup ec2 instance
- ec2:
+ ec2_instance:
instance_type: t2.micro
instance_ids: '{{ setup_instance.instance_ids }}'
state: absent
- wait: yes
- instance_tags:
- Name: '{{ resource_prefix }}'
- group_id: "{{ sg_id }}"
+ name: '{{ resource_prefix }}'
+ security_groups: "{{ sg_id }}"
vpc_subnet_id: "{{ subnet_id }}"
ignore_errors: yes
when: setup_instance is defined
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory_with_include_or_exclude_filters.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory_with_include_or_exclude_filters.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory_with_include_or_exclude_filters.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory_with_include_or_exclude_filters.yml 2021-11-12 18:13:53.000000000 +0000
@@ -19,48 +19,39 @@
# Create new host, refresh inventory
- name: create a new host (1/3)
- ec2:
- image: '{{ image_id }}'
- exact_count: 1
- count_tag:
- Name: '{{ resource_prefix }}_1'
- instance_tags:
- Name: '{{ resource_prefix }}_1'
+ ec2_instance:
+ image_id: '{{ image_id }}'
+ name: '{{ resource_prefix }}_1'
+ tags:
tag_instance1: foo
instance_type: t2.micro
- wait: yes
- group_id: '{{ sg_id }}'
+ security_groups: '{{ sg_id }}'
vpc_subnet_id: '{{ subnet_id }}'
+ wait: no
register: setup_instance_1
- name: create a new host (2/3)
- ec2:
- image: '{{ image_id }}'
- exact_count: 1
- count_tag:
- Name: '{{ resource_prefix }}_2'
- instance_tags:
- Name: '{{ resource_prefix }}_2'
+ ec2_instance:
+ image_id: '{{ image_id }}'
+ name: '{{ resource_prefix }}_2'
+ tags:
tag_instance2: bar
instance_type: t2.micro
- wait: yes
- group_id: '{{ sg_id }}'
+ security_groups: '{{ sg_id }}'
vpc_subnet_id: '{{ subnet_id }}'
+ wait: no
register: setup_instance_2
- name: create a new host (3/3)
- ec2:
- image: '{{ image_id }}'
- exact_count: 1
- count_tag:
- Name: '{{ resource_prefix }}_3'
- instance_tags:
- Name: '{{ resource_prefix }}_3'
+ ec2_instance:
+ image_id: '{{ image_id }}'
+ name: '{{ resource_prefix }}_3'
+ tags:
tag_instance2: bar
instance_type: t2.micro
- wait: yes
- group_id: '{{ sg_id }}'
+ security_groups: '{{ sg_id }}'
vpc_subnet_id: '{{ subnet_id }}'
+ wait: no
register: setup_instance_3
- meta: refresh_inventory
@@ -77,40 +68,34 @@
always:
- name: remove setup ec2 instance (1/3)
- ec2:
+ ec2_instance:
instance_type: t2.micro
instance_ids: '{{ setup_instance_1.instance_ids }}'
state: absent
- wait: yes
- instance_tags:
- Name: '{{ resource_prefix }}'
- group_id: "{{ sg_id }}"
+ name: '{{ resource_prefix }}_1'
+ security_groups: "{{ sg_id }}"
vpc_subnet_id: "{{ subnet_id }}"
ignore_errors: yes
when: setup_instance_1 is defined
- name: remove setup ec2 instance (2/3)
- ec2:
+ ec2_instance:
instance_type: t2.micro
instance_ids: '{{ setup_instance_2.instance_ids }}'
state: absent
- wait: yes
- instance_tags:
- Name: '{{ resource_prefix }}'
- group_id: "{{ sg_id }}"
+ name: '{{ resource_prefix }}_2'
+ security_groups: "{{ sg_id }}"
vpc_subnet_id: "{{ subnet_id }}"
ignore_errors: yes
when: setup_instance_2 is defined
- name: remove setup ec2 instance (3/3)
- ec2:
+ ec2_instance:
instance_type: t2.micro
instance_ids: '{{ setup_instance_3.instance_ids }}'
state: absent
- wait: yes
- instance_tags:
- Name: '{{ resource_prefix }}'
- group_id: "{{ sg_id }}"
+ name: '{{ resource_prefix }}_3'
+ security_groups: "{{ sg_id }}"
vpc_subnet_id: "{{ subnet_id }}"
ignore_errors: yes
when: setup_instance_3 is defined
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory_with_use_contrib_script_keys.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory_with_use_contrib_script_keys.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory_with_use_contrib_script_keys.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory_with_use_contrib_script_keys.yml 2021-11-12 18:13:53.000000000 +0000
@@ -18,18 +18,15 @@
# Create new host, refresh inventory
- name: create a new host
- ec2:
- image: '{{ image_id }}'
- exact_count: 1
- count_tag:
- Name: '{{ resource_prefix }}:/aa'
- instance_tags:
- Name: '{{ resource_prefix }}:/aa'
+ ec2_instance:
+ image_id: '{{ image_id }}'
+ name: '{{ resource_prefix }}:/aa'
+ tags:
OtherTag: value
instance_type: t2.micro
- wait: yes
- group_id: '{{ sg_id }}'
+ security_groups: '{{ sg_id }}'
vpc_subnet_id: '{{ subnet_id }}'
+ wait: no
register: setup_instance
- meta: refresh_inventory
@@ -47,14 +44,12 @@
always:
- name: remove setup ec2 instance
- ec2:
+ ec2_instance:
instance_type: t2.micro
instance_ids: '{{ setup_instance.instance_ids }}'
state: absent
- wait: yes
- instance_tags:
- Name: '{{ resource_prefix }}'
- group_id: "{{ sg_id }}"
+ name: '{{ resource_prefix }}'
+ security_groups: "{{ sg_id }}"
vpc_subnet_id: "{{ subnet_id }}"
ignore_errors: yes
when: setup_instance is defined
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/test_populating_inventory.yml 2021-11-12 18:13:53.000000000 +0000
@@ -27,17 +27,13 @@
# Create new host, refresh inventory, remove host, refresh inventory
- name: create a new host
- ec2:
- image: '{{ image_id }}'
- exact_count: 1
- count_tag:
- Name: '{{ resource_prefix }}'
- instance_tags:
- Name: '{{ resource_prefix }}'
+ ec2_instance:
+ image_id: '{{ image_id }}'
+ name: '{{ resource_prefix }}'
instance_type: t2.micro
- wait: yes
- group_id: '{{ sg_id }}'
+ security_groups: '{{ sg_id }}'
vpc_subnet_id: '{{ subnet_id }}'
+ wait: no
register: setup_instance
- meta: refresh_inventory
@@ -50,14 +46,12 @@
- "groups.aws_ec2.0 == '{{ resource_prefix }}'"
- name: remove setup ec2 instance
- ec2:
+ ec2_instance:
instance_type: t2.micro
instance_ids: '{{ setup_instance.instance_ids }}'
state: absent
- wait: yes
- instance_tags:
- Name: '{{ resource_prefix }}'
- group_id: '{{ sg_id }}'
+ name: '{{ resource_prefix }}'
+ security_groups: '{{ sg_id }}'
vpc_subnet_id: '{{ subnet_id }}'
- meta: refresh_inventory
@@ -71,14 +65,12 @@
always:
- name: remove setup ec2 instance
- ec2:
+ ec2_instance:
instance_type: t2.micro
instance_ids: '{{ setup_instance.instance_ids }}'
state: absent
- wait: yes
- instance_tags:
- Name: '{{ resource_prefix }}'
- group_id: '{{ sg_id }}'
+ name: '{{ resource_prefix }}'
+ security_groups: '{{ sg_id }}'
vpc_subnet_id: '{{ subnet_id }}'
ignore_errors: yes
when: setup_instance is defined
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/test_refresh_inventory.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/test_refresh_inventory.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/test_refresh_inventory.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/playbooks/test_refresh_inventory.yml 2021-11-12 18:13:53.000000000 +0000
@@ -13,17 +13,14 @@
- "not groups.aws_ec2"
- name: create a new host
- ec2:
- image: "{{ images[aws_region] }}"
+ ec2_instance:
+ image_id: "{{ images[aws_region] }}"
exact_count: 1
- count_tag:
- Name: '{{ resource_prefix }}'
- instance_tags:
- Name: '{{ resource_prefix }}'
+ name: '{{ resource_prefix }}'
instance_type: t2.micro
- wait: yes
- group_id: '{{ setup_sg.group_id }}'
+ security_groups: '{{ setup_sg.security_groups }}'
vpc_subnet_id: '{{ setup_subnet.subnet.id }}'
+ wait: no
register: setup_instance
- meta: refresh_inventory
@@ -36,14 +33,12 @@
- "groups.aws_ec2.0 == '{{ resource_prefix }}'"
- name: remove setup ec2 instance
- ec2:
+ ec2_instance:
instance_type: t2.micro
instance_ids: '{{ setup_instance.instance_ids }}'
state: absent
- wait: yes
- instance_tags:
- Name: '{{ resource_prefix }}'
- group_id: '{{ setup_sg.group_id }}'
+ name: '{{ resource_prefix }}'
+ security_groups: '{{ setup_sg.security_groups }}'
vpc_subnet_id: '{{ setup_subnet.subnet.id }}'
- meta: refresh_inventory
@@ -56,13 +51,11 @@
always:
- name: remove setup ec2 instance
- ec2:
+ ec2_instance:
instance_type: t2.micro
instance_ids: '{{ setup_instance.instance_ids }}'
state: absent
- wait: yes
- instance_tags:
- Name: '{{ resource_prefix }}'
- group_id: '{{ setup_sg.group_id }}'
+ name: '{{ resource_prefix }}'
+ security_groups: '{{ setup_sg.security_groups }}'
vpc_subnet_id: '{{ setup_subnet.subnet.id }}'
ignore_errors: yes
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/templates/inventory_with_constructed.yml.j2 ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/templates/inventory_with_constructed.yml.j2
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/templates/inventory_with_constructed.yml.j2 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_ec2/templates/inventory_with_constructed.yml.j2 2021-11-12 18:13:53.000000000 +0000
@@ -10,7 +10,7 @@
tag:Name:
- '{{ resource_prefix }}'
keyed_groups:
-- key: 'security_groups|community.general.json_query("[].group_id")'
+- key: 'security_groups|map(attribute="group_id")'
prefix: security_groups
- key: tags
prefix: tag
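The template change above swaps `community.general.json_query("[].group_id")` for the built-in Jinja2 `map(attribute="group_id")` filter, removing the dependency on the `community.general` collection. Both expressions project one attribute out of a list of mappings; a plain-Python equivalent of what either filter produces:

```python
# Sample data shaped like the security_groups hostvar the keyed_groups
# expression iterates over (values are illustrative).
security_groups = [
    {"group_id": "sg-1", "group_name": "default"},
    {"group_id": "sg-2", "group_name": "web"},
]

# json_query("[].group_id") and map(attribute="group_id") both reduce to
# "take the group_id of every element":
group_ids = [sg["group_id"] for sg in security_groups]
print(group_ids)  # ['sg-1', 'sg-2']
```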
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_rds/templates/inventory_with_constructed.j2 ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_rds/templates/inventory_with_constructed.j2
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_rds/templates/inventory_with_constructed.j2 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/inventory_aws_rds/templates/inventory_with_constructed.j2 2021-11-12 18:13:53.000000000 +0000
@@ -7,7 +7,7 @@
regions:
- '{{ aws_region }}'
keyed_groups:
- - key: 'db_parameter_groups|community.general.json_query("[].db_parameter_group_name")'
+ - key: 'db_parameter_groups|map(attribute="db_parameter_group_name")'
prefix: rds_parameter_group
- key: tags
prefix: tag
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/lookup_aws_account_attribute/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/lookup_aws_account_attribute/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/lookup_aws_account_attribute/aliases 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/lookup_aws_account_attribute/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -1,2 +1 @@
cloud/aws
-shippable/aws/group2
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/lookup_aws_secret/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/lookup_aws_secret/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/lookup_aws_secret/aliases 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/lookup_aws_secret/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1 @@
+cloud/aws
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/lookup_aws_secret/tasks/main.yaml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/lookup_aws_secret/tasks/main.yaml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/lookup_aws_secret/tasks/main.yaml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/lookup_aws_secret/tasks/main.yaml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,99 @@
+- set_fact:
+ # As a lookup plugin we don't have access to module_defaults
+ connection_args:
+ region: "{{ aws_region }}"
+ aws_access_key: "{{ aws_access_key }}"
+ aws_secret_key: "{{ aws_secret_key }}"
+ aws_security_token: "{{ security_token | default(omit) }}"
+ no_log: True
+
+- module_defaults:
+ group/aws:
+ region: "{{ aws_region }}"
+ aws_access_key: "{{ aws_access_key }}"
+ aws_secret_key: "{{ aws_secret_key }}"
+ security_token: "{{ security_token | default(omit) }}"
+ collections:
+ - community.aws
+ block:
+ - name: define secret name
+ set_fact:
+ secret_name: "ansible-test-{{ tiny_prefix }}-secret"
+ secret_value: "{{ lookup('password', '/dev/null chars=ascii_lowercase,digits,punctuation length=16') }}"
+ on_missing_secret: "skip"
+ on_deleted_secret: "skip"
+
+ - name: lookup missing secret (skip)
+ set_fact:
+ missing_secret: "{{ lookup('amazon.aws.aws_secret', secret_name, on_missing=on_missing_secret, on_deleted=on_deleted_secret, **connection_args) }}"
+
+ - name: assert that missing_secret is defined
+ assert:
+ that:
+ - missing_secret is defined
+ - missing_secret | list | length == 0
+
+ - name: lookup missing secret (error)
+ set_fact:
+ missing_secret: "{{ lookup('amazon.aws.aws_secret', secret_name, **connection_args) }}"
+ ignore_errors: True
+ register: get_missing_secret
+
+ - name: assert that setting the missing_secret failed
+ assert:
+ that:
+ - get_missing_secret is failed
+
+ - name: create secret "{{ secret_name }}"
+ aws_secret:
+ name: "{{ secret_name }}"
+ secret: "{{ secret_value }}"
+ tags:
+ ansible-test: "aws-tests-integration"
+ state: present
+
+ - name: read secret value
+ set_fact:
+ look_secret: "{{ lookup('amazon.aws.aws_secret', secret_name, **connection_args) }}"
+
+ - name: assert that secret was successfully retrieved
+ assert:
+ that:
+ - look_secret == secret_value
+
+ - name: delete secret
+ aws_secret:
+ name: "{{ secret_name }}"
+ state: absent
+ recovery_window: 7
+
+ - name: lookup deleted secret (skip)
+ set_fact:
+ deleted_secret: "{{ lookup('amazon.aws.aws_secret', secret_name, on_missing=on_missing_secret, on_deleted=on_deleted_secret, **connection_args) }}"
+
+ - name: assert that deleted_secret is defined
+ assert:
+ that:
+ - deleted_secret is defined
+ - deleted_secret | list | length == 0
+
+ - name: lookup deleted secret (error)
+ set_fact:
+ missing_secret: "{{ lookup('amazon.aws.aws_secret', secret_name, **connection_args) }}"
+ ignore_errors: True
+ register: get_deleted_secret
+
+ - name: assert that setting the deleted_secret failed
+ assert:
+ that:
+ - get_deleted_secret is failed
+
+ always:
+
+ # delete secret created
+ - name: delete secret
+ aws_secret:
+ name: "{{ secret_name }}"
+ state: absent
+ recovery_window: 0
+ ignore_errors: yes
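The tasks above exercise the lookup's `on_missing`/`on_deleted` options: with `"skip"` the lookup returns an empty result, while the default raises an error (hence the paired `ignore_errors` + `is failed` assertions). A rough sketch of that control flow, assuming a simplified in-memory store — this is not the plugin's actual implementation:

```python
# Simplified model of the on_missing behaviour tested above; ``store`` is a
# stand-in dict, not the real Secrets Manager backend.
def lookup_secret(store, name, on_missing="error"):
    """Return the secret value, or [] when missing and on_missing='skip'."""
    if name not in store:
        if on_missing == "skip":
            # Mirrors the lookup returning an empty result instead of failing.
            return []
        raise KeyError(f"Secret {name!r} not found")
    return store[name]


store = {"ansible-test-example-secret": "s3cret-value"}
print(lookup_secret(store, "no-such-secret", on_missing="skip"))  # []
print(lookup_secret(store, "ansible-test-example-secret"))
```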
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/lookup_aws_service_ip_ranges/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/lookup_aws_service_ip_ranges/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/lookup_aws_service_ip_ranges/aliases 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/lookup_aws_service_ip_ranges/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1 @@
+cloud/aws
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/lookup_aws_service_ip_ranges/tasks/main.yaml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/lookup_aws_service_ip_ranges/tasks/main.yaml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/lookup_aws_service_ip_ranges/tasks/main.yaml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/lookup_aws_service_ip_ranges/tasks/main.yaml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,148 @@
+- name: lookup range with no arguments
+ set_fact:
+ no_params: "{{ lookup('amazon.aws.aws_service_ip_ranges') }}"
+
+- name: assert that we're returned a single string
+ assert:
+ that:
+ - no_params is defined
+ - no_params is string
+
+- name: lookup range with wantlist
+ set_fact:
+ want_list: "{{ lookup('amazon.aws.aws_service_ip_ranges', wantlist=True) }}"
+ want_ipv6_list: "{{ lookup('amazon.aws.aws_service_ip_ranges', wantlist=True, ipv6_prefixes=True) }}"
+
+- name: assert that we're returned a list
+ assert:
+ that:
+ - want_list is defined
+ - want_list is iterable
+ - want_list is not string
+ - want_list | length > 1
+ - want_list[0] | ansible.netcommon.ipv4
+ - want_ipv6_list is defined
+ - want_ipv6_list is iterable
+ - want_ipv6_list is not string
+ - want_ipv6_list | length > 1
+ - want_ipv6_list[0] | ansible.netcommon.ipv6
+
+
+- name: lookup range with service
+ set_fact:
+ s3_ips: "{{ lookup('amazon.aws.aws_service_ip_ranges', service='S3', wantlist=True) }}"
+ s3_ipv6s: "{{ lookup('amazon.aws.aws_service_ip_ranges', service='S3', wantlist=True, ipv6_prefixes=True) }}"
+
+- name: assert that we're returned a list
+ assert:
+ that:
+ - s3_ips is defined
+ - s3_ips is iterable
+ - s3_ips is not string
+ - s3_ips | length > 1
+ - s3_ips[0] | ansible.netcommon.ipv4
+ - s3_ipv6s is defined
+ - s3_ipv6s is iterable
+ - s3_ipv6s is not string
+ - s3_ipv6s | length > 1
+ - s3_ipv6s[0] | ansible.netcommon.ipv6
+
+- name: lookup range with a different service
+ set_fact:
+ route53_ips: "{{ lookup('amazon.aws.aws_service_ip_ranges', service='ROUTE53_HEALTHCHECKS', wantlist=True) }}"
+ route53_ipv6s: "{{ lookup('amazon.aws.aws_service_ip_ranges', service='ROUTE53_HEALTHCHECKS', wantlist=True, ipv6_prefixes=True) }}"
+
+- name: assert that we're returned a list
+ assert:
+ that:
+ - route53_ips is defined
+ - route53_ips is iterable
+ - route53_ips is not string
+ - route53_ips | length > 1
+ - route53_ips[0] | ansible.netcommon.ipv4
+ - route53_ipv6s is defined
+ - route53_ipv6s is iterable
+ - route53_ipv6s is not string
+ - route53_ipv6s | length > 1
+ - route53_ipv6s[0] | ansible.netcommon.ipv6
+
+
+- name: assert that service IPV4s and IPV6s do not overlap
+ assert:
+ that:
+ - route53_ips | intersect(s3_ips) | length == 0
+ - route53_ipv6s | intersect(s3_ipv6s) | length == 0
+
+- name: lookup range with region
+ set_fact:
+ us_east_1_ips: "{{ lookup('amazon.aws.aws_service_ip_ranges', region='us-east-1', wantlist=True) }}"
+
+- name: lookup IPV6 range with region
+ set_fact:
+ us_east_1_ipv6s: "{{ lookup('amazon.aws.aws_service_ip_ranges', region='us-east-1', wantlist=True, ipv6_prefixes=True) }}"
+
+- name: assert that we're returned a list
+ assert:
+ that:
+ - us_east_1_ips is defined
+ - us_east_1_ips is iterable
+ - us_east_1_ips is not string
+ - us_east_1_ips | length > 1
+ - us_east_1_ips[0] | ansible.netcommon.ipv4
+ - us_east_1_ipv6s is defined
+ - us_east_1_ipv6s is iterable
+ - us_east_1_ipv6s is not string
+ - us_east_1_ipv6s | length > 1
+ - us_east_1_ipv6s[0] | ansible.netcommon.ipv6
+
+- name: lookup range with a different region
+ set_fact:
+ eu_central_1_ips: "{{ lookup('amazon.aws.aws_service_ip_ranges', region='eu-central-1', wantlist=True) }}"
+ eu_central_1_ipv6s: "{{ lookup('amazon.aws.aws_service_ip_ranges', region='eu-central-1', wantlist=True, ipv6_prefixes=True) }}"
+
+- name: assert that we're returned a list
+ assert:
+ that:
+ - eu_central_1_ips is defined
+ - eu_central_1_ips is iterable
+ - eu_central_1_ips is not string
+ - eu_central_1_ips | length > 1
+ - eu_central_1_ips[0] | ansible.netcommon.ipv4
+ - eu_central_1_ipv6s is defined
+ - eu_central_1_ipv6s is iterable
+ - eu_central_1_ipv6s is not string
+ - eu_central_1_ipv6s | length > 1
+ - eu_central_1_ipv6s[0] | ansible.netcommon.ipv6
+
+- name: assert that regional IPs don't overlap
+ assert:
+ that:
+ - eu_central_1_ips | intersect(us_east_1_ips) | length == 0
+ - eu_central_1_ipv6s | intersect(us_east_1_ipv6s) | length == 0
+
+- name: lookup range with service and region
+ set_fact:
+ s3_us_ips: "{{ lookup('amazon.aws.aws_service_ip_ranges', region='us-east-1', service='S3', wantlist=True) }}"
+ s3_us_ipv6s: "{{ lookup('amazon.aws.aws_service_ip_ranges', region='us-east-1', service='S3', wantlist=True, ipv6_prefixes=True) }}"
+
+- name: assert that we're returned a list
+ assert:
+ that:
+ - s3_us_ips is defined
+ - s3_us_ips is iterable
+ - s3_us_ips is not string
+ - s3_us_ips | length > 1
+ - s3_us_ips[0] | ansible.netcommon.ipv4
+ - s3_us_ipv6s is defined
+ - s3_us_ipv6s is iterable
+ - s3_us_ipv6s is not string
+ - s3_us_ipv6s | length > 1
+ - s3_us_ipv6s[0] | ansible.netcommon.ipv6
+
+- name: assert that the regional service IPs are a subset of the regional IPs and service IPs.
+ assert:
+ that:
+ - ( s3_us_ips | intersect(us_east_1_ips) | length ) == ( s3_us_ips | length )
+ - ( s3_us_ips | intersect(s3_ips) | length ) == ( s3_us_ips | length )
+ - ( s3_us_ipv6s | intersect(us_east_1_ipv6s) | length ) == ( s3_us_ipv6s | length )
+ - ( s3_us_ipv6s | intersect(s3_ipv6s) | length ) == ( s3_us_ipv6s | length )
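The final assertions above express "A is a subset of B" as `len(A ∩ B) == len(A)`, using the `intersect` filter. A plain-Python equivalent of that idiom, with illustrative CIDR values standing in for the real published ranges:

```python
# Subset-via-intersection, as used by the assertions above. Like Ansible's
# intersect filter, sets deduplicate before comparing.
def is_subset(a, b):
    """True when every element of a also appears in b."""
    return len(set(a) & set(b)) == len(set(a))


# Illustrative values only; real ranges come from AWS's published ip-ranges.
s3_us_ips = ["52.216.0.0/15", "3.5.0.0/19"]
us_east_1_ips = ["52.216.0.0/15", "3.5.0.0/19", "18.208.0.0/13"]
print(is_subset(s3_us_ips, us_east_1_ips))  # True
```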
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/module_utils_core/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/module_utils_core/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/module_utils_core/aliases 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/module_utils_core/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -1,2 +1 @@
cloud/aws
-shippable/aws/group2
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/module_utils_ec2/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/module_utils_ec2/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/module_utils_ec2/aliases 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/module_utils_ec2/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -1,2 +1 @@
cloud/aws
-shippable/aws/group4
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/module_utils_waiter/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/module_utils_waiter/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/module_utils_waiter/aliases 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/module_utils_waiter/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -1,2 +1 @@
cloud/aws
-shippable/aws/group2
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/aliases ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/aliases
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/aliases 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/aliases 2021-11-12 18:13:53.000000000 +0000
@@ -1,2 +1 @@
cloud/aws
-shippable/aws/group1
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/inventory ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/inventory
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/inventory 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/inventory 2021-11-12 18:13:53.000000000 +0000
@@ -1,4 +1,5 @@
[tests]
+ownership_controls
missing
simple
complex
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/meta/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/meta/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/meta/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/meta/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -1,4 +1,3 @@
dependencies:
- - prepare_tests
- - setup_ec2
- - setup_remote_tmp_dir
+ - role: prepare_tests
+ - role: setup_botocore_pip
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/meta/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/meta/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/meta/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/meta/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -1,4 +1,2 @@
dependencies:
- - prepare_tests
- - setup_ec2
- - setup_remote_tmp_dir
+ - role: prepare_tests
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/complex.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/complex.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/complex.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/complex.yml 2021-11-12 18:13:53.000000000 +0000
@@ -1,8 +1,10 @@
---
- block:
+ - set_fact:
+ local_bucket_name: "{{ bucket_name | hash('md5')}}complex"
- name: 'Create more complex s3_bucket'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: "{{ local_bucket_name }}"
state: present
policy: "{{ lookup('template','policy.json') }}"
requester_pays: yes
@@ -15,7 +17,7 @@
- assert:
that:
- output is changed
- - output.name == '{{ bucket_name }}'
+ - output.name == '{{ local_bucket_name }}'
- output.requester_pays
- output.versioning.MfaDelete == 'Disabled'
- output.versioning.Versioning == 'Enabled'
@@ -24,7 +26,7 @@
- output.policy.Statement[0].Action == 's3:GetObject'
- output.policy.Statement[0].Effect == 'Allow'
- output.policy.Statement[0].Principal == '*'
- - output.policy.Statement[0].Resource == 'arn:aws:s3:::{{ bucket_name }}/*'
+ - output.policy.Statement[0].Resource == 'arn:aws:s3:::{{ local_bucket_name }}/*'
- output.policy.Statement[0].Sid == 'AddPerm'
# ============================================================
@@ -36,7 +38,7 @@
- name: 'Try to update the same complex s3_bucket'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
policy: "{{ lookup('template','policy.json') }}"
requester_pays: yes
@@ -49,7 +51,7 @@
- assert:
that:
- output is not changed
- - output.name == '{{ bucket_name }}'
+ - output.name == '{{ local_bucket_name }}'
- output.requester_pays
- output.versioning.MfaDelete == 'Disabled'
- output.versioning.Versioning == 'Enabled'
@@ -58,13 +60,13 @@
- output.policy.Statement[0].Action == 's3:GetObject'
- output.policy.Statement[0].Effect == 'Allow'
- output.policy.Statement[0].Principal == '*'
- - output.policy.Statement[0].Resource == 'arn:aws:s3:::{{ bucket_name }}/*'
+ - output.policy.Statement[0].Resource == 'arn:aws:s3:::{{ local_bucket_name }}/*'
- output.policy.Statement[0].Sid == 'AddPerm'
# ============================================================
- name: 'Update bucket policy on complex bucket'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
policy: "{{ lookup('template','policy-updated.json') }}"
requester_pays: yes
@@ -80,7 +82,7 @@
- output.policy.Statement[0].Action == 's3:GetObject'
- output.policy.Statement[0].Effect == 'Deny'
- output.policy.Statement[0].Principal.AWS == '*'
- - output.policy.Statement[0].Resource == 'arn:aws:s3:::{{ bucket_name }}/*'
+ - output.policy.Statement[0].Resource == 'arn:aws:s3:::{{ local_bucket_name }}/*'
- output.policy.Statement[0].Sid == 'AddPerm'
# ============================================================
@@ -92,7 +94,7 @@
- name: Update attributes for s3_bucket
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
policy: "{{ lookup('template','policy.json') }}"
requester_pays: no
@@ -105,7 +107,7 @@
- assert:
that:
- output is changed
- - output.name == '{{ bucket_name }}'
+ - output.name == '{{ local_bucket_name }}'
- not output.requester_pays
- output.versioning.MfaDelete == 'Disabled'
- output.versioning.Versioning in ['Suspended', 'Disabled']
@@ -114,12 +116,12 @@
- output.policy.Statement[0].Action == 's3:GetObject'
- output.policy.Statement[0].Effect == 'Allow'
- output.policy.Statement[0].Principal == '*'
- - output.policy.Statement[0].Resource == 'arn:aws:s3:::{{ bucket_name }}/*'
+ - output.policy.Statement[0].Resource == 'arn:aws:s3:::{{ local_bucket_name }}/*'
- output.policy.Statement[0].Sid == 'AddPerm'
- name: 'Delete complex test bucket'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: absent
register: output
@@ -129,7 +131,7 @@
- name: 'Re-delete complex test bucket'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: absent
register: output
@@ -141,6 +143,6 @@
always:
- name: 'Ensure all buckets are deleted'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: absent
ignore_errors: yes
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/dotted.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/dotted.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/dotted.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/dotted.yml 2021-11-12 18:13:53.000000000 +0000
@@ -2,20 +2,21 @@
- block:
- name: 'Ensure bucket_name contains a .'
set_fact:
- bucket_name: '{{ bucket_name }}.something'
+ local_bucket_name: "{{ bucket_name | hash('md5')}}.dotted"
+
# ============================================================
#
- name: 'Create bucket with dot in name'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
register: output
- assert:
that:
- output is changed
- - output.name == '{{ bucket_name }}'
+ - output.name == '{{ local_bucket_name }}'
# ============================================================
@@ -27,7 +28,7 @@
- name: 'Delete s3_bucket with dot in name'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: absent
register: output
@@ -37,7 +38,7 @@
- name: 'Re-delete s3_bucket with dot in name'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: absent
register: output
@@ -49,6 +50,6 @@
always:
- name: 'Ensure all buckets are deleted'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: absent
ignore_errors: yes
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/encryption_kms.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/encryption_kms.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/encryption_kms.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/encryption_kms.yml 2021-11-12 18:13:53.000000000 +0000
@@ -6,18 +6,19 @@
security_token: "{{ security_token | default(omit) }}"
region: "{{ aws_region }}"
block:
-
+ - set_fact:
+ local_bucket_name: "{{ bucket_name | hash('md5')}}e-kms"
# ============================================================
- name: 'Create a simple bucket'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
register: output
- name: 'Enable aws:kms encryption with KMS master key'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
encryption: "aws:kms"
register: output
@@ -30,7 +31,7 @@
- name: 'Re-enable aws:kms encryption with KMS master key (idempotent)'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
encryption: "aws:kms"
register: output
@@ -45,7 +46,7 @@
- name: Disable encryption from bucket
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
encryption: "none"
register: output
@@ -57,7 +58,7 @@
- name: Disable encryption from bucket
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
encryption: "none"
register: output
@@ -71,7 +72,7 @@
- name: Delete encryption test s3 bucket
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: absent
register: output
@@ -83,6 +84,6 @@
always:
- name: Ensure all buckets are deleted
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: absent
ignore_errors: yes
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/encryption_sse.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/encryption_sse.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/encryption_sse.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/encryption_sse.yml 2021-11-12 18:13:53.000000000 +0000
@@ -6,18 +6,19 @@
security_token: "{{ security_token | default(omit) }}"
region: "{{ aws_region }}"
block:
-
+ - set_fact:
+ local_bucket_name: "{{ bucket_name | hash('md5')}}e-sse"
# ============================================================
- name: 'Create a simple bucket'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
register: output
- name: 'Enable AES256 encryption'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
encryption: 'AES256'
register: output
@@ -30,7 +31,7 @@
- name: 'Re-enable AES256 encryption (idempotency)'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
encryption: 'AES256'
register: output
@@ -45,7 +46,7 @@
- name: Disable encryption from bucket
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
encryption: "none"
register: output
@@ -57,7 +58,7 @@
- name: Disable encryption from bucket
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
encryption: "none"
register: output
@@ -71,7 +72,7 @@
- name: Delete encryption test s3 bucket
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: absent
register: output
@@ -83,6 +84,6 @@
always:
- name: Ensure all buckets are deleted
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: absent
ignore_errors: yes
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/missing.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/missing.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/missing.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/missing.yml 2021-11-12 18:13:53.000000000 +0000
@@ -1,6 +1,8 @@
---
- name: 'Attempt to delete non-existent buckets'
block:
+ - set_fact:
+ local_bucket_name: "{{ bucket_name | hash('md5')}}-missing"
# ============================================================
#
# While in theory the 'simple' test case covers this there are
@@ -8,7 +10,7 @@
#
- name: 'Delete non-existstent s3_bucket (never created)'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: absent
register: output
@@ -21,6 +23,6 @@
always:
- name: 'Ensure all buckets are deleted'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: absent
ignore_errors: yes
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/ownership_controls.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/ownership_controls.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/ownership_controls.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/ownership_controls.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,127 @@
+---
+- module_defaults:
+ group/aws:
+ aws_access_key: "{{ aws_access_key }}"
+ aws_secret_key: "{{ aws_secret_key }}"
+ security_token: "{{ security_token | default(omit) }}"
+ region: "{{ aws_region }}"
+ block:
+ - include_role:
+ name: setup_botocore_pip
+ vars:
+ botocore_version: '1.18.11'
+
+ # ============================================================
+ - name: Wrap test in virtualenv
+ vars:
+ ansible_python_interpreter: "{{ botocore_virtualenv_interpreter }}"
+ block:
+ - set_fact:
+ local_bucket_name: "{{ bucket_name | hash('md5')}}ownership"
+
+ - name: 'Create a simple bucket bad value for ownership controls'
+ s3_bucket:
+ name: '{{ local_bucket_name }}'
+ state: present
+ object_ownership: default
+ ignore_errors: true
+ register: output
+
+ - assert:
+ that:
+ - output.failed
+
+ - name: 'Create bucket with object_ownership set to object_writer'
+ s3_bucket:
+ name: '{{ local_bucket_name }}'
+ state: present
+ ignore_errors: true
+ register: output
+
+ - assert:
+ that:
+ - output.changed
+ - not output.object_ownership|bool
+
+ - name: delete s3 bucket
+ s3_bucket:
+ name: '{{ local_bucket_name }}'
+ state: absent
+
+ - name: 'create s3 bucket with object ownership controls'
+ s3_bucket:
+ name: '{{ local_bucket_name }}'
+ state: present
+ object_ownership: ObjectWriter
+ register: output
+
+ - assert:
+ that:
+ - output.changed
+ - output.object_ownership
+ - output.object_ownership == 'ObjectWriter'
+
+ - name: 'update s3 bucket ownership controls'
+ s3_bucket:
+ name: '{{ local_bucket_name }}'
+ state: present
+ object_ownership: BucketOwnerPreferred
+ register: output
+
+ - assert:
+ that:
+ - output.changed
+ - output.object_ownership
+ - output.object_ownership == 'BucketOwnerPreferred'
+
+ - name: 'test idempotency update s3 bucket ownership controls'
+ s3_bucket:
+ name: '{{ local_bucket_name }}'
+ state: present
+ object_ownership: BucketOwnerPreferred
+ register: output
+
+ - assert:
+ that:
+ - output.changed is false
+ - output.object_ownership
+ - output.object_ownership == 'BucketOwnerPreferred'
+
+ - name: 'delete s3 bucket ownership controls'
+ s3_bucket:
+ name: '{{ local_bucket_name }}'
+ state: present
+ delete_object_ownership: true
+ register: output
+
+ - assert:
+ that:
+ - output.changed
+ - not output.object_ownership|bool
+
+ - name: 'delete s3 bucket ownership controls once again (idempotency)'
+ s3_bucket:
+ name: '{{ local_bucket_name }}'
+ state: present
+ delete_object_ownership: true
+ register: idempotency
+
+ - assert:
+ that:
+ - not idempotency.changed
+ - not idempotency.object_ownership|bool
+
+ # ============================================================
+ always:
+ - name: delete s3 bucket ownership controls
+ s3_bucket:
+ name: '{{ local_bucket_name }}'
+ state: present
+ delete_object_ownership: true
+ ignore_errors: yes
+
+ - name: Ensure all buckets are deleted
+ s3_bucket:
+ name: '{{ local_bucket_name }}'
+ state: absent
+ ignore_errors: yes
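The `local_bucket_name` pattern used throughout these tasks derives a unique, per-suite bucket name by MD5-hashing the shared `bucket_name` and appending a suffix, so parallel test runs cannot collide. A minimal Python sketch of the equivalent logic (the helper name is illustrative, not part of the collection):

```python
import hashlib

def local_bucket_name(bucket_name, suffix):
    # Equivalent of the Jinja2 expression used in these tasks:
    #   "{{ bucket_name | hash('md5') }}" followed by a suffix.
    # The MD5 hex digest is always 32 characters, keeping derived
    # names well under the 63-character S3 bucket-name limit.
    return hashlib.md5(bucket_name.encode("utf-8")).hexdigest() + suffix
```

The derivation is deterministic, so every task in a suite computes the same name without sharing state beyond `bucket_name`.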
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/public_access.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/public_access.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/public_access.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/public_access.yml 2021-11-12 18:13:53.000000000 +0000
@@ -6,12 +6,13 @@
security_token: "{{ security_token | default(omit) }}"
region: "{{ aws_region }}"
block:
-
+ - set_fact:
+ local_bucket_name: "{{ bucket_name | hash('md5')}}-public"
# ============================================================
- name: 'Create a simple bucket with public access block configuration'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
public_access:
block_public_acls: true
@@ -31,7 +32,7 @@
- name: 'Re-configure public access block configuration'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
public_access:
block_public_acls: true
@@ -51,7 +52,7 @@
- name: 'Re-configure public access block configuration (idempotency)'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
public_access:
block_public_acls: true
@@ -71,7 +72,7 @@
- name: 'Delete public access block configuration'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
delete_public_access: true
register: output
@@ -83,7 +84,7 @@
- name: 'Delete public access block configuration (idempotency)'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
delete_public_access: true
register: output
@@ -97,7 +98,7 @@
- name: Delete testing s3 bucket
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: absent
register: output
@@ -109,6 +110,6 @@
always:
- name: Ensure all buckets are deleted
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: absent
ignore_errors: yes
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/simple.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/simple.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/simple.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/simple.yml 2021-11-12 18:13:53.000000000 +0000
@@ -1,12 +1,14 @@
---
- name: 'Run simple tests'
block:
+ - set_fact:
+ local_bucket_name: "{{ bucket_name | hash('md5')}}-simple"
# Note: s3_bucket doesn't support check_mode
# ============================================================
- name: 'Create a simple s3_bucket'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
register: output
@@ -14,14 +16,14 @@
that:
- output is success
- output is changed
- - output.name == '{{ bucket_name }}'
+ - output.name == '{{ local_bucket_name }}'
- not output.requester_pays
- output.public_access is undefined
# ============================================================
- name: 'Try to update the simple bucket with the same values'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
register: output
@@ -29,13 +31,13 @@
that:
- output is success
- output is not changed
- - output.name == '{{ bucket_name }}'
+ - output.name == '{{ local_bucket_name }}'
- not output.requester_pays
# ============================================================
- name: 'Delete the simple s3_bucket'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: absent
register: output
@@ -47,7 +49,7 @@
# ============================================================
- name: 'Re-delete the simple s3_bucket (idempotency)'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: absent
register: output
@@ -60,6 +62,6 @@
always:
- name: 'Ensure all buckets are deleted'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: absent
ignore_errors: yes
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/tags.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/tags.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/tags.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/tasks/tags.yml 2021-11-12 18:13:53.000000000 +0000
@@ -1,24 +1,25 @@
---
- name: 'Run tagging tests'
block:
-
+ - set_fact:
+ local_bucket_name: "{{ bucket_name | hash('md5')}}-tags"
# ============================================================
- name: 'Create simple s3_bucket for testing tagging'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
register: output
- assert:
that:
- output.changed
- - output.name == '{{ bucket_name }}'
+ - output.name == '{{ local_bucket_name }}'
# ============================================================
- name: 'Add tags to s3 bucket'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
tags:
example: tag1
@@ -28,13 +29,13 @@
- assert:
that:
- output.changed
- - output.name == '{{ bucket_name }}'
+ - output.name == '{{ local_bucket_name }}'
- output.tags.example == 'tag1'
- output.tags.another == 'tag2'
- name: 'Re-Add tags to s3 bucket'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
tags:
example: tag1
@@ -44,7 +45,7 @@
- assert:
that:
- output is not changed
- - output.name == '{{ bucket_name }}'
+ - output.name == '{{ local_bucket_name }}'
- output.tags.example == 'tag1'
- output.tags.another == 'tag2'
@@ -52,7 +53,7 @@
- name: Remove a tag from an s3_bucket
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
tags:
example: tag1
@@ -61,13 +62,13 @@
- assert:
that:
- output.changed
- - output.name == '{{ bucket_name }}'
+ - output.name == '{{ local_bucket_name }}'
- output.tags.example == 'tag1'
- "'another' not in output.tags"
- name: Re-remove the tag from an s3_bucket
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
tags:
example: tag1
@@ -76,7 +77,7 @@
- assert:
that:
- output is not changed
- - output.name == '{{ bucket_name }}'
+ - output.name == '{{ local_bucket_name }}'
- output.tags.example == 'tag1'
- "'another' not in output.tags"
@@ -91,7 +92,7 @@
- name: 'Add a tag for s3_bucket with purge_tags False'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
purge_tags: no
tags:
@@ -101,13 +102,13 @@
- assert:
that:
- output.changed
- - output.name == '{{ bucket_name }}'
+ - output.name == '{{ local_bucket_name }}'
- output.tags.example == 'tag1'
- output.tags.anewtag == 'here'
- name: 'Re-add a tag for s3_bucket with purge_tags False'
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
purge_tags: no
tags:
@@ -117,7 +118,7 @@
- assert:
that:
- output is not changed
- - output.name == '{{ bucket_name }}'
+ - output.name == '{{ local_bucket_name }}'
- output.tags.example == 'tag1'
- output.tags.anewtag == 'here'
@@ -132,7 +133,7 @@
- name: Update a tag for s3_bucket with purge_tags False
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
purge_tags: no
tags:
@@ -142,13 +143,13 @@
- assert:
that:
- output.changed
- - output.name == '{{ bucket_name }}'
+ - output.name == '{{ local_bucket_name }}'
- output.tags.example == 'tag1'
- output.tags.anewtag == 'next'
- name: Re-update a tag for s3_bucket with purge_tags False
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
purge_tags: no
tags:
@@ -158,7 +159,7 @@
- assert:
that:
- output is not changed
- - output.name == '{{ bucket_name }}'
+ - output.name == '{{ local_bucket_name }}'
- output.tags.example == 'tag1'
- output.tags.anewtag == 'next'
@@ -173,7 +174,7 @@
- name: Pass empty tags dict for s3_bucket with purge_tags False
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
purge_tags: no
tags: {}
@@ -182,7 +183,7 @@
- assert:
that:
- output is not changed
- - output.name == '{{ bucket_name }}'
+ - output.name == '{{ local_bucket_name }}'
- output.tags.example == 'tag1'
- output.tags.anewtag == 'next'
@@ -197,21 +198,21 @@
- name: Do not specify any tag to ensure previous tags are not removed
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
register: output
- assert:
that:
- not output.changed
- - output.name == '{{ bucket_name }}'
+ - output.name == '{{ local_bucket_name }}'
- output.tags.example == 'tag1'
# ============================================================
- name: Remove all tags
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
tags: {}
register: output
@@ -219,12 +220,12 @@
- assert:
that:
- output.changed
- - output.name == '{{ bucket_name }}'
+ - output.name == '{{ local_bucket_name }}'
- output.tags == {}
- name: Re-remove all tags
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: present
tags: {}
register: output
@@ -232,14 +233,14 @@
- assert:
that:
- output is not changed
- - output.name == '{{ bucket_name }}'
+ - output.name == '{{ local_bucket_name }}'
- output.tags == {}
# ============================================================
- name: Delete bucket
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: absent
register: output
@@ -251,6 +252,6 @@
always:
- name: Ensure all buckets are deleted
s3_bucket:
- name: '{{ bucket_name }}'
+ name: '{{ local_bucket_name }}'
state: absent
ignore_errors: yes
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/templates/policy.json ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/templates/policy.json
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/templates/policy.json 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/templates/policy.json 2021-11-12 18:13:53.000000000 +0000
@@ -6,7 +6,7 @@
"Effect":"Allow",
"Principal": "*",
"Action":["s3:GetObject"],
- "Resource":["arn:aws:s3:::{{bucket_name}}/*"]
+ "Resource":["arn:aws:s3:::{{local_bucket_name}}/*"]
}
]
}
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/templates/policy-updated.json ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/templates/policy-updated.json
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/templates/policy-updated.json 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/s3_bucket/roles/s3_bucket/templates/policy-updated.json 2021-11-12 18:13:53.000000000 +0000
@@ -6,7 +6,7 @@
"Effect":"Deny",
"Principal": {"AWS": "*"},
"Action":["s3:GetObject"],
- "Resource":["arn:aws:s3:::{{bucket_name}}/*"]
+ "Resource":["arn:aws:s3:::{{local_bucket_name}}/*"]
}
]
}
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/setup_botocore_pip/defaults/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/setup_botocore_pip/defaults/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/setup_botocore_pip/defaults/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/setup_botocore_pip/defaults/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,2 @@
+default_botocore_version: '1.18.0'
+default_boto3_version: '1.15.0'
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/setup_botocore_pip/handlers/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/setup_botocore_pip/handlers/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/setup_botocore_pip/handlers/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/setup_botocore_pip/handlers/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,2 @@
+- name: 'Delete temporary pip environment'
+ include_tasks: cleanup.yml
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/setup_botocore_pip/tasks/cleanup.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/setup_botocore_pip/tasks/cleanup.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/setup_botocore_pip/tasks/cleanup.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/setup_botocore_pip/tasks/cleanup.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,5 @@
+- name: 'Delete temporary pip environment'
+ file:
+ path: "{{ botocore_pip_directory }}"
+ state: absent
+ no_log: yes
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/setup_botocore_pip/tasks/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/setup_botocore_pip/tasks/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/setup_botocore_pip/tasks/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/setup_botocore_pip/tasks/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,42 @@
+- name: 'Ensure that we have virtualenv available to us'
+ pip:
+ name: virtualenv
+
+- name: 'Create temporary directory for pip environment'
+ tempfile:
+ state: directory
+ prefix: botocore
+ suffix: .test
+ register: botocore_pip_directory
+ notify:
+ - 'Delete temporary pip environment'
+
+- name: 'Record temporary directory'
+ set_fact:
+ botocore_pip_directory: "{{ botocore_pip_directory.path }}"
+
+- set_fact:
+ botocore_virtualenv: "{{ botocore_pip_directory }}/virtualenv"
+ botocore_virtualenv_command: "{{ ansible_python_interpreter }} -m virtualenv"
+
+- set_fact:
+ botocore_virtualenv_interpreter: "{{ botocore_virtualenv }}/bin/python"
+
+- pip:
+ name:
+ - 'boto3{{ _boto3_comparison }}{{ _boto3_version }}'
+ - 'botocore{{ _botocore_comparison }}{{ _botocore_version }}'
+ - 'coverage<5'
+ virtualenv: "{{ botocore_virtualenv }}"
+ virtualenv_command: "{{ botocore_virtualenv_command }}"
+ virtualenv_site_packages: no
+ vars:
+ _boto3_version: '{{ boto3_version | default(default_boto3_version) }}'
+ _botocore_version: '{{ botocore_version | default(default_botocore_version) }}'
+ _is_default_boto3: '{{ _boto3_version == default_boto3_version }}'
+ _is_default_botocore: '{{ _botocore_version == default_botocore_version }}'
+ # Only set the default to >= if the other dep has been updated and the dep has not been set
+ _default_boto3_comparison: '{% if _is_default_boto3 and not _is_default_botocore %}>={% else %}=={% endif %}'
+ _default_botocore_comparison: '{% if _is_default_botocore and not _is_default_boto3 %}>={% else %}=={% endif %}'
+ _boto3_comparison: '{{ boto3_comparison | default(_default_boto3_comparison) }}'
+ _botocore_comparison: '{{ botocore_comparison | default(_default_botocore_comparison) }}'
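The comparison-operator defaults above encode a small decision: pin with `==` unless this dependency is still at its default while the sibling dependency was bumped, in which case relax to `>=` so pip can resolve a mutually compatible release. A rough Python rendering of that Jinja2 logic (hypothetical helper, written for illustration only):

```python
def pick_comparison(version, default_version, other_is_default, override=None):
    # An explicit boto3_comparison / botocore_comparison always wins.
    if override is not None:
        return override
    is_default = version == default_version
    # Relax to >= only when this dep is untouched but its sibling was
    # updated; otherwise pin the exact requested (or default) version.
    if is_default and not other_is_default:
        return ">="
    return "=="
```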
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/setup_ec2_facts/defaults/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/setup_ec2_facts/defaults/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/setup_ec2_facts/defaults/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/setup_ec2_facts/defaults/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,3 @@
+ec2_ami_name: 'Fedora-Cloud-Base-*.x86_64*'
+ec2_ami_owner_id: '125523088429'
+ec2_ami_ssh_user: 'fedora'
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/setup_ec2_facts/tasks/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/setup_ec2_facts/tasks/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/setup_ec2_facts/tasks/main.yml 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/setup_ec2_facts/tasks/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,53 @@
+---
+# Setup a couple of common facts about the AWS Region
+#
+# Information about availability zones
+# - ec2_availability_zone_names
+#
+# An EC2 AMI that can be used for spinning up instances. We search for the
+# AMI rather than hardcoding IDs so we're not limited to specific Regions
+# - ec2_ami_id
+#
+- module_defaults:
+ group/aws:
+ aws_access_key: '{{ aws_access_key }}'
+ aws_secret_key: '{{ aws_secret_key }}'
+ security_token: '{{ security_token | default(omit) }}'
+ region: '{{ aws_region }}'
+
+ run_once: True
+ block:
+ # ============================================================
+
+ - name: Get available AZs
+ aws_az_info:
+ filters:
+ region-name: '{{ aws_region }}'
+ register: _az_info
+
+ - name: Pick an AZ
+ set_fact:
+ ec2_availability_zone_names: '{{ _az_info.availability_zones | selectattr("zone_name", "defined") | map(attribute="zone_name") | list }}'
+
+ # ============================================================
+
+ - name: Get a list of images
+ ec2_ami_info:
+ filters:
+ name: '{{ ec2_ami_name }}'
+ owner-id: '{{ ec2_ami_owner_id }}'
+ architecture: x86_64
+ virtualization-type: hvm
+ root-device-type: ebs
+ register: _images_info
+ # Very spammy
+ no_log: True
+
+ - name: Set Fact for latest AMI
+ vars:
+ latest_image: '{{ _images_info.images | sort(attribute="creation_date") | reverse | first }}'
+ set_fact:
+ ec2_ami_id: '{{ latest_image.image_id }}'
+ ec2_ami_details: '{{ latest_image }}'
+ ec2_ami_root_disk: '{{ latest_image.block_device_mappings[0].device_name }}'
+ ec2_ami_ssh_user: '{{ ec2_ami_ssh_user }}'
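The `sort(attribute="creation_date") | reverse | first` chain above selects the most recently created AMI from the search results. The same selection in plain Python terms (sketch with made-up sample records, assuming `creation_date` is the ISO-8601 string the EC2 API returns):

```python
def latest_image(images):
    # Jinja2: images | sort(attribute="creation_date") | reverse | first
    # ISO-8601 timestamps sort lexicographically in chronological order,
    # so plain string comparison suffices.
    return sorted(images, key=lambda image: image["creation_date"])[-1]

images = [
    {"image_id": "ami-old", "creation_date": "2020-05-01T00:00:00.000Z"},
    {"image_id": "ami-new", "creation_date": "2021-09-01T00:00:00.000Z"},
]
```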
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/setup_sshkey/files/ec2-fingerprint.py ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/setup_sshkey/files/ec2-fingerprint.py
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/setup_sshkey/files/ec2-fingerprint.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/setup_sshkey/files/ec2-fingerprint.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,33 @@
+#!/usr/bin/env python
+"""
+Reads an OpenSSH Public key and spits out the 'AWS' MD5 sum
+The equivalent of
+
+ssh-keygen -f id_rsa.pub -e -m PKCS8 | openssl pkey -pubin -outform DER | openssl md5 -c | cut -f 2 -d ' '
+
+(but without needing the OpenSSL CLI)
+"""
+
+from __future__ import absolute_import, division, print_function
+__metaclass__ = type
+
+import hashlib
+import sys
+from Crypto.PublicKey import RSA
+
+if len(sys.argv) < 2:
+ ssh_public_key = "id_rsa.pub"
+else:
+ ssh_public_key = sys.argv[1]
+
+with open(ssh_public_key, 'r') as key_fh:
+ data = key_fh.read()
+
+# Convert from SSH format to DER format
+public_key = RSA.importKey(data).exportKey('DER')
+md5digest = hashlib.md5(public_key).hexdigest()
+# Format the md5sum into the normal format
+pairs = zip(md5digest[::2], md5digest[1::2])
+md5string = ":".join(["".join(pair) for pair in pairs])
+
+print(md5string)
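The digest-to-fingerprint formatting at the end of the script can be isolated as a small helper; a sketch assuming only the standard library (the SSH-to-DER conversion via `Crypto.PublicKey.RSA` is elided here):

```python
import hashlib

def aws_md5_fingerprint(der_bytes):
    # MD5 the DER-encoded public key, then join the hex digest into
    # colon-separated byte pairs, e.g. "d4:1d:8c:...".
    md5digest = hashlib.md5(der_bytes).hexdigest()
    pairs = zip(md5digest[::2], md5digest[1::2])
    return ":".join("".join(pair) for pair in pairs)
```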
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/setup_sshkey/tasks/main.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/setup_sshkey/tasks/main.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/integration/targets/setup_sshkey/tasks/main.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/integration/targets/setup_sshkey/tasks/main.yml 2021-11-12 18:13:53.000000000 +0000
@@ -15,41 +15,57 @@
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <https://www.gnu.org/licenses/>.
-- name: create a temp file
+- name: create a temp dir
tempfile:
- state: file
- register: sshkey_file
+ state: directory
+ register: sshkey_dir
tags:
- prepare
+- name: ensure script is available
+ copy:
+ src: ec2-fingerprint.py
+ dest: '{{ sshkey_dir.path }}/ec2-fingerprint.py'
+ mode: 0700
+ tags:
+ - prepare
+
+- name: Set location of SSH keys
+ set_fact:
+ sshkey: '{{ sshkey_dir.path }}/key_one'
+ another_sshkey: '{{ sshkey_dir.path }}/key_two'
+ sshkey_pub: '{{ sshkey_dir.path }}/key_one.pub'
+ another_sshkey_pub: '{{ sshkey_dir.path }}/key_two.pub'
+
- name: generate sshkey
- shell: echo 'y' | ssh-keygen -P '' -f {{ sshkey_file.path }}
+ shell: echo 'y' | ssh-keygen -P '' -f '{{ sshkey }}'
tags:
- prepare
-- name: create another temp file
- tempfile:
- state: file
- register: another_sshkey_file
+- name: record fingerprint
+ shell: '{{ sshkey_dir.path }}/ec2-fingerprint.py {{ sshkey_pub }}'
+ register: fingerprint
tags:
- prepare
- name: generate another_sshkey
- shell: echo 'y' | ssh-keygen -P '' -f {{ another_sshkey_file.path }}
+ shell: echo 'y' | ssh-keygen -P '' -f {{ another_sshkey }}
tags:
- prepare
-- name: record fingerprint
- shell: openssl rsa -in {{ sshkey_file.path }} -pubout -outform DER 2>/dev/null | openssl md5 -c
- register: fingerprint
+- name: record another fingerprint
+ shell: '{{ sshkey_dir.path }}/ec2-fingerprint.py {{ another_sshkey_pub }}'
+ register: another_fingerprint
tags:
- prepare
- name: set facts for future roles
set_fact:
- sshkey: '{{ sshkey_file.path }}'
- key_material: "{{ lookup('file', sshkey_file.path ~ '.pub') }}"
- another_key_material: "{{ lookup('file', another_sshkey_file.path ~ '.pub') }}"
- fingerprint: '{{ fingerprint.stdout.split()[1] }}'
+ # Public SSH keys (OpenSSH format)
+ key_material: "{{ lookup('file', sshkey_pub) }}"
+ another_key_material: "{{ lookup('file', another_sshkey_pub) }}"
+ # AWS 'fingerprint' (md5digest)
+ fingerprint: '{{ fingerprint.stdout }}'
+ another_fingerprint: '{{ another_fingerprint.stdout }}'
tags:
- prepare
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/requirements.yml ansible-5.2.0/ansible_collections/amazon/aws/tests/requirements.yml
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/requirements.yml 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/requirements.yml 2021-11-12 18:13:53.000000000 +0000
@@ -1,4 +1,4 @@
integration_tests_dependencies:
- ansible.windows
-- community.general
+- ansible.netcommon # ipv6 filter
unit_tests_dependencies: []
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/sanity/ignore-2.10.txt ansible-5.2.0/ansible_collections/amazon/aws/tests/sanity/ignore-2.10.txt
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/sanity/ignore-2.10.txt 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/sanity/ignore-2.10.txt 2021-11-12 18:13:53.000000000 +0000
@@ -1,6 +1,2 @@
plugins/modules/ec2_tag.py validate-modules:parameter-state-invalid-choice # Deprecated choice that can't be removed until 2022
plugins/modules/ec2_vol.py validate-modules:parameter-state-invalid-choice # Deprecated choice that can't be removed until 2022
-plugins/module_utils/compat/_ipaddress.py no-assert # Vendored library
-plugins/module_utils/compat/_ipaddress.py no-unicode-literals # Vendored library
-tests/utils/shippable/check_matrix.py replace-urlopen # Standalone script used as part of testing
-tests/utils/shippable/timing.py shebang # Standalone script used as part of testing
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/sanity/ignore-2.11.txt ansible-5.2.0/ansible_collections/amazon/aws/tests/sanity/ignore-2.11.txt
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/sanity/ignore-2.11.txt 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/sanity/ignore-2.11.txt 2021-11-12 18:13:53.000000000 +0000
@@ -1,7 +1,2 @@
plugins/modules/ec2_tag.py validate-modules:parameter-state-invalid-choice # Deprecated choice that can't be removed until 2022
plugins/modules/ec2_vol.py validate-modules:parameter-state-invalid-choice # Deprecated choice that can't be removed until 2022
-plugins/module_utils/compat/_ipaddress.py no-assert # Vendored library
-plugins/module_utils/compat/_ipaddress.py no-unicode-literals # Vendored library
-plugins/module_utils/core.py pylint:property-with-parameters # Breaking change required to fix - https://github.com/ansible-collections/amazon.aws/pull/290
-tests/utils/shippable/check_matrix.py replace-urlopen # Standalone script used as part of testing
-tests/utils/shippable/timing.py shebang # Standalone script used as part of testing
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/sanity/ignore-2.12.txt ansible-5.2.0/ansible_collections/amazon/aws/tests/sanity/ignore-2.12.txt
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/sanity/ignore-2.12.txt 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/sanity/ignore-2.12.txt 2021-11-12 18:13:53.000000000 +0000
@@ -1,9 +1,2 @@
-plugins/inventory/aws_ec2.py pylint:use-a-generator # (new test) Should be an easy fix but not worth blocking gating
-plugins/modules/ec2_group.py pylint:use-a-generator # (new test) Should be an easy fix but not worth blocking gating
plugins/modules/ec2_tag.py validate-modules:parameter-state-invalid-choice # Deprecated choice that can't be removed until 2022
plugins/modules/ec2_vol.py validate-modules:parameter-state-invalid-choice # Deprecated choice that can't be removed until 2022
-plugins/module_utils/compat/_ipaddress.py no-assert # Vendored library
-plugins/module_utils/compat/_ipaddress.py no-unicode-literals # Vendored library
-plugins/module_utils/core.py pylint:property-with-parameters # Breaking change required to fix - https://github.com/ansible-collections/amazon.aws/pull/290
-tests/utils/shippable/check_matrix.py replace-urlopen # Standalone script used as part of testing
-tests/utils/shippable/timing.py shebang # Standalone script used as part of testing
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/sanity/ignore-2.13.txt ansible-5.2.0/ansible_collections/amazon/aws/tests/sanity/ignore-2.13.txt
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/sanity/ignore-2.13.txt 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/sanity/ignore-2.13.txt 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,2 @@
+plugins/modules/ec2_tag.py validate-modules:parameter-state-invalid-choice # Deprecated choice that can't be removed until 2022
+plugins/modules/ec2_vol.py validate-modules:parameter-state-invalid-choice # Deprecated choice that can't be removed until 2022
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/sanity/ignore-2.9.txt ansible-5.2.0/ansible_collections/amazon/aws/tests/sanity/ignore-2.9.txt
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/sanity/ignore-2.9.txt 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/sanity/ignore-2.9.txt 2021-11-12 18:13:53.000000000 +0000
@@ -1,19 +1,24 @@
+plugins/module_utils/ec2.py pylint:ansible-deprecated-no-version # We use dates for deprecations, Ansible 2.9 only supports this for compatibility
plugins/modules/aws_az_info.py pylint:ansible-deprecated-no-version # We use dates for deprecations, Ansible 2.9 only supports this for compatibility
plugins/modules/aws_caller_info.py pylint:ansible-deprecated-no-version # We use dates for deprecations, Ansible 2.9 only supports this for compatibility
plugins/modules/cloudformation_info.py pylint:ansible-deprecated-no-version # We use dates for deprecations, Ansible 2.9 only supports this for compatibility
+plugins/modules/ec2.py pylint:ansible-deprecated-no-version # We use dates for deprecations, Ansible 2.9 only supports this for compatibility
+plugins/modules/ec2.py validate-modules:deprecation-mismatch # Ansible 2.9 docs don't support deprecation properly
+plugins/modules/ec2.py validate-modules:invalid-documentation # Ansible 2.9 docs don't support deprecation properly
plugins/modules/ec2_ami_info.py pylint:ansible-deprecated-no-version # We use dates for deprecations, Ansible 2.9 only supports this for compatibility
plugins/modules/ec2_eni_info.py pylint:ansible-deprecated-no-version # We use dates for deprecations, Ansible 2.9 only supports this for compatibility
plugins/modules/ec2_group_info.py pylint:ansible-deprecated-no-version # We use dates for deprecations, Ansible 2.9 only supports this for compatibility
-plugins/modules/ec2.py pylint:ansible-deprecated-no-version # We use dates for deprecations, Ansible 2.9 only supports this for compatibility
+plugins/modules/ec2_instance_info.py pylint:ansible-deprecated-no-version # We use dates for deprecations, Ansible 2.9 only supports this for compatibility
plugins/modules/ec2_snapshot_info.py pylint:ansible-deprecated-no-version # We use dates for deprecations, Ansible 2.9 only supports this for compatibility
plugins/modules/ec2_tag.py pylint:ansible-deprecated-no-version # We use dates for deprecations, Ansible 2.9 only supports this for compatibility
-plugins/modules/ec2_vol_info.py pylint:ansible-deprecated-no-version # We use dates for deprecations, Ansible 2.9 only supports this for compatibility
plugins/modules/ec2_vol.py pylint:ansible-deprecated-no-version # We use dates for deprecations, Ansible 2.9 only supports this for compatibility
+plugins/modules/ec2_vol_info.py pylint:ansible-deprecated-no-version # We use dates for deprecations, Ansible 2.9 only supports this for compatibility
+plugins/modules/ec2_vpc_dhcp_option.py pylint:ansible-deprecated-no-version # We use dates for deprecations, Ansible 2.9 only supports this for compatibility
plugins/modules/ec2_vpc_dhcp_option_info.py pylint:ansible-deprecated-no-version # We use dates for deprecations, Ansible 2.9 only supports this for compatibility
+plugins/modules/ec2_vpc_igw_info.py pylint:ansible-deprecated-no-version # We use dates for deprecations, Ansible 2.9 only supports this for compatibility
plugins/modules/ec2_vpc_net_info.py pylint:ansible-deprecated-no-version # We use dates for deprecations, Ansible 2.9 only supports this for compatibility
+plugins/modules/ec2_vpc_route_table_info.py pylint:ansible-deprecated-no-version # We use dates for deprecations, Ansible 2.9 only supports this for compatibility
plugins/modules/ec2_vpc_subnet_info.py pylint:ansible-deprecated-no-version # We use dates for deprecations, Ansible 2.9 only supports this for compatibility
-plugins/module_utils/compat/_ipaddress.py no-assert # Vendored library
-plugins/module_utils/compat/_ipaddress.py no-unicode-literals # Vendored library
-plugins/module_utils/ec2.py pylint:ansible-deprecated-no-version # We use dates for deprecations, Ansible 2.9 only supports this for compatibility
-tests/utils/shippable/check_matrix.py replace-urlopen # Standalone script used as part of testing
-tests/utils/shippable/timing.py shebang # Standalone script used as part of testing
+plugins/modules/ec2_vpc_endpoint.py pylint:ansible-deprecated-no-version
+plugins/modules/ec2_vpc_endpoint_info.py pylint:ansible-deprecated-no-version
+plugins/modules/ec2_vpc_nat_gateway_info.py pylint:ansible-deprecated-no-version
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/constraints.txt ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/constraints.txt
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/constraints.txt 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/constraints.txt 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,7 @@
+# Specifically run tests against the oldest versions that we support
+boto3==1.15.0
+botocore==1.18.0
+
+# AWS CLI has `botocore==` dependencies, provide the one that matches botocore
+# to avoid needing to download over a years worth of awscli wheels.
+awscli==1.18.141
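The constraints file above pins the oldest SDK versions the collection supports, so CI exercises the lower bound rather than whatever is newest. As an editor's illustration (not part of the collection), a pip-style pin list like this can be parsed in a few lines; the helper name `parse_pins` is hypothetical:

```python
def parse_pins(text):
    """Parse pip-style 'pkg==x.y.z' pins, ignoring comments and blanks."""
    pins = {}
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        name, _, version = line.partition('==')
        pins[name.strip()] = version.strip()
    return pins


CONSTRAINTS = """
# Specifically run tests against the oldest versions that we support
boto3==1.15.0
botocore==1.18.0
awscli==1.18.141
"""

print(parse_pins(CONSTRAINTS))
```

A harness could compare this mapping against the installed environment before running the unit tests.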
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/module_utils/core/ansible_aws_module/test_minimal_versions.py ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/module_utils/core/ansible_aws_module/test_minimal_versions.py
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/module_utils/core/ansible_aws_module/test_minimal_versions.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/module_utils/core/ansible_aws_module/test_minimal_versions.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,182 @@
+# (c) 2020 Red Hat Inc.
+#
+# This file is part of Ansible
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import (absolute_import, division, print_function)
+__metaclass__ = type
+
+import pytest
+import botocore
+import boto3
+import json
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+from pprint import pprint
+
+
+class TestMinimalVersions(object):
+ # ========================================================
+ # Prepare some data for use in our testing
+ # ========================================================
+ def setup_method(self):
+ self.MINIMAL_BOTO3 = '1.15.0'
+ self.MINIMAL_BOTOCORE = '1.18.0'
+ self.OLD_BOTO3 = '1.14.999'
+ self.OLD_BOTOCORE = '1.17.999'
+
+ # ========================================================
+ # Test we don't warn when using valid versions
+ # ========================================================
+ @pytest.mark.parametrize("stdin", [{}], indirect=["stdin"])
+ def test_no_warn(self, monkeypatch, stdin, capfd):
+ monkeypatch.setattr(botocore, "__version__", self.MINIMAL_BOTOCORE)
+ monkeypatch.setattr(boto3, "__version__", self.MINIMAL_BOTO3)
+
+ # Create a minimal module that we can call
+ module = AnsibleAWSModule(argument_spec=dict())
+
+ with pytest.raises(SystemExit) as e:
+ module.exit_json()
+
+ out, err = capfd.readouterr()
+ return_val = json.loads(out)
+
+ assert return_val.get("exception") is None
+ assert return_val.get("invocation") is not None
+ assert return_val.get("failed") is None
+ assert return_val.get("error") is None
+ assert return_val.get("warnings") is None
+
+ # ========================================================
+ # Test we don't warn when botocore/boto3 isn't required
+ # ========================================================
+ @pytest.mark.parametrize("stdin", [{}], indirect=["stdin"])
+ def test_no_check(self, monkeypatch, stdin, capfd):
+ monkeypatch.setattr(botocore, "__version__", self.OLD_BOTOCORE)
+ monkeypatch.setattr(boto3, "__version__", self.OLD_BOTO3)
+
+ # Create a minimal module that we can call
+ module = AnsibleAWSModule(argument_spec=dict(), check_boto3=False)
+
+ with pytest.raises(SystemExit) as e:
+ module.exit_json()
+
+ out, err = capfd.readouterr()
+ return_val = json.loads(out)
+
+ assert return_val.get("exception") is None
+ assert return_val.get("invocation") is not None
+ assert return_val.get("failed") is None
+ assert return_val.get("error") is None
+ assert return_val.get("warnings") is None
+
+ # ========================================================
+ # Test we warn when using an old version of boto3
+ # ========================================================
+ @pytest.mark.parametrize("stdin", [{}], indirect=["stdin"])
+ def test_warn_boto3(self, monkeypatch, stdin, capfd):
+ monkeypatch.setattr(botocore, "__version__", self.MINIMAL_BOTOCORE)
+ monkeypatch.setattr(boto3, "__version__", self.OLD_BOTO3)
+
+ # Create a minimal module that we can call
+ module = AnsibleAWSModule(argument_spec=dict())
+
+ with pytest.raises(SystemExit) as e:
+ module.exit_json()
+
+ out, err = capfd.readouterr()
+ return_val = json.loads(out)
+
+ pprint(out)
+ pprint(err)
+ pprint(return_val)
+
+ assert return_val.get("exception") is None
+ assert return_val.get("invocation") is not None
+ assert return_val.get("failed") is None
+ assert return_val.get("error") is None
+ assert return_val.get("warnings") is not None
+ warnings = return_val.get("warnings")
+ assert len(warnings) == 1
+ # Assert that we have a warning about the version but be
+ # relaxed about the exact message
+ assert 'boto3' in warnings[0]
+ assert self.MINIMAL_BOTO3 in warnings[0]
+
+ # ========================================================
+ # Test we warn when using an old version of botocore
+ # ========================================================
+ @pytest.mark.parametrize("stdin", [{}], indirect=["stdin"])
+ def test_warn_botocore(self, monkeypatch, stdin, capfd):
+ monkeypatch.setattr(botocore, "__version__", self.OLD_BOTOCORE)
+ monkeypatch.setattr(boto3, "__version__", self.MINIMAL_BOTO3)
+
+ # Create a minimal module that we can call
+ module = AnsibleAWSModule(argument_spec=dict())
+
+ with pytest.raises(SystemExit) as e:
+ module.exit_json()
+
+ out, err = capfd.readouterr()
+ return_val = json.loads(out)
+
+ pprint(out)
+ pprint(err)
+ pprint(return_val)
+
+ assert return_val.get("exception") is None
+ assert return_val.get("invocation") is not None
+ assert return_val.get("failed") is None
+ assert return_val.get("error") is None
+ assert return_val.get("warnings") is not None
+ warnings = return_val.get("warnings")
+ assert len(warnings) == 1
+ # Assert that we have a warning about the version but be
+ # relaxed about the exact message
+ assert 'botocore' in warnings[0]
+ assert self.MINIMAL_BOTOCORE in warnings[0]
+
+ # ========================================================
+ # Test we warn when using an old version of botocore and boto3
+ # ========================================================
+ @pytest.mark.parametrize("stdin", [{}], indirect=["stdin"])
+ def test_warn_boto3_and_botocore(self, monkeypatch, stdin, capfd):
+ monkeypatch.setattr(botocore, "__version__", "1.15.999")
+ monkeypatch.setattr(boto3, "__version__", "1.12.999")
+
+ # Create a minimal module that we can call
+ module = AnsibleAWSModule(argument_spec=dict())
+
+ with pytest.raises(SystemExit) as e:
+ module.exit_json()
+
+ out, err = capfd.readouterr()
+ return_val = json.loads(out)
+
+ pprint(out)
+ pprint(err)
+ pprint(return_val)
+
+ assert return_val.get("exception") is None
+ assert return_val.get("invocation") is not None
+ assert return_val.get("failed") is None
+ assert return_val.get("error") is None
+ assert return_val.get("warnings") is not None
+
+ warnings = return_val.get("warnings")
+ assert len(warnings) == 2
+
+ warning_dict = dict()
+ for warning in warnings:
+ if 'boto3' in warning:
+ warning_dict['boto3'] = warning
+ if 'botocore' in warning:
+ warning_dict['botocore'] = warning
+
+ # Assert that we have a warning about the version but be
+ # relaxed about the exact message
+ assert warning_dict.get('boto3') is not None
+ assert self.MINIMAL_BOTO3 in warning_dict.get('boto3')
+ assert warning_dict.get('botocore') is not None
+ assert self.MINIMAL_BOTOCORE in warning_dict.get('botocore')
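The new test file above checks that `AnsibleAWSModule` warns, without failing, when boto3 or botocore is older than the supported minimum, and emits one warning per outdated library. The pattern being exercised can be sketched as follows; this is an illustrative simplification, not the actual module_utils implementation:

```python
# Supported minimums, matching tests/unit/constraints.txt
MINIMAL_BOTO3 = '1.15.0'
MINIMAL_BOTOCORE = '1.18.0'


def _as_tuple(version):
    # Naive numeric split; sufficient for the plain x.y.z versions used here
    return tuple(int(part) for part in version.split('.'))


def version_warnings(boto3_version, botocore_version):
    """Collect one warning per library older than its supported minimum."""
    warnings = []
    if _as_tuple(boto3_version) < _as_tuple(MINIMAL_BOTO3):
        warnings.append('boto3 %s or later is required' % MINIMAL_BOTO3)
    if _as_tuple(botocore_version) < _as_tuple(MINIMAL_BOTOCORE):
        warnings.append('botocore %s or later is required' % MINIMAL_BOTOCORE)
    return warnings
```

This mirrors the four test cases: no warnings at the minimal versions, one warning when only boto3 (or only botocore) is old, and two when both are.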
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/module_utils/core/ansible_aws_module/test_require_at_least.py ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/module_utils/core/ansible_aws_module/test_require_at_least.py
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/module_utils/core/ansible_aws_module/test_require_at_least.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/module_utils/core/ansible_aws_module/test_require_at_least.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,211 @@
+# (c) 2021 Red Hat Inc.
+#
+# This file is part of Ansible
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import (absolute_import, division, print_function)
+__metaclass__ = type
+
+import pytest
+import botocore
+import boto3
+import json
+
+from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
+
+DUMMY_VERSION = '5.5.5.5'
+
+TEST_VERSIONS = [
+ ['1.1.1', '2.2.2', True],
+ ['1.1.1', '0.0.1', False],
+ ['9.9.9', '9.9.9', True],
+ ['9.9.9', '9.9.10', True],
+ ['9.9.9', '9.10.9', True],
+ ['9.9.9', '10.9.9', True],
+ ['9.9.9', '9.9.8', False],
+ ['9.9.9', '9.8.9', False],
+ ['9.9.9', '8.9.9', False],
+ ['10.10.10', '10.10.10', True],
+ ['10.10.10', '10.10.11', True],
+ ['10.10.10', '10.11.10', True],
+ ['10.10.10', '11.10.10', True],
+ ['10.10.10', '10.10.9', False],
+ ['10.10.10', '10.9.10', False],
+ ['10.10.10', '9.19.10', False],
+]
+
+
+class TestRequireAtLeast(object):
+ # ========================================================
+ # Prepare some data for use in our testing
+ # ========================================================
+ def setup_method(self):
+ pass
+
+ # ========================================================
+ # Test botocore_at_least
+ # ========================================================
+ @pytest.mark.parametrize("stdin, desired_version, compare_version, at_least", [({}, *d) for d in TEST_VERSIONS], indirect=["stdin"])
+ def test_botocore_at_least(self, monkeypatch, stdin, desired_version, compare_version, at_least, capfd):
+ monkeypatch.setattr(botocore, "__version__", compare_version)
+ # Set boto3 version to a known value (tests are on both sides) to make
+ # sure we're comparing the right library
+ monkeypatch.setattr(boto3, "__version__", DUMMY_VERSION)
+
+ # Create a minimal module that we can call
+ module = AnsibleAWSModule(argument_spec=dict())
+
+ assert at_least == module.botocore_at_least(desired_version)
+
+ # ========================================================
+ # Test boto3_at_least
+ # ========================================================
+ @pytest.mark.parametrize("stdin, desired_version, compare_version, at_least", [({}, *d) for d in TEST_VERSIONS], indirect=["stdin"])
+ def test_boto3_at_least(self, monkeypatch, stdin, desired_version, compare_version, at_least, capfd):
+ # Set botocore version to a known value (tests are on both sides) to make
+ # sure we're comparing the right library
+ monkeypatch.setattr(botocore, "__version__", DUMMY_VERSION)
+ monkeypatch.setattr(boto3, "__version__", compare_version)
+
+ # Create a minimal module that we can call
+ module = AnsibleAWSModule(argument_spec=dict())
+
+ assert at_least == module.boto3_at_least(desired_version)
+
+ # ========================================================
+ # Test require_botocore_at_least
+ # ========================================================
+ @pytest.mark.parametrize("stdin, desired_version, compare_version, at_least", [({}, *d) for d in TEST_VERSIONS], indirect=["stdin"])
+ def test_require_botocore_at_least(self, monkeypatch, stdin, desired_version, compare_version, at_least, capfd):
+ monkeypatch.setattr(botocore, "__version__", compare_version)
+ # Set boto3 version to a known value (tests are on both sides) to make
+ # sure we're comparing the right library
+ monkeypatch.setattr(boto3, "__version__", DUMMY_VERSION)
+
+ # Create a minimal module that we can call
+ module = AnsibleAWSModule(argument_spec=dict())
+
+ with pytest.raises(SystemExit) as e:
+ module.require_botocore_at_least(desired_version)
+ module.exit_json()
+
+ out, err = capfd.readouterr()
+ return_val = json.loads(out)
+
+ assert return_val.get("exception") is None
+ assert return_val.get("invocation") is not None
+ if at_least:
+ assert return_val.get("failed") is None
+ else:
+ assert return_val.get("failed")
+ # The message is generated by Ansible, don't test for an exact
+ # message
+ assert desired_version in return_val.get("msg")
+ assert "botocore" in return_val.get("msg")
+ assert return_val.get("boto3_version") == DUMMY_VERSION
+ assert return_val.get("botocore_version") == compare_version
+
+ # ========================================================
+ # Test require_boto3_at_least
+ # ========================================================
+ @pytest.mark.parametrize("stdin, desired_version, compare_version, at_least", [({}, *d) for d in TEST_VERSIONS], indirect=["stdin"])
+ def test_require_boto3_at_least(self, monkeypatch, stdin, desired_version, compare_version, at_least, capfd):
+ monkeypatch.setattr(botocore, "__version__", DUMMY_VERSION)
+ # Set boto3 version to a known value (tests are on both sides) to make
+ # sure we're comparing the right library
+ monkeypatch.setattr(boto3, "__version__", compare_version)
+
+ # Create a minimal module that we can call
+ module = AnsibleAWSModule(argument_spec=dict())
+
+ with pytest.raises(SystemExit) as e:
+ module.require_boto3_at_least(desired_version)
+ module.exit_json()
+
+ out, err = capfd.readouterr()
+ return_val = json.loads(out)
+
+ assert return_val.get("exception") is None
+ assert return_val.get("invocation") is not None
+ if at_least:
+ assert return_val.get("failed") is None
+ else:
+ assert return_val.get("failed")
+ # The message is generated by Ansible, don't test for an exact
+ # message
+ assert desired_version in return_val.get("msg")
+ assert "boto3" in return_val.get("msg")
+ assert return_val.get("botocore_version") == DUMMY_VERSION
+ assert return_val.get("boto3_version") == compare_version
+
+ # ========================================================
+ # Test require_botocore_at_least with reason
+ # ========================================================
+ @pytest.mark.parametrize("stdin, desired_version, compare_version, at_least", [({}, *d) for d in TEST_VERSIONS], indirect=["stdin"])
+ def test_require_botocore_at_least_with_reason(self, monkeypatch, stdin, desired_version, compare_version, at_least, capfd):
+ monkeypatch.setattr(botocore, "__version__", compare_version)
+ # Set boto3 version to a known value (tests are on both sides) to make
+ # sure we're comparing the right library
+ monkeypatch.setattr(boto3, "__version__", DUMMY_VERSION)
+
+ reason = 'testing in progress'
+
+ # Create a minimal module that we can call
+ module = AnsibleAWSModule(argument_spec=dict())
+
+ with pytest.raises(SystemExit) as e:
+ module.require_botocore_at_least(desired_version, reason=reason)
+ module.exit_json()
+
+ out, err = capfd.readouterr()
+ return_val = json.loads(out)
+
+ assert return_val.get("exception") is None
+ assert return_val.get("invocation") is not None
+ if at_least:
+ assert return_val.get("failed") is None
+ else:
+ assert return_val.get("failed")
+ # The message is generated by Ansible, don't test for an exact
+ # message
+ assert desired_version in return_val.get("msg")
+ assert " {0}".format(reason) in return_val.get("msg")
+ assert "botocore" in return_val.get("msg")
+ assert return_val.get("boto3_version") == DUMMY_VERSION
+ assert return_val.get("botocore_version") == compare_version
+
+ # ========================================================
+ # Test require_boto3_at_least with reason
+ # ========================================================
+ @pytest.mark.parametrize("stdin, desired_version, compare_version, at_least", [({}, *d) for d in TEST_VERSIONS], indirect=["stdin"])
+ def test_require_boto3_at_least_with_reason(self, monkeypatch, stdin, desired_version, compare_version, at_least, capfd):
+ monkeypatch.setattr(botocore, "__version__", DUMMY_VERSION)
+ # Set boto3 version to a known value (tests are on both sides) to make
+ # sure we're comparing the right library
+ monkeypatch.setattr(boto3, "__version__", compare_version)
+
+ reason = 'testing in progress'
+
+ # Create a minimal module that we can call
+ module = AnsibleAWSModule(argument_spec=dict())
+
+ with pytest.raises(SystemExit) as e:
+ module.require_boto3_at_least(desired_version, reason=reason)
+ module.exit_json()
+
+ out, err = capfd.readouterr()
+ return_val = json.loads(out)
+
+ assert return_val.get("exception") is None
+ assert return_val.get("invocation") is not None
+ if at_least:
+ assert return_val.get("failed") is None
+ else:
+ assert return_val.get("failed")
+ # The message is generated by Ansible, don't test for an exact
+ # message
+ assert desired_version in return_val.get("msg")
+ assert " {0}".format(reason) in return_val.get("msg")
+ assert "boto3" in return_val.get("msg")
+ assert return_val.get("botocore_version") == DUMMY_VERSION
+ assert return_val.get("boto3_version") == compare_version
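The `TEST_VERSIONS` table above encodes the key property of the `*_at_least` helpers: versions are compared component-by-component as integers, so `'9.9.10'` counts as at least `'9.9.9'` even though it sorts lower as a string. A minimal sketch of that comparison (a hypothetical helper, not the collection's code):

```python
def at_least(compare_version, desired_version):
    """Compare dotted versions numerically, component by component."""
    def parse(version):
        return tuple(int(part) for part in version.split('.'))
    return parse(compare_version) >= parse(desired_version)
```

Each `[desired, compare, expected]` row of `TEST_VERSIONS` should satisfy `at_least(compare, desired) == expected` under this scheme.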
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/module_utils/core/test_normalize_boto3_result.py ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/module_utils/core/test_normalize_boto3_result.py
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/module_utils/core/test_normalize_boto3_result.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/module_utils/core/test_normalize_boto3_result.py 2021-11-12 18:13:53.000000000 +0000
@@ -2,7 +2,6 @@
__metaclass__ = type
import pytest
-import datetime
from dateutil import parser as date_parser
from ansible_collections.amazon.aws.plugins.module_utils.core import normalize_boto3_result
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/module_utils/core/test_scrub_none_parameters.py ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/module_utils/core/test_scrub_none_parameters.py
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/module_utils/core/test_scrub_none_parameters.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/module_utils/core/test_scrub_none_parameters.py 2021-11-12 18:13:53.000000000 +0000
@@ -83,6 +83,6 @@
@pytest.mark.parametrize("input_params, output_params_no_descend, output_params_descend", scrub_none_test_data)
def test_scrub_none_parameters(input_params, output_params_no_descend, output_params_descend):
- assert scrub_none_parameters(input_params) == output_params_no_descend
+ assert scrub_none_parameters(input_params) == output_params_descend
assert scrub_none_parameters(input_params, descend_into_lists=False) == output_params_no_descend
assert scrub_none_parameters(input_params, descend_into_lists=True) == output_params_descend
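The test change above reflects that `scrub_none_parameters` now descends into lists by default. Its behaviour can be approximated with a short recursive function; this is a hedged sketch under that assumption, not the real module_utils helper:

```python
def scrub_none(params, descend_into_lists=True):
    """Drop dict entries whose value is None, recursing into nested dicts
    and, by default, into lists as well."""
    if isinstance(params, dict):
        return {key: scrub_none(value, descend_into_lists)
                for key, value in params.items()
                if value is not None}
    if descend_into_lists and isinstance(params, list):
        return [scrub_none(value, descend_into_lists) for value in params]
    return params
```

With `descend_into_lists=False`, dicts nested inside lists keep their `None` entries, which is the distinction the parametrized test asserts.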
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/module_utils/ec2/test_compare_policies.py ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/module_utils/ec2/test_compare_policies.py
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/module_utils/ec2/test_compare_policies.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/module_utils/ec2/test_compare_policies.py 1970-01-01 00:00:00.000000000 +0000
@@ -1,341 +0,0 @@
-# (c) 2017 Red Hat Inc.
-#
-# This file is part of Ansible
-# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
-
-from __future__ import (absolute_import, division, print_function)
-__metaclass__ = type
-
-import unittest
-
-from ansible_collections.amazon.aws.plugins.module_utils.ec2 import compare_policies
-
-
-class Ec2UtilsComparePolicies(unittest.TestCase):
-
- # ========================================================
- # Setup some initial data that we can use within our tests
- # ========================================================
- def setUp(self):
- # A pair of simple IAM Trust relationships using bools, the first a
- # native bool the second a quoted string
- self.bool_policy_bool = {
- 'Version': '2012-10-17',
- 'Statement': [
- {
- "Action": "sts:AssumeRole",
- "Condition": {
- "Bool": {"aws:MultiFactorAuthPresent": True}
- },
- "Effect": "Allow",
- "Principal": {"AWS": "arn:aws:iam::XXXXXXXXXXXX:root"},
- "Sid": "AssumeRoleWithBoolean"
- }
- ]
- }
-
- self.bool_policy_string = {
- 'Version': '2012-10-17',
- 'Statement': [
- {
- "Action": "sts:AssumeRole",
- "Condition": {
- "Bool": {"aws:MultiFactorAuthPresent": "true"}
- },
- "Effect": "Allow",
- "Principal": {"AWS": "arn:aws:iam::XXXXXXXXXXXX:root"},
- "Sid": "AssumeRoleWithBoolean"
- }
- ]
- }
-
- # A pair of simple bucket policies using numbers, the first a
- # native int the second a quoted string
- self.numeric_policy_number = {
- 'Version': '2012-10-17',
- 'Statement': [
- {
- "Action": "s3:ListBucket",
- "Condition": {
- "NumericLessThanEquals": {"s3:max-keys": 15}
- },
- "Effect": "Allow",
- "Resource": "arn:aws:s3:::examplebucket",
- "Sid": "s3ListBucketWithNumericLimit"
- }
- ]
- }
-
- self.numeric_policy_string = {
- 'Version': '2012-10-17',
- 'Statement': [
- {
- "Action": "s3:ListBucket",
- "Condition": {
- "NumericLessThanEquals": {"s3:max-keys": "15"}
- },
- "Effect": "Allow",
- "Resource": "arn:aws:s3:::examplebucket",
- "Sid": "s3ListBucketWithNumericLimit"
- }
- ]
- }
-
- self.small_policy_one = {
- 'Version': '2012-10-17',
- 'Statement': [
- {
- 'Action': 's3:PutObjectAcl',
- 'Sid': 'AddCannedAcl2',
- 'Resource': 'arn:aws:s3:::test_policy/*',
- 'Effect': 'Allow',
- 'Principal': {'AWS': ['arn:aws:iam::XXXXXXXXXXXX:user/username1', 'arn:aws:iam::XXXXXXXXXXXX:user/username2']}
- }
- ]
- }
-
- # The same as small_policy_one, except the single resource is in a list and the contents of Statement are jumbled
- self.small_policy_two = {
- 'Version': '2012-10-17',
- 'Statement': [
- {
- 'Effect': 'Allow',
- 'Action': 's3:PutObjectAcl',
- 'Principal': {'AWS': ['arn:aws:iam::XXXXXXXXXXXX:user/username1', 'arn:aws:iam::XXXXXXXXXXXX:user/username2']},
- 'Resource': ['arn:aws:s3:::test_policy/*'],
- 'Sid': 'AddCannedAcl2'
- }
- ]
- }
-
- self.version_policy_missing = {
- 'Statement': [
- {
- 'Action': 's3:PutObjectAcl',
- 'Sid': 'AddCannedAcl2',
- 'Resource': 'arn:aws:s3:::test_policy/*',
- 'Effect': 'Allow',
- 'Principal': {'AWS': ['arn:aws:iam::XXXXXXXXXXXX:user/username1', 'arn:aws:iam::XXXXXXXXXXXX:user/username2']}
- }
- ]
- }
-
- self.version_policy_old = {
- 'Version': '2008-10-17',
- 'Statement': [
- {
- 'Action': 's3:PutObjectAcl',
- 'Sid': 'AddCannedAcl2',
- 'Resource': 'arn:aws:s3:::test_policy/*',
- 'Effect': 'Allow',
- 'Principal': {'AWS': ['arn:aws:iam::XXXXXXXXXXXX:user/username1', 'arn:aws:iam::XXXXXXXXXXXX:user/username2']}
- }
- ]
- }
-
- self.version_policy_new = {
- 'Version': '2012-10-17',
- 'Statement': [
- {
- 'Action': 's3:PutObjectAcl',
- 'Sid': 'AddCannedAcl2',
- 'Resource': 'arn:aws:s3:::test_policy/*',
- 'Effect': 'Allow',
- 'Principal': {'AWS': ['arn:aws:iam::XXXXXXXXXXXX:user/username1', 'arn:aws:iam::XXXXXXXXXXXX:user/username2']}
- }
- ]
- }
-
- self.larger_policy_one = {
- "Version": "2012-10-17",
- "Statement": [
- {
- "Sid": "Test",
- "Effect": "Allow",
- "Principal": {
- "AWS": [
- "arn:aws:iam::XXXXXXXXXXXX:user/testuser1",
- "arn:aws:iam::XXXXXXXXXXXX:user/testuser2"
- ]
- },
- "Action": "s3:PutObjectAcl",
- "Resource": "arn:aws:s3:::test_policy/*"
- },
- {
- "Effect": "Allow",
- "Principal": {
- "AWS": "arn:aws:iam::XXXXXXXXXXXX:user/testuser2"
- },
- "Action": [
- "s3:PutObject",
- "s3:PutObjectAcl"
- ],
- "Resource": "arn:aws:s3:::test_policy/*"
- }
- ]
- }
-
- # The same as larger_policy_one, except having a list of length 1 and jumbled contents
- self.larger_policy_two = {
- "Version": "2012-10-17",
- "Statement": [
- {
- "Principal": {
- "AWS": ["arn:aws:iam::XXXXXXXXXXXX:user/testuser2"]
- },
- "Effect": "Allow",
- "Resource": "arn:aws:s3:::test_policy/*",
- "Action": [
- "s3:PutObject",
- "s3:PutObjectAcl"
- ]
- },
- {
- "Action": "s3:PutObjectAcl",
- "Principal": {
- "AWS": [
- "arn:aws:iam::XXXXXXXXXXXX:user/testuser1",
- "arn:aws:iam::XXXXXXXXXXXX:user/testuser2"
- ]
- },
- "Sid": "Test",
- "Resource": "arn:aws:s3:::test_policy/*",
- "Effect": "Allow"
- }
- ]
- }
-
- # Different than larger_policy_two: a different principal is given
- self.larger_policy_three = {
- "Version": "2012-10-17",
- "Statement": [
- {
- "Principal": {
- "AWS": ["arn:aws:iam::XXXXXXXXXXXX:user/testuser2"]
- },
- "Effect": "Allow",
- "Resource": "arn:aws:s3:::test_policy/*",
- "Action": [
- "s3:PutObject",
- "s3:PutObjectAcl"]
- },
- {
- "Action": "s3:PutObjectAcl",
- "Principal": {
- "AWS": [
- "arn:aws:iam::XXXXXXXXXXXX:user/testuser1",
- "arn:aws:iam::XXXXXXXXXXXX:user/testuser3"
- ]
- },
- "Sid": "Test",
- "Resource": "arn:aws:s3:::test_policy/*",
- "Effect": "Allow"
- }
- ]
- }
-
- # Minimal policy using wildcarded Principal
- self.wildcard_policy_one = {
- "Version": "2012-10-17",
- "Statement": [
- {
- "Principal": {
- "AWS": ["*"]
- },
- "Effect": "Allow",
- "Resource": "arn:aws:s3:::test_policy/*",
- "Action": [
- "s3:PutObject",
- "s3:PutObjectAcl"]
- }
- ]
- }
-
- # Minimal policy using wildcarded Principal
- self.wildcard_policy_two = {
- "Version": "2012-10-17",
- "Statement": [
- {
- "Principal": "*",
- "Effect": "Allow",
- "Resource": "arn:aws:s3:::test_policy/*",
- "Action": [
- "s3:PutObject",
- "s3:PutObjectAcl"]
- }
- ]
- }
-
- # ========================================================
- # ec2.compare_policies
- # ========================================================
-
- def test_compare_small_policies_without_differences(self):
- """ Testing two small policies which are identical except for:
- * The contents of the statement are in different orders
- * The second policy contains a list of length one whereas in the first it is a string
- """
- self.assertFalse(compare_policies(self.small_policy_one, self.small_policy_two))
-
- def test_compare_large_policies_without_differences(self):
- """ Testing two larger policies which are identical except for:
- * The statements are in different orders
- * The contents of the statements are also in different orders
- * The second contains a list of length one for the Principal whereas in the first it is a string
- """
- self.assertFalse(compare_policies(self.larger_policy_one, self.larger_policy_two))
-
- def test_compare_larger_policies_with_difference(self):
- """ Testing two larger policies which are identical except for:
- * one different principal
- """
- self.assertTrue(compare_policies(self.larger_policy_two, self.larger_policy_three))
-
- def test_compare_smaller_policy_with_larger(self):
- """ Testing two policies of different sizes """
- self.assertTrue(compare_policies(self.larger_policy_one, self.small_policy_one))
-
- def test_compare_boolean_policy_bool_and_string_are_equal(self):
- """ Testing two policies one using a quoted boolean, the other a bool """
- self.assertFalse(compare_policies(self.bool_policy_string, self.bool_policy_bool))
-
- def test_compare_numeric_policy_number_and_string_are_equal(self):
- """ Testing two policies one using a quoted number, the other an int """
- self.assertFalse(compare_policies(self.numeric_policy_string, self.numeric_policy_number))
-
- def test_compare_version_policies_defaults_old(self):
- """ Testing that a policy without Version is considered identical to one
- with the 'old' Version (by default)
- """
- self.assertFalse(compare_policies(self.version_policy_old, self.version_policy_missing))
- self.assertTrue(compare_policies(self.version_policy_new, self.version_policy_missing))
-
- def test_compare_version_policies_default_disabled(self):
- """ Testing that a policy without Version not considered identical when default_version=None
- """
- self.assertFalse(compare_policies(self.version_policy_missing, self.version_policy_missing, default_version=None))
- self.assertTrue(compare_policies(self.version_policy_old, self.version_policy_missing, default_version=None))
- self.assertTrue(compare_policies(self.version_policy_new, self.version_policy_missing, default_version=None))
-
- def test_compare_version_policies_default_set(self):
- """ Testing that a policy without Version is only considered identical
- when default_version="2008-10-17"
- """
- self.assertFalse(compare_policies(self.version_policy_missing, self.version_policy_missing, default_version="2012-10-17"))
- self.assertTrue(compare_policies(self.version_policy_old, self.version_policy_missing, default_version="2012-10-17"))
- self.assertFalse(compare_policies(self.version_policy_old, self.version_policy_missing, default_version="2008-10-17"))
- self.assertFalse(compare_policies(self.version_policy_new, self.version_policy_missing, default_version="2012-10-17"))
- self.assertTrue(compare_policies(self.version_policy_new, self.version_policy_missing, default_version="2008-10-17"))
-
- def test_compare_version_policies_with_none(self):
- """ Testing that comparing with no policy works
- """
- self.assertTrue(compare_policies(self.small_policy_one, None))
- self.assertTrue(compare_policies(None, self.small_policy_one))
- self.assertFalse(compare_policies(None, None))
-
- def test_compare_wildcard_policies_without_differences(self):
- """ Testing two small wildcard policies which are identical except for:
- * Principal: "*" vs Principal: ["AWS": "*"]
- """
- self.assertFalse(compare_policies(self.wildcard_policy_one, self.wildcard_policy_two))
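The test file above (relocated to `module_utils/policy` in 5.2.0) exercises `compare_policies`, which treats semantically equal policy documents as identical: statement order is ignored, quoted booleans and numbers match their native forms, and a single-element list matches a bare scalar. The normalisation idea can be sketched like this; it is an editor's illustration, not the actual amazon.aws implementation:

```python
import json


def canonicalize(node):
    """Normalise a policy fragment so semantically equal documents compare equal."""
    if isinstance(node, dict):
        return {key: canonicalize(value) for key, value in node.items()}
    if isinstance(node, list):
        if len(node) == 1:
            return canonicalize(node[0])  # ['arn:...'] equals bare 'arn:...'
        # Sort longer lists so statement order does not matter
        return sorted((canonicalize(value) for value in node),
                      key=lambda value: json.dumps(value, sort_keys=True))
    if isinstance(node, bool):  # check bool before int: bool subclasses int
        return str(node).lower()  # True -> "true"
    if isinstance(node, int):
        return str(node)  # 15 -> "15"
    return node


def policies_differ(policy_a, policy_b):
    return canonicalize(policy_a) != canonicalize(policy_b)
```

This reproduces the equivalences the bool, numeric, and wildcard test cases assert, while still distinguishing genuinely different principals or sizes.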
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/module_utils/policy/test_compare_policies.py ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/module_utils/policy/test_compare_policies.py
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/module_utils/policy/test_compare_policies.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/module_utils/policy/test_compare_policies.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,341 @@
+# (c) 2017 Red Hat Inc.
+#
+# This file is part of Ansible
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import (absolute_import, division, print_function)
+__metaclass__ = type
+
+import unittest
+
+from ansible_collections.amazon.aws.plugins.module_utils.policy import compare_policies
+
+
+class PolicyUtils(unittest.TestCase):
+
+ # ========================================================
+ # Setup some initial data that we can use within our tests
+ # ========================================================
+ def setUp(self):
+ # A pair of simple IAM Trust relationships using bools, the first a
+ # native bool the second a quoted string
+ self.bool_policy_bool = {
+ 'Version': '2012-10-17',
+ 'Statement': [
+ {
+ "Action": "sts:AssumeRole",
+ "Condition": {
+ "Bool": {"aws:MultiFactorAuthPresent": True}
+ },
+ "Effect": "Allow",
+ "Principal": {"AWS": "arn:aws:iam::XXXXXXXXXXXX:root"},
+ "Sid": "AssumeRoleWithBoolean"
+ }
+ ]
+ }
+
+ self.bool_policy_string = {
+ 'Version': '2012-10-17',
+ 'Statement': [
+ {
+ "Action": "sts:AssumeRole",
+ "Condition": {
+ "Bool": {"aws:MultiFactorAuthPresent": "true"}
+ },
+ "Effect": "Allow",
+ "Principal": {"AWS": "arn:aws:iam::XXXXXXXXXXXX:root"},
+ "Sid": "AssumeRoleWithBoolean"
+ }
+ ]
+ }
+
+ # A pair of simple bucket policies using numbers, the first a
+ # native int the second a quoted string
+ self.numeric_policy_number = {
+ 'Version': '2012-10-17',
+ 'Statement': [
+ {
+ "Action": "s3:ListBucket",
+ "Condition": {
+ "NumericLessThanEquals": {"s3:max-keys": 15}
+ },
+ "Effect": "Allow",
+ "Resource": "arn:aws:s3:::examplebucket",
+ "Sid": "s3ListBucketWithNumericLimit"
+ }
+ ]
+ }
+
+ self.numeric_policy_string = {
+ 'Version': '2012-10-17',
+ 'Statement': [
+ {
+ "Action": "s3:ListBucket",
+ "Condition": {
+ "NumericLessThanEquals": {"s3:max-keys": "15"}
+ },
+ "Effect": "Allow",
+ "Resource": "arn:aws:s3:::examplebucket",
+ "Sid": "s3ListBucketWithNumericLimit"
+ }
+ ]
+ }
+
+ self.small_policy_one = {
+ 'Version': '2012-10-17',
+ 'Statement': [
+ {
+ 'Action': 's3:PutObjectAcl',
+ 'Sid': 'AddCannedAcl2',
+ 'Resource': 'arn:aws:s3:::test_policy/*',
+ 'Effect': 'Allow',
+ 'Principal': {'AWS': ['arn:aws:iam::XXXXXXXXXXXX:user/username1', 'arn:aws:iam::XXXXXXXXXXXX:user/username2']}
+ }
+ ]
+ }
+
+ # The same as small_policy_one, except the single resource is in a list and the contents of Statement are jumbled
+ self.small_policy_two = {
+ 'Version': '2012-10-17',
+ 'Statement': [
+ {
+ 'Effect': 'Allow',
+ 'Action': 's3:PutObjectAcl',
+ 'Principal': {'AWS': ['arn:aws:iam::XXXXXXXXXXXX:user/username1', 'arn:aws:iam::XXXXXXXXXXXX:user/username2']},
+ 'Resource': ['arn:aws:s3:::test_policy/*'],
+ 'Sid': 'AddCannedAcl2'
+ }
+ ]
+ }
+
+ self.version_policy_missing = {
+ 'Statement': [
+ {
+ 'Action': 's3:PutObjectAcl',
+ 'Sid': 'AddCannedAcl2',
+ 'Resource': 'arn:aws:s3:::test_policy/*',
+ 'Effect': 'Allow',
+ 'Principal': {'AWS': ['arn:aws:iam::XXXXXXXXXXXX:user/username1', 'arn:aws:iam::XXXXXXXXXXXX:user/username2']}
+ }
+ ]
+ }
+
+ self.version_policy_old = {
+ 'Version': '2008-10-17',
+ 'Statement': [
+ {
+ 'Action': 's3:PutObjectAcl',
+ 'Sid': 'AddCannedAcl2',
+ 'Resource': 'arn:aws:s3:::test_policy/*',
+ 'Effect': 'Allow',
+ 'Principal': {'AWS': ['arn:aws:iam::XXXXXXXXXXXX:user/username1', 'arn:aws:iam::XXXXXXXXXXXX:user/username2']}
+ }
+ ]
+ }
+
+ self.version_policy_new = {
+ 'Version': '2012-10-17',
+ 'Statement': [
+ {
+ 'Action': 's3:PutObjectAcl',
+ 'Sid': 'AddCannedAcl2',
+ 'Resource': 'arn:aws:s3:::test_policy/*',
+ 'Effect': 'Allow',
+ 'Principal': {'AWS': ['arn:aws:iam::XXXXXXXXXXXX:user/username1', 'arn:aws:iam::XXXXXXXXXXXX:user/username2']}
+ }
+ ]
+ }
+
+ self.larger_policy_one = {
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Sid": "Test",
+ "Effect": "Allow",
+ "Principal": {
+ "AWS": [
+ "arn:aws:iam::XXXXXXXXXXXX:user/testuser1",
+ "arn:aws:iam::XXXXXXXXXXXX:user/testuser2"
+ ]
+ },
+ "Action": "s3:PutObjectAcl",
+ "Resource": "arn:aws:s3:::test_policy/*"
+ },
+ {
+ "Effect": "Allow",
+ "Principal": {
+ "AWS": "arn:aws:iam::XXXXXXXXXXXX:user/testuser2"
+ },
+ "Action": [
+ "s3:PutObject",
+ "s3:PutObjectAcl"
+ ],
+ "Resource": "arn:aws:s3:::test_policy/*"
+ }
+ ]
+ }
+
+ # The same as larger_policy_one, except having a list of length 1 and jumbled contents
+ self.larger_policy_two = {
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Principal": {
+ "AWS": ["arn:aws:iam::XXXXXXXXXXXX:user/testuser2"]
+ },
+ "Effect": "Allow",
+ "Resource": "arn:aws:s3:::test_policy/*",
+ "Action": [
+ "s3:PutObject",
+ "s3:PutObjectAcl"
+ ]
+ },
+ {
+ "Action": "s3:PutObjectAcl",
+ "Principal": {
+ "AWS": [
+ "arn:aws:iam::XXXXXXXXXXXX:user/testuser1",
+ "arn:aws:iam::XXXXXXXXXXXX:user/testuser2"
+ ]
+ },
+ "Sid": "Test",
+ "Resource": "arn:aws:s3:::test_policy/*",
+ "Effect": "Allow"
+ }
+ ]
+ }
+
+        # Different from larger_policy_two: a different principal is given
+ self.larger_policy_three = {
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Principal": {
+ "AWS": ["arn:aws:iam::XXXXXXXXXXXX:user/testuser2"]
+ },
+ "Effect": "Allow",
+ "Resource": "arn:aws:s3:::test_policy/*",
+ "Action": [
+ "s3:PutObject",
+ "s3:PutObjectAcl"]
+ },
+ {
+ "Action": "s3:PutObjectAcl",
+ "Principal": {
+ "AWS": [
+ "arn:aws:iam::XXXXXXXXXXXX:user/testuser1",
+ "arn:aws:iam::XXXXXXXXXXXX:user/testuser3"
+ ]
+ },
+ "Sid": "Test",
+ "Resource": "arn:aws:s3:::test_policy/*",
+ "Effect": "Allow"
+ }
+ ]
+ }
+
+ # Minimal policy using wildcarded Principal
+ self.wildcard_policy_one = {
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Principal": {
+ "AWS": ["*"]
+ },
+ "Effect": "Allow",
+ "Resource": "arn:aws:s3:::test_policy/*",
+ "Action": [
+ "s3:PutObject",
+ "s3:PutObjectAcl"]
+ }
+ ]
+ }
+
+ # Minimal policy using wildcarded Principal
+ self.wildcard_policy_two = {
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Principal": "*",
+ "Effect": "Allow",
+ "Resource": "arn:aws:s3:::test_policy/*",
+ "Action": [
+ "s3:PutObject",
+ "s3:PutObjectAcl"]
+ }
+ ]
+ }
+
+ # ========================================================
+ # ec2.compare_policies
+ # ========================================================
+
+ def test_compare_small_policies_without_differences(self):
+ """ Testing two small policies which are identical except for:
+ * The contents of the statement are in different orders
+ * The second policy contains a list of length one whereas in the first it is a string
+ """
+ self.assertFalse(compare_policies(self.small_policy_one, self.small_policy_two))
+
+ def test_compare_large_policies_without_differences(self):
+ """ Testing two larger policies which are identical except for:
+ * The statements are in different orders
+ * The contents of the statements are also in different orders
+ * The second contains a list of length one for the Principal whereas in the first it is a string
+ """
+ self.assertFalse(compare_policies(self.larger_policy_one, self.larger_policy_two))
+
+ def test_compare_larger_policies_with_difference(self):
+ """ Testing two larger policies which are identical except for:
+ * one different principal
+ """
+ self.assertTrue(compare_policies(self.larger_policy_two, self.larger_policy_three))
+
+ def test_compare_smaller_policy_with_larger(self):
+ """ Testing two policies of different sizes """
+ self.assertTrue(compare_policies(self.larger_policy_one, self.small_policy_one))
+
+ def test_compare_boolean_policy_bool_and_string_are_equal(self):
+ """ Testing two policies one using a quoted boolean, the other a bool """
+ self.assertFalse(compare_policies(self.bool_policy_string, self.bool_policy_bool))
+
+ def test_compare_numeric_policy_number_and_string_are_equal(self):
+ """ Testing two policies one using a quoted number, the other an int """
+ self.assertFalse(compare_policies(self.numeric_policy_string, self.numeric_policy_number))
+
+ def test_compare_version_policies_defaults_old(self):
+ """ Testing that a policy without Version is considered identical to one
+ with the 'old' Version (by default)
+ """
+ self.assertFalse(compare_policies(self.version_policy_old, self.version_policy_missing))
+ self.assertTrue(compare_policies(self.version_policy_new, self.version_policy_missing))
+
+ def test_compare_version_policies_default_disabled(self):
+        """ Testing that a policy without Version is not considered identical when default_version=None
+ """
+ self.assertFalse(compare_policies(self.version_policy_missing, self.version_policy_missing, default_version=None))
+ self.assertTrue(compare_policies(self.version_policy_old, self.version_policy_missing, default_version=None))
+ self.assertTrue(compare_policies(self.version_policy_new, self.version_policy_missing, default_version=None))
+
+ def test_compare_version_policies_default_set(self):
+ """ Testing that a policy without Version is only considered identical
+ when default_version="2008-10-17"
+ """
+ self.assertFalse(compare_policies(self.version_policy_missing, self.version_policy_missing, default_version="2012-10-17"))
+ self.assertTrue(compare_policies(self.version_policy_old, self.version_policy_missing, default_version="2012-10-17"))
+ self.assertFalse(compare_policies(self.version_policy_old, self.version_policy_missing, default_version="2008-10-17"))
+ self.assertFalse(compare_policies(self.version_policy_new, self.version_policy_missing, default_version="2012-10-17"))
+ self.assertTrue(compare_policies(self.version_policy_new, self.version_policy_missing, default_version="2008-10-17"))
+
+ def test_compare_version_policies_with_none(self):
+ """ Testing that comparing with no policy works
+ """
+ self.assertTrue(compare_policies(self.small_policy_one, None))
+ self.assertTrue(compare_policies(None, self.small_policy_one))
+ self.assertFalse(compare_policies(None, None))
+
+ def test_compare_wildcard_policies_without_differences(self):
+ """ Testing two small wildcard policies which are identical except for:
+        * Principal: "*" vs Principal: {"AWS": ["*"]}
+ """
+ self.assertFalse(compare_policies(self.wildcard_policy_one, self.wildcard_policy_two))
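The tests above pin down the equivalences that ``compare_policies`` must honour: statement order is irrelevant, a single-element list equals its bare element, booleans and numbers compare equal to their quoted forms, and a bare ``Principal: "*"`` matches ``{"AWS": "*"}``. A minimal sketch of such a normalization follows; the names ``_canonify`` and ``naive_compare_policies`` are illustrative only and are not the collection's actual implementation.

```python
def _canonify(value, key=None):
    """Recursively normalize a policy fragment for order-insensitive comparison."""
    # A bare "*" principal is shorthand for {"AWS": "*"} in IAM policies.
    if key in ("Principal", "NotPrincipal") and value == "*":
        value = {"AWS": "*"}
    if isinstance(value, dict):
        # Represent dicts as key-sorted tuples so jumbled statements compare equal.
        return tuple(sorted((k, _canonify(v, key=k)) for k, v in value.items()))
    if isinstance(value, list):
        items = sorted((_canonify(v) for v in value), key=repr)
        # A single-element list is equivalent to its bare element.
        return items[0] if len(items) == 1 else tuple(items)
    if isinstance(value, bool):
        # Native booleans and quoted "true"/"false" are treated the same.
        return str(value).lower()
    # Numbers and quoted numbers also compare equal once stringified.
    return str(value)


def naive_compare_policies(policy_a, policy_b):
    """Return True when the two policies differ (the convention the tests use)."""
    if policy_a is None and policy_b is None:
        return False
    if policy_a is None or policy_b is None:
        return True
    return _canonify(policy_a) != _canonify(policy_b)
```

Under this sketch the ``small_policy_one``/``small_policy_two`` and wildcard pairs above compare as unchanged, while differing principals report a change.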
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/module_utils/test_cloud.py ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/module_utils/test_cloud.py
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/module_utils/test_cloud.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/module_utils/test_cloud.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,263 @@
+# (c) 2021 Red Hat Inc.
+#
+# This file is part of Ansible
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import (absolute_import, division, print_function)
+__metaclass__ = type
+
+from ansible_collections.amazon.aws.plugins.module_utils.cloud import CloudRetry, BackoffIterator
+import unittest
+import random
+from datetime import datetime
+
+
+def test_backoff_value_generator():
+ max_delay = 60
+ initial = 3
+ backoff = 2
+
+ min_sleep = initial
+ counter = 0
+ for sleep in BackoffIterator(delay=initial, backoff=backoff, max_delay=max_delay):
+ if counter > 4:
+ assert sleep == max_delay
+ else:
+ assert sleep == min_sleep
+ min_sleep *= backoff
+ counter += 1
+ if counter == 10:
+ break
+
+
+def test_backoff_value_generator_with_jitter():
+ max_delay = 60
+ initial = 3
+ backoff = 2
+
+ min_sleep = initial
+ counter = 0
+ for sleep in BackoffIterator(delay=initial, backoff=backoff, max_delay=max_delay, jitter=True):
+ if counter > 4:
+ assert sleep <= max_delay
+ else:
+ assert sleep <= min_sleep
+ min_sleep *= backoff
+ counter += 1
+ if counter == 10:
+ break
+
+
+class CloudRetryUtils(unittest.TestCase):
+
+ error_codes = [400, 500, 600]
+ custom_error_codes = [100, 200, 300]
+
+ class TestException(Exception):
+ """
+ custom exception class for testing
+ """
+ def __init__(self, status):
+ self.status = status
+
+ def __str__(self):
+ return "TestException with status: {0}".format(self.status)
+
+ class UnitTestsRetry(CloudRetry):
+ base_class = Exception
+
+ @staticmethod
+ def status_code_from_exception(error):
+ return getattr(error, "status") if hasattr(error, "status") else None
+
+ class CustomRetry(CloudRetry):
+ base_class = Exception
+
+ @staticmethod
+ def status_code_from_exception(error):
+ return error.status['response']['status']
+
+ @staticmethod
+ def found(response_code, catch_extra_error_codes=None):
+ if catch_extra_error_codes:
+ return response_code in catch_extra_error_codes + CloudRetryUtils.custom_error_codes
+ else:
+ return response_code in CloudRetryUtils.custom_error_codes
+
+ class KeyRetry(CloudRetry):
+ base_class = KeyError
+
+ @staticmethod
+ def status_code_from_exception(error):
+ return True
+
+ @staticmethod
+ def found(response_code, catch_extra_error_codes=None):
+ return True
+
+ class KeyAndIndexRetry(CloudRetry):
+ base_class = (KeyError, IndexError)
+
+ @staticmethod
+ def status_code_from_exception(error):
+ return True
+
+ @staticmethod
+ def found(response_code, catch_extra_error_codes=None):
+ return True
+
+ # ========================================================
+ # Setup some initial data that we can use within our tests
+ # ========================================================
+ def setUp(self):
+ # nothing to do on setup stage
+ pass
+
+ # ========================================================
+ # retry exponential backoff
+ # ========================================================
+ def test_retry_exponential_backoff(self):
+
+ @CloudRetryUtils.UnitTestsRetry.exponential_backoff(retries=3, delay=1, backoff=1.1, max_delay=3, catch_extra_error_codes=CloudRetryUtils.error_codes)
+ def test_retry_func():
+ if test_retry_func.counter < 2:
+ test_retry_func.counter += 1
+ raise self.TestException(status=random.choice(CloudRetryUtils.error_codes))
+ else:
+ return True
+
+ test_retry_func.counter = 0
+ ret = test_retry_func()
+ assert ret is True
+
+ def test_retry_exponential_backoff_with_unexpected_exception(self):
+ unexpected_except = self.TestException(status=100)
+
+ @CloudRetryUtils.UnitTestsRetry.exponential_backoff(retries=3, delay=1, backoff=1.1, max_delay=3, catch_extra_error_codes=CloudRetryUtils.error_codes)
+ def test_retry_func():
+ if test_retry_func.counter == 0:
+ test_retry_func.counter += 1
+ raise self.TestException(status=random.choice(CloudRetryUtils.error_codes))
+ else:
+ raise unexpected_except
+
+ test_retry_func.counter = 0
+ try:
+ ret = test_retry_func()
+ except self.TestException as exc:
+ assert exc.status == unexpected_except.status
+
+ # ========================================================
+ # retry jittered backoff
+ # ========================================================
+ def test_retry_jitter_backoff(self):
+ @CloudRetryUtils.UnitTestsRetry.jittered_backoff(retries=3, delay=1, max_delay=3, catch_extra_error_codes=CloudRetryUtils.error_codes)
+ def test_retry_func():
+ if test_retry_func.counter < 2:
+ test_retry_func.counter += 1
+ raise self.TestException(status=random.choice(CloudRetryUtils.error_codes))
+ else:
+ return True
+
+ test_retry_func.counter = 0
+ ret = test_retry_func()
+ assert ret is True
+
+ def test_retry_jittered_backoff_with_unexpected_exception(self):
+ unexpected_except = self.TestException(status=100)
+
+ @CloudRetryUtils.UnitTestsRetry.jittered_backoff(retries=3, delay=1, max_delay=3, catch_extra_error_codes=CloudRetryUtils.error_codes)
+ def test_retry_func():
+ if test_retry_func.counter == 0:
+ test_retry_func.counter += 1
+ raise self.TestException(status=random.choice(CloudRetryUtils.error_codes))
+ else:
+ raise unexpected_except
+
+ test_retry_func.counter = 0
+ try:
+ ret = test_retry_func()
+ except self.TestException as exc:
+ assert exc.status == unexpected_except.status
+
+ # ========================================================
+ # retry with custom class
+ # ========================================================
+ def test_retry_exponential_backoff_custom_class(self):
+ def build_response():
+ return dict(response=dict(status=random.choice(CloudRetryUtils.custom_error_codes)))
+
+ @self.CustomRetry.exponential_backoff(retries=3, delay=1, backoff=1.1, max_delay=3, catch_extra_error_codes=CloudRetryUtils.error_codes)
+ def test_retry_func():
+ if test_retry_func.counter < 2:
+ test_retry_func.counter += 1
+ raise self.TestException(build_response())
+ else:
+ return True
+
+ test_retry_func.counter = 0
+
+ ret = test_retry_func()
+ assert ret is True
+
+ # =============================================================
+ # Test wrapped function multiple times will restart the sleep
+ # =============================================================
+ def test_wrapped_function_called_several_times(self):
+ @CloudRetryUtils.UnitTestsRetry.exponential_backoff(retries=2, delay=2, backoff=4, max_delay=100, catch_extra_error_codes=CloudRetryUtils.error_codes)
+ def _fail():
+ raise self.TestException(status=random.choice(CloudRetryUtils.error_codes))
+
+        # Run the method 3 times and assert that each time it retries after 2 seconds;
+        # the elapsed execution time should be close to 2 seconds.
+ for u in range(3):
+ start = datetime.now()
+ raised = False
+ try:
+ _fail()
+ except self.TestException:
+ raised = True
+ duration = (datetime.now() - start).seconds
+ assert duration == 2
+ finally:
+ assert raised
+
+ def test_only_base_exception(self):
+ def _fail_index():
+ my_list = list()
+ return my_list[5]
+
+ def _fail_key():
+ my_dict = dict()
+ return my_dict['invalid_key']
+
+ def _fail_exception():
+ raise Exception('bang')
+
+ key_retry_decorator = CloudRetryUtils.KeyRetry.exponential_backoff(retries=2, delay=2, backoff=4, max_delay=100)
+ key_and_index_retry_decorator = CloudRetryUtils.KeyAndIndexRetry.exponential_backoff(retries=2, delay=2, backoff=4, max_delay=100)
+
+ expectations = [
+ [key_retry_decorator, _fail_exception, 0],
+ [key_retry_decorator, _fail_index, 0],
+ [key_retry_decorator, _fail_key, 2],
+ [key_and_index_retry_decorator, _fail_exception, 0],
+ [key_and_index_retry_decorator, _fail_index, 2],
+ [key_and_index_retry_decorator, _fail_key, 2],
+ ]
+
+        for expectation in expectations:
+            decorator = expectation[0]
+            function = expectation[1]
+            duration = expectation[2]
+
+ start = datetime.now()
+ raised = False
+ try:
+ decorator(function)()
+ except Exception:
+ raised = True
+ _duration = (datetime.now() - start).seconds
+ assert duration == _duration
+ finally:
+ assert raised
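The two generator tests above imply a ``BackoffIterator`` that yields an unbounded exponential sequence, capped at ``max_delay``, with optional jitter drawn below the computed value. A self-contained sketch under those assumptions (the class name here is hypothetical, not the module's API):

```python
import random


class SimpleBackoffIterator:
    """Yield sleep times delay, delay*backoff, delay*backoff**2, ...,
    each capped at max_delay; with jitter=True, draw uniformly from
    [0, computed_delay] instead, so values never exceed the deterministic ones."""

    def __init__(self, delay, backoff, max_delay=None, jitter=False):
        self.delay = delay
        self.backoff = backoff
        self.max_delay = max_delay
        self.jitter = jitter

    def __iter__(self):
        current = self.delay
        while True:
            capped = current if self.max_delay is None else min(current, self.max_delay)
            yield random.uniform(0, capped) if self.jitter else capped
            current *= self.backoff
```

With ``delay=3, backoff=2, max_delay=60`` (the values used in the tests) the first seven values are ``3, 6, 12, 24, 48, 60, 60``, matching the assertion pattern where the cap takes over after the fifth iteration.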
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/module_utils/test_ec2.py ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/module_utils/test_ec2.py
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/module_utils/test_ec2.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/module_utils/test_ec2.py 2021-11-12 18:13:53.000000000 +0000
@@ -9,9 +9,6 @@
import unittest
from ansible_collections.amazon.aws.plugins.module_utils.ec2 import ansible_dict_to_boto3_filter_list
-from ansible_collections.amazon.aws.plugins.module_utils.ec2 import ansible_dict_to_boto3_tag_list
-from ansible_collections.amazon.aws.plugins.module_utils.ec2 import boto3_tag_list_to_ansible_dict
-from ansible_collections.amazon.aws.plugins.module_utils.ec2 import compare_aws_tags
from ansible_collections.amazon.aws.plugins.module_utils.ec2 import map_complex_type
@@ -21,24 +18,12 @@
# Setup some initial data that we can use within our tests
# ========================================================
def setUp(self):
-
- self.tag_example_boto3_list = [
- {'Key': 'lowerCamel', 'Value': 'lowerCamelValue'},
- {'Key': 'UpperCamel', 'Value': 'upperCamelValue'},
- {'Key': 'Normal case', 'Value': 'Normal Value'},
- {'Key': 'lower case', 'Value': 'lower case value'}
- ]
-
- self.tag_example_dict = {
- 'lowerCamel': 'lowerCamelValue',
- 'UpperCamel': 'upperCamelValue',
- 'Normal case': 'Normal Value',
- 'lower case': 'lower case value'
- }
+ pass
# ========================================================
# ec2.map_complex_type
# ========================================================
+
def test_map_complex_type_over_dict(self):
complex_type = {'minimum_healthy_percent': "75", 'maximum_percent': "150"}
type_map = {'minimum_healthy_percent': 'int', 'maximum_percent': 'int'}
@@ -91,101 +76,3 @@
converted_filters_int = ansible_dict_to_boto3_filter_list(filters)
self.assertEqual(converted_filters_int, filter_list_integer)
-
- # ========================================================
- # ec2.ansible_dict_to_boto3_tag_list
- # ========================================================
-
- def test_ansible_dict_to_boto3_tag_list(self):
- converted_list = ansible_dict_to_boto3_tag_list(self.tag_example_dict)
- sorted_converted_list = sorted(converted_list, key=lambda i: (i['Key']))
- sorted_list = sorted(self.tag_example_boto3_list, key=lambda i: (i['Key']))
- self.assertEqual(sorted_converted_list, sorted_list)
-
- # ========================================================
- # ec2.boto3_tag_list_to_ansible_dict
- # ========================================================
-
- def test_boto3_tag_list_to_ansible_dict(self):
- converted_dict = boto3_tag_list_to_ansible_dict(self.tag_example_boto3_list)
- self.assertEqual(converted_dict, self.tag_example_dict)
-
- def test_boto3_tag_list_to_ansible_dict_empty(self):
- # AWS returns [] when there are no tags
- self.assertEqual(boto3_tag_list_to_ansible_dict([]), {})
- # Minio returns [{}] when there are no tags
- self.assertEqual(boto3_tag_list_to_ansible_dict([{}]), {})
-
- # ========================================================
- # ec2.compare_aws_tags
- # ========================================================
-
- def test_compare_aws_tags_equal(self):
- new_dict = dict(self.tag_example_dict)
- keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict)
- self.assertEqual({}, keys_to_set)
- self.assertEqual([], keys_to_unset)
- keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict, purge_tags=False)
- self.assertEqual({}, keys_to_set)
- self.assertEqual([], keys_to_unset)
- keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict, purge_tags=True)
- self.assertEqual({}, keys_to_set)
- self.assertEqual([], keys_to_unset)
-
- def test_compare_aws_tags_removed(self):
- new_dict = dict(self.tag_example_dict)
- del new_dict['lowerCamel']
- del new_dict['Normal case']
- keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict)
- self.assertEqual({}, keys_to_set)
- self.assertEqual(set(['lowerCamel', 'Normal case']), set(keys_to_unset))
- keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict, purge_tags=False)
- self.assertEqual({}, keys_to_set)
- self.assertEqual([], keys_to_unset)
- keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict, purge_tags=True)
- self.assertEqual({}, keys_to_set)
- self.assertEqual(set(['lowerCamel', 'Normal case']), set(keys_to_unset))
-
- def test_compare_aws_tags_added(self):
- new_dict = dict(self.tag_example_dict)
- new_keys = {'add_me': 'lower case', 'Me too!': 'Contributing'}
- new_dict.update(new_keys)
- keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict)
- self.assertEqual(new_keys, keys_to_set)
- self.assertEqual([], keys_to_unset)
- keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict, purge_tags=False)
- self.assertEqual(new_keys, keys_to_set)
- self.assertEqual([], keys_to_unset)
- keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict, purge_tags=True)
- self.assertEqual(new_keys, keys_to_set)
- self.assertEqual([], keys_to_unset)
-
- def test_compare_aws_tags_changed(self):
- new_dict = dict(self.tag_example_dict)
- new_keys = {'UpperCamel': 'anotherCamelValue', 'Normal case': 'normal value'}
- new_dict.update(new_keys)
- keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict)
- self.assertEqual(new_keys, keys_to_set)
- self.assertEqual([], keys_to_unset)
- keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict, purge_tags=False)
- self.assertEqual(new_keys, keys_to_set)
- self.assertEqual([], keys_to_unset)
- keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict, purge_tags=True)
- self.assertEqual(new_keys, keys_to_set)
- self.assertEqual([], keys_to_unset)
-
- def test_compare_aws_tags_complex_update(self):
- # Adds 'Me too!', Changes 'UpperCamel' and removes 'Normal case'
- new_dict = dict(self.tag_example_dict)
- new_keys = {'UpperCamel': 'anotherCamelValue', 'Me too!': 'Contributing'}
- new_dict.update(new_keys)
- del new_dict['Normal case']
- keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict)
- self.assertEqual(new_keys, keys_to_set)
- self.assertEqual(['Normal case'], keys_to_unset)
- keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict, purge_tags=False)
- self.assertEqual(new_keys, keys_to_set)
- self.assertEqual([], keys_to_unset)
- keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict, purge_tags=True)
- self.assertEqual(new_keys, keys_to_set)
- self.assertEqual(['Normal case'], keys_to_unset)
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/module_utils/test_s3.py ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/module_utils/test_s3.py
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/module_utils/test_s3.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/module_utils/test_s3.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,40 @@
+#
+# (c) 2021 Red Hat Inc.
+#
+# This file is part of Ansible
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import (absolute_import, division, print_function)
+__metaclass__ = type
+
+from ansible_collections.amazon.aws.tests.unit.compat.mock import MagicMock
+from ansible_collections.amazon.aws.plugins.module_utils import s3
+
+
+def test_validate_bucket_name():
+ module = MagicMock()
+
+ assert s3.validate_bucket_name(module, "docexamplebucket1") is True
+ assert not module.fail_json.called
+ assert s3.validate_bucket_name(module, "log-delivery-march-2020") is True
+ assert not module.fail_json.called
+ assert s3.validate_bucket_name(module, "my-hosted-content") is True
+ assert not module.fail_json.called
+
+ assert s3.validate_bucket_name(module, "docexamplewebsite.com") is True
+ assert not module.fail_json.called
+ assert s3.validate_bucket_name(module, "www.docexamplewebsite.com") is True
+ assert not module.fail_json.called
+ assert s3.validate_bucket_name(module, "my.example.s3.bucket") is True
+ assert not module.fail_json.called
+
+ module.fail_json.reset_mock()
+ s3.validate_bucket_name(module, "doc_example_bucket")
+ assert module.fail_json.called
+
+ module.fail_json.reset_mock()
+ s3.validate_bucket_name(module, "DocExampleBucket")
+ assert module.fail_json.called
+ module.fail_json.reset_mock()
+ s3.validate_bucket_name(module, "doc-example-bucket-")
+ assert module.fail_json.called
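The names accepted and rejected above follow a subset of the AWS S3 bucket naming rules: 3-63 characters, lowercase letters, digits, dots, and hyphens only, beginning and ending with a letter or digit. A minimal validator capturing just those rules might look like this (the function name is illustrative; the real ``validate_bucket_name`` reports failures through ``module.fail_json`` rather than returning ``False``):

```python
import re

# 3-63 chars total: one alphanumeric, 1-61 of [a-z0-9.-], one alphanumeric.
BUCKET_NAME_RE = re.compile(r'^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$')


def is_valid_bucket_name(name):
    """Return True when the name satisfies the basic S3 naming rules above."""
    return bool(BUCKET_NAME_RE.match(name))
```

This rejects underscores, uppercase letters, and trailing hyphens, exactly the three failure cases the test exercises.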
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/module_utils/test_tagging.py ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/module_utils/test_tagging.py
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/module_utils/test_tagging.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/module_utils/test_tagging.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,171 @@
+# (c) 2017 Red Hat Inc.
+#
+# This file is part of Ansible
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import (absolute_import, division, print_function)
+__metaclass__ = type
+
+import unittest
+
+from ansible_collections.amazon.aws.plugins.module_utils.tagging import ansible_dict_to_boto3_tag_list
+from ansible_collections.amazon.aws.plugins.module_utils.tagging import boto3_tag_list_to_ansible_dict
+from ansible_collections.amazon.aws.plugins.module_utils.tagging import boto3_tag_specifications
+from ansible_collections.amazon.aws.plugins.module_utils.tagging import compare_aws_tags
+
+
+class Ec2Utils(unittest.TestCase):
+
+ # ========================================================
+ # Setup some initial data that we can use within our tests
+ # ========================================================
+ def setUp(self):
+
+ self.tag_example_boto3_list = [
+ {'Key': 'lowerCamel', 'Value': 'lowerCamelValue'},
+ {'Key': 'UpperCamel', 'Value': 'upperCamelValue'},
+ {'Key': 'Normal case', 'Value': 'Normal Value'},
+ {'Key': 'lower case', 'Value': 'lower case value'}
+ ]
+
+ self.tag_example_dict = {
+ 'lowerCamel': 'lowerCamelValue',
+ 'UpperCamel': 'upperCamelValue',
+ 'Normal case': 'Normal Value',
+ 'lower case': 'lower case value'
+ }
+
+ self.tag_minimal_boto3_list = [
+ {'Key': 'mykey', 'Value': 'myvalue'},
+ ]
+
+ self.tag_minimal_dict = {'mykey': 'myvalue'}
+
+ # ========================================================
+ # tagging.ansible_dict_to_boto3_tag_list
+ # ========================================================
+
+ def test_ansible_dict_to_boto3_tag_list(self):
+ converted_list = ansible_dict_to_boto3_tag_list(self.tag_example_dict)
+ sorted_converted_list = sorted(converted_list, key=lambda i: (i['Key']))
+ sorted_list = sorted(self.tag_example_boto3_list, key=lambda i: (i['Key']))
+ self.assertEqual(sorted_converted_list, sorted_list)
+
+ # ========================================================
+ # tagging.boto3_tag_list_to_ansible_dict
+ # ========================================================
+
+ def test_boto3_tag_list_to_ansible_dict(self):
+ converted_dict = boto3_tag_list_to_ansible_dict(self.tag_example_boto3_list)
+ self.assertEqual(converted_dict, self.tag_example_dict)
+
+ def test_boto3_tag_list_to_ansible_dict_empty(self):
+ # AWS returns [] when there are no tags
+ self.assertEqual(boto3_tag_list_to_ansible_dict([]), {})
+ # Minio returns [{}] when there are no tags
+ self.assertEqual(boto3_tag_list_to_ansible_dict([{}]), {})
+
+ # ========================================================
+ # tagging.compare_aws_tags
+ # ========================================================
+
+ def test_compare_aws_tags_equal(self):
+ new_dict = dict(self.tag_example_dict)
+ keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict)
+ self.assertEqual({}, keys_to_set)
+ self.assertEqual([], keys_to_unset)
+ keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict, purge_tags=False)
+ self.assertEqual({}, keys_to_set)
+ self.assertEqual([], keys_to_unset)
+ keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict, purge_tags=True)
+ self.assertEqual({}, keys_to_set)
+ self.assertEqual([], keys_to_unset)
+
+ def test_compare_aws_tags_removed(self):
+ new_dict = dict(self.tag_example_dict)
+ del new_dict['lowerCamel']
+ del new_dict['Normal case']
+ keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict)
+ self.assertEqual({}, keys_to_set)
+ self.assertEqual(set(['lowerCamel', 'Normal case']), set(keys_to_unset))
+ keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict, purge_tags=False)
+ self.assertEqual({}, keys_to_set)
+ self.assertEqual([], keys_to_unset)
+ keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict, purge_tags=True)
+ self.assertEqual({}, keys_to_set)
+ self.assertEqual(set(['lowerCamel', 'Normal case']), set(keys_to_unset))
+
+ def test_compare_aws_tags_added(self):
+ new_dict = dict(self.tag_example_dict)
+ new_keys = {'add_me': 'lower case', 'Me too!': 'Contributing'}
+ new_dict.update(new_keys)
+ keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict)
+ self.assertEqual(new_keys, keys_to_set)
+ self.assertEqual([], keys_to_unset)
+ keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict, purge_tags=False)
+ self.assertEqual(new_keys, keys_to_set)
+ self.assertEqual([], keys_to_unset)
+ keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict, purge_tags=True)
+ self.assertEqual(new_keys, keys_to_set)
+ self.assertEqual([], keys_to_unset)
+
+ def test_compare_aws_tags_changed(self):
+ new_dict = dict(self.tag_example_dict)
+ new_keys = {'UpperCamel': 'anotherCamelValue', 'Normal case': 'normal value'}
+ new_dict.update(new_keys)
+ keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict)
+ self.assertEqual(new_keys, keys_to_set)
+ self.assertEqual([], keys_to_unset)
+ keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict, purge_tags=False)
+ self.assertEqual(new_keys, keys_to_set)
+ self.assertEqual([], keys_to_unset)
+ keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict, purge_tags=True)
+ self.assertEqual(new_keys, keys_to_set)
+ self.assertEqual([], keys_to_unset)
+
+ def test_compare_aws_tags_complex_update(self):
+ # Adds 'Me too!', Changes 'UpperCamel' and removes 'Normal case'
+ new_dict = dict(self.tag_example_dict)
+ new_keys = {'UpperCamel': 'anotherCamelValue', 'Me too!': 'Contributing'}
+ new_dict.update(new_keys)
+ del new_dict['Normal case']
+ keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict)
+ self.assertEqual(new_keys, keys_to_set)
+ self.assertEqual(['Normal case'], keys_to_unset)
+ keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict, purge_tags=False)
+ self.assertEqual(new_keys, keys_to_set)
+ self.assertEqual([], keys_to_unset)
+ keys_to_set, keys_to_unset = compare_aws_tags(self.tag_example_dict, new_dict, purge_tags=True)
+ self.assertEqual(new_keys, keys_to_set)
+ self.assertEqual(['Normal case'], keys_to_unset)
+
+ # ========================================================
+ # tagging.boto3_tag_specifications
+ # ========================================================
+
+ # Builds upon ansible_dict_to_boto3_tag_list; assume that if a minimal tag
+ # dictionary behaves as expected, larger ones will too
+ def test_boto3_tag_specifications_no_type(self):
+ tag_specification = boto3_tag_specifications(self.tag_minimal_dict)
+ expected_specification = [{'Tags': self.tag_minimal_boto3_list}]
+ self.assertEqual(tag_specification, expected_specification)
+
+ def test_boto3_tag_specifications_string_type(self):
+ tag_specification = boto3_tag_specifications(self.tag_minimal_dict, 'instance')
+ expected_specification = [{'ResourceType': 'instance', 'Tags': self.tag_minimal_boto3_list}]
+ self.assertEqual(tag_specification, expected_specification)
+
+ def test_boto3_tag_specifications_single_type(self):
+ tag_specification = boto3_tag_specifications(self.tag_minimal_dict, ['instance'])
+ expected_specification = [{'ResourceType': 'instance', 'Tags': self.tag_minimal_boto3_list}]
+ self.assertEqual(tag_specification, expected_specification)
+
+ def test_boto3_tag_specifications_multiple_types(self):
+ tag_specification = boto3_tag_specifications(self.tag_minimal_dict, ['instance', 'volume'])
+ expected_specification = [
+ {'ResourceType': 'instance', 'Tags': self.tag_minimal_boto3_list},
+ {'ResourceType': 'volume', 'Tags': self.tag_minimal_boto3_list},
+ ]
+ sorted_tag_spec = sorted(tag_specification, key=lambda i: (i['ResourceType']))
+ sorted_expected = sorted(expected_specification, key=lambda i: (i['ResourceType']))
+ self.assertEqual(sorted_tag_spec, sorted_expected)
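The `compare_aws_tags` semantics asserted by the tests above can be illustrated standalone. This is a simplified sketch for illustration only, not the `amazon.aws` module_util implementation: it returns a dict of tags to set and a list of tag keys to unset, with `purge_tags` controlling whether tags absent from the desired dict are removed.

```python
def compare_aws_tags(current, desired, purge_tags=True):
    """Return (tags_to_set, tag_keys_to_unset) to move current -> desired.

    Tags whose value differs (or which are new) are set; with purge_tags,
    tags present in current but absent from desired are unset.
    """
    to_set = {k: v for k, v in desired.items() if current.get(k) != v}
    to_unset = [k for k in current if k not in desired] if purge_tags else []
    return to_set, to_unset
```

With `purge_tags=False` nothing is ever unset, which matches the `keys_to_unset == []` branches in the tests.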
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/plugins/inventory/test_aws_ec2.py ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/plugins/inventory/test_aws_ec2.py
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/plugins/inventory/test_aws_ec2.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/plugins/inventory/test_aws_ec2.py 2021-11-12 18:13:53.000000000 +0000
@@ -192,3 +192,31 @@
def test_verify_file_bad_config(inventory):
assert inventory.verify_file('not_aws_config.yml') is False
+
+
+def test_include_filters_with_no_filter(inventory):
+ inventory._options = {
+ 'filters': {},
+ 'include_filters': [],
+ }
+ print(inventory.build_include_filters())
+ assert inventory.build_include_filters() == [{}]
+
+
+def test_include_filters_with_include_filters_only(inventory):
+ inventory._options = {
+ 'filters': {},
+ 'include_filters': [{"foo": "bar"}],
+ }
+ assert inventory.build_include_filters() == [{"foo": "bar"}]
+
+
+def test_include_filters_with_filter_and_include_filters(inventory):
+ inventory._options = {
+ 'filters': {"from_filter": 1},
+ 'include_filters': [{"from_include_filter": "bar"}],
+ }
+ print(inventory.build_include_filters())
+ assert inventory.build_include_filters() == [
+ {"from_filter": 1},
+ {"from_include_filter": "bar"}]
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/plugins/lookup/test_aws_ssm.py ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/plugins/lookup/test_aws_ssm.py
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/plugins/lookup/test_aws_ssm.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/plugins/lookup/test_aws_ssm.py 2021-11-12 18:13:53.000000000 +0000
@@ -34,15 +34,12 @@
pytestmark = pytest.mark.skip("This test requires the boto3 and botocore Python libraries")
simple_variable_success_response = {
- 'Parameters': [
- {
- 'Name': 'simple_variable',
- 'Type': 'String',
- 'Value': 'simplevalue',
- 'Version': 1
- }
- ],
- 'InvalidParameters': [],
+ 'Parameter': {
+ 'Name': 'simple_variable',
+ 'Type': 'String',
+ 'Value': 'simplevalue',
+ 'Version': 1
+ },
'ResponseMetadata': {
'RequestId': '12121212-3434-5656-7878-9a9a9a9a9a9a',
'HTTPStatusCode': 200,
@@ -62,17 +59,21 @@
{'Name': '/testpath/won', 'Type': 'String', 'Value': 'simple_value_won', 'Version': 1}
]
-missing_variable_response = copy(simple_variable_success_response)
-missing_variable_response['Parameters'] = []
-missing_variable_response['InvalidParameters'] = ['missing_variable']
-
-some_missing_variable_response = copy(simple_variable_success_response)
-some_missing_variable_response['Parameters'] = [
- {'Name': 'simple', 'Type': 'String', 'Value': 'simple_value', 'Version': 1},
- {'Name': '/testpath/won', 'Type': 'String', 'Value': 'simple_value_won', 'Version': 1}
-]
-some_missing_variable_response['InvalidParameters'] = ['missing_variable']
+simple_response = copy(simple_variable_success_response)
+simple_response['Parameter'] = {
+ 'Name': 'simple',
+ 'Type': 'String',
+ 'Value': 'simple_value',
+ 'Version': 1
+}
+simple_won_response = copy(simple_variable_success_response)
+simple_won_response['Parameter'] = {
+ 'Name': '/testpath/won',
+ 'Type': 'String',
+ 'Value': 'simple_value_won',
+ 'Version': 1
+}
dummy_credentials = {}
dummy_credentials['boto_profile'] = None
@@ -82,16 +83,41 @@
dummy_credentials['region'] = 'eu-west-1'
+def mock_get_parameter(**kwargs):
+ if kwargs.get('Name') == 'simple':
+ return simple_response
+ elif kwargs.get('Name') == '/testpath/won':
+ return simple_won_response
+ elif kwargs.get('Name') == 'missing_variable':
+ warn_response = {'Error': {'Code': 'ParameterNotFound', 'Message': 'Parameter not found'}}
+ operation_name = 'FakeOperation'
+ raise ClientError(warn_response, operation_name)
+ elif kwargs.get('Name') == 'denied_variable':
+ error_response = {'Error': {'Code': 'AccessDeniedException', 'Message': 'Fake Testing Error'}}
+ operation_name = 'FakeOperation'
+ raise ClientError(error_response, operation_name)
+ elif kwargs.get('Name') == 'notfound_variable':
+ error_response = {'Error': {'Code': 'ResourceNotFoundException', 'Message': 'Fake Testing Error'}}
+ operation_name = 'FakeOperation'
+ raise ClientError(error_response, operation_name)
+ else:
+ warn_response = {'Error': {'Code': 'ParameterNotFound', 'Message': 'Parameter not found'}}
+ operation_name = 'FakeOperation'
+ raise ClientError(warn_response, operation_name)
+
+
def test_lookup_variable(mocker):
lookup = aws_ssm.LookupModule()
lookup._load_name = "aws_ssm"
boto3_double = mocker.MagicMock()
- boto3_double.Session.return_value.client.return_value.get_parameters.return_value = simple_variable_success_response
+ boto3_double.Session.return_value.client.return_value.get_parameter.return_value = simple_variable_success_response
boto3_client_double = boto3_double.Session.return_value.client
mocker.patch.object(boto3, 'session', boto3_double)
retval = lookup.run(["simple_variable"], {}, **dummy_credentials)
+ assert(isinstance(retval, list))
+ assert(len(retval) == 1)
assert(retval[0] == "simplevalue")
boto3_client_double.assert_called_with('ssm', 'eu-west-1', aws_access_key_id='notakey',
aws_secret_access_key="notasecret", aws_session_token=None)
@@ -127,10 +153,11 @@
lookup._load_name = "aws_ssm"
boto3_double = mocker.MagicMock()
- boto3_double.Session.return_value.client.return_value.get_parameters.return_value = missing_variable_response
+ boto3_double.Session.return_value.client.return_value.get_parameter.side_effect = mock_get_parameter
mocker.patch.object(boto3, 'session', boto3_double)
retval = lookup.run(["missing_variable"], {}, **dummy_credentials)
+ assert(isinstance(retval, list))
assert(retval[0] is None)
@@ -143,24 +170,145 @@
lookup._load_name = "aws_ssm"
boto3_double = mocker.MagicMock()
- boto3_double.Session.return_value.client.return_value.get_parameters.return_value = some_missing_variable_response
+
+ boto3_double.Session.return_value.client.return_value.get_parameter.side_effect = mock_get_parameter
mocker.patch.object(boto3, 'session', boto3_double)
retval = lookup.run(["simple", "missing_variable", "/testpath/won", "simple"], {}, **dummy_credentials)
+ assert(isinstance(retval, list))
assert(retval == ["simple_value", None, "simple_value_won", "simple_value"])
-error_response = {'Error': {'Code': 'ResourceNotFoundException', 'Message': 'Fake Testing Error'}}
-operation_name = 'FakeOperation'
+def test_warn_notfound_resource(mocker):
+ lookup = aws_ssm.LookupModule()
+ lookup._load_name = "aws_ssm"
+
+ boto3_double = mocker.MagicMock()
+ boto3_double.Session.return_value.client.return_value.get_parameter.side_effect = mock_get_parameter
+
+ with pytest.raises(AnsibleError):
+ mocker.patch.object(boto3, 'session', boto3_double)
+ lookup.run(["notfound_variable"], {}, **dummy_credentials)
-def test_warn_denied_variable(mocker):
+def test_on_missing_wrong_value(mocker):
lookup = aws_ssm.LookupModule()
lookup._load_name = "aws_ssm"
boto3_double = mocker.MagicMock()
- boto3_double.Session.return_value.client.return_value.get_parameters.side_effect = ClientError(error_response, operation_name)
+ boto3_double.Session.return_value.client.return_value.get_parameter.side_effect = mock_get_parameter
- with pytest.raises(AnsibleError):
+ with pytest.raises(AnsibleError) as exc:
+ missing_credentials = copy(dummy_credentials)
+ missing_credentials['on_missing'] = "fake_value_on_missing"
+ mocker.patch.object(boto3, 'session', boto3_double)
+ lookup.run(["simple"], {}, **missing_credentials)
+
+ assert exc.match('"on_missing" must be a string and one of "error", "warn" or "skip"')
+
+
+def test_error_on_missing_variable(mocker):
+ lookup = aws_ssm.LookupModule()
+ lookup._load_name = "aws_ssm"
+
+ boto3_double = mocker.MagicMock()
+ boto3_double.Session.return_value.client.return_value.get_parameter.side_effect = mock_get_parameter
+
+ with pytest.raises(AnsibleError) as exc:
+ missing_credentials = copy(dummy_credentials)
+ missing_credentials['on_missing'] = "error"
+ mocker.patch.object(boto3, 'session', boto3_double)
+ lookup.run(["missing_variable"], {}, **missing_credentials)
+
+ assert exc.match(r"Failed to find SSM parameter missing_variable \(ResourceNotFound\)")
+
+
+def test_warn_on_missing_variable(mocker):
+ lookup = aws_ssm.LookupModule()
+ lookup._load_name = "aws_ssm"
+
+ boto3_double = mocker.MagicMock()
+ boto3_double.Session.return_value.client.return_value.get_parameter.side_effect = mock_get_parameter
+
+ missing_credentials = copy(dummy_credentials)
+ missing_credentials['on_missing'] = "warn"
+ mocker.patch.object(boto3, 'session', boto3_double)
+ retval = lookup.run(["missing_variable"], {}, **missing_credentials)
+ assert(isinstance(retval, list))
+ assert(retval[0] is None)
+
+
+def test_skip_on_missing_variable(mocker):
+ lookup = aws_ssm.LookupModule()
+ lookup._load_name = "aws_ssm"
+
+ boto3_double = mocker.MagicMock()
+ boto3_double.Session.return_value.client.return_value.get_parameter.side_effect = mock_get_parameter
+
+ missing_credentials = copy(dummy_credentials)
+ missing_credentials['on_missing'] = "skip"
+ mocker.patch.object(boto3, 'session', boto3_double)
+ retval = lookup.run(["missing_variable"], {}, **missing_credentials)
+ assert(isinstance(retval, list))
+ assert(retval[0] is None)
+
+
+def test_on_denied_wrong_value(mocker):
+ lookup = aws_ssm.LookupModule()
+ lookup._load_name = "aws_ssm"
+
+ boto3_double = mocker.MagicMock()
+ boto3_double.Session.return_value.client.return_value.get_parameter.side_effect = mock_get_parameter
+
+ with pytest.raises(AnsibleError) as exc:
+ denied_credentials = copy(dummy_credentials)
+ denied_credentials['on_denied'] = "fake_value_on_denied"
mocker.patch.object(boto3, 'session', boto3_double)
- lookup.run(["denied_variable"], {}, **dummy_credentials)
+ lookup.run(["simple"], {}, **denied_credentials)
+
+ assert exc.match('"on_denied" must be a string and one of "error", "warn" or "skip"')
+
+
+def test_error_on_denied_variable(mocker):
+ lookup = aws_ssm.LookupModule()
+ lookup._load_name = "aws_ssm"
+
+ boto3_double = mocker.MagicMock()
+ boto3_double.Session.return_value.client.return_value.get_parameter.side_effect = mock_get_parameter
+
+ with pytest.raises(AnsibleError) as exc:
+ denied_credentials = copy(dummy_credentials)
+ denied_credentials['on_denied'] = "error"
+ mocker.patch.object(boto3, 'session', boto3_double)
+ lookup.run(["denied_variable"], {}, **denied_credentials)
+ assert exc.match(r"Failed to access SSM parameter denied_variable \(AccessDenied\)")
+
+
+def test_warn_on_denied_variable(mocker):
+ lookup = aws_ssm.LookupModule()
+ lookup._load_name = "aws_ssm"
+
+ boto3_double = mocker.MagicMock()
+ boto3_double.Session.return_value.client.return_value.get_parameter.side_effect = mock_get_parameter
+
+ denied_credentials = copy(dummy_credentials)
+ denied_credentials['on_denied'] = "warn"
+ mocker.patch.object(boto3, 'session', boto3_double)
+ retval = lookup.run(["denied_variable"], {}, **denied_credentials)
+ assert(isinstance(retval, list))
+ assert(retval[0] is None)
+
+
+def test_skip_on_denied_variable(mocker):
+ lookup = aws_ssm.LookupModule()
+ lookup._load_name = "aws_ssm"
+
+ boto3_double = mocker.MagicMock()
+ boto3_double.Session.return_value.client.return_value.get_parameter.side_effect = mock_get_parameter
+
+ denied_credentials = copy(dummy_credentials)
+ denied_credentials['on_denied'] = "skip"
+ mocker.patch.object(boto3, 'session', boto3_double)
+ retval = lookup.run(["denied_variable"], {}, **denied_credentials)
+ assert(isinstance(retval, list))
+ assert(retval[0] is None)
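The `on_missing`/`on_denied` dispatch these lookup tests exercise can be sketched as follows. `FakeClientError` is a stand-in for botocore's `ClientError`, and `lookup_ssm_value` is a hypothetical helper, not the plugin's real code path (which raises `AnsibleError` with richer messages and emits a display warning for `warn`):

```python
class FakeClientError(Exception):
    """Stand-in for botocore.exceptions.ClientError, carrying an error code."""
    def __init__(self, code):
        super().__init__(code)
        self.code = code


def lookup_ssm_value(fetch, name, on_missing='error', on_denied='error'):
    """Fetch one SSM parameter value via fetch(), honouring on_missing/on_denied."""
    for opt, val in (('on_missing', on_missing), ('on_denied', on_denied)):
        if val not in ('error', 'warn', 'skip'):
            raise ValueError('"%s" must be a string and one of "error", "warn" or "skip"' % opt)
    try:
        return fetch(Name=name)['Parameter']['Value']
    except FakeClientError as e:
        if e.code == 'ParameterNotFound' and on_missing != 'error':
            return None  # 'warn' would also log a warning before returning None
        if e.code == 'AccessDeniedException' and on_denied != 'error':
            return None
        raise  # 'error', or an unexpected code such as ResourceNotFoundException
```

Both `warn` and `skip` resolve to `None`, which is why `test_skip_*` and `test_warn_*` share the same assertions.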
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/plugins/modules/test_cloudformation.py ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/plugins/modules/test_cloudformation.py
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/plugins/modules/test_cloudformation.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/plugins/modules/test_cloudformation.py 2021-11-12 18:13:53.000000000 +0000
@@ -12,6 +12,9 @@
# Magic...
from ansible_collections.amazon.aws.tests.unit.utils.amazon_placebo_fixtures import maybe_sleep, placeboify # pylint: disable=unused-import
+import ansible_collections.amazon.aws.plugins.module_utils.core as aws_core
+import ansible_collections.amazon.aws.plugins.module_utils.ec2 as aws_ec2
+
from ansible_collections.amazon.aws.plugins.module_utils.ec2 import boto_exception
from ansible_collections.amazon.aws.plugins.modules import cloudformation as cfn_module
@@ -78,8 +81,15 @@
raise Exception('EXIT')
-def test_invalid_template_json(placeboify):
+def _create_wrapped_client(placeboify):
connection = placeboify.client('cloudformation')
+ retry_decorator = aws_ec2.AWSRetry.jittered_backoff()
+ wrapped_conn = aws_core._RetryingBotoClientWrapper(connection, retry_decorator)
+ return wrapped_conn
+
+
+def test_invalid_template_json(placeboify):
+ connection = _create_wrapped_client(placeboify)
params = {
'StackName': 'ansible-test-wrong-json',
'TemplateBody': bad_json_tpl,
@@ -94,7 +104,7 @@
def test_client_request_token_s3_stack(maybe_sleep, placeboify):
- connection = placeboify.client('cloudformation')
+ connection = _create_wrapped_client(placeboify)
params = {
'StackName': 'ansible-test-client-request-token-yaml',
'TemplateBody': basic_yaml_tpl,
@@ -111,7 +121,7 @@
def test_basic_s3_stack(maybe_sleep, placeboify):
- connection = placeboify.client('cloudformation')
+ connection = _create_wrapped_client(placeboify)
params = {
'StackName': 'ansible-test-basic-yaml',
'TemplateBody': basic_yaml_tpl
@@ -127,15 +137,19 @@
def test_delete_nonexistent_stack(maybe_sleep, placeboify):
- connection = placeboify.client('cloudformation')
- result = cfn_module.stack_operation(connection, 'ansible-test-nonexist', 'DELETE', default_events_limit)
+ connection = _create_wrapped_client(placeboify)
+ # module is only used if an unexpected error is thrown
+ module = None
+ result = cfn_module.stack_operation(module, connection, 'ansible-test-nonexist', 'DELETE', default_events_limit)
assert result['changed']
assert 'Stack does not exist.' in result['log']
def test_get_nonexistent_stack(placeboify):
- connection = placeboify.client('cloudformation')
- assert cfn_module.get_stack_facts(connection, 'ansible-test-nonexist') is None
+ connection = _create_wrapped_client(placeboify)
+ # module is only used if an unexpected error is thrown
+ module = None
+ assert cfn_module.get_stack_facts(module, connection, 'ansible-test-nonexist') is None
def test_missing_template_body():
@@ -159,7 +173,7 @@
on_create_failure='DELETE',
disable_rollback=False,
)
- connection = placeboify.client('cloudformation')
+ connection = _create_wrapped_client(placeboify)
params = {
'StackName': 'ansible-test-on-create-failure-delete',
'TemplateBody': failing_yaml_tpl
@@ -178,7 +192,7 @@
on_create_failure='ROLLBACK',
disable_rollback=False,
)
- connection = placeboify.client('cloudformation')
+ connection = _create_wrapped_client(placeboify)
params = {
'StackName': 'ansible-test-on-create-failure-rollback',
'TemplateBody': failing_yaml_tpl
@@ -198,7 +212,7 @@
on_create_failure='DO_NOTHING',
disable_rollback=False,
)
- connection = placeboify.client('cloudformation')
+ connection = _create_wrapped_client(placeboify)
params = {
'StackName': 'ansible-test-on-create-failure-do-nothing',
'TemplateBody': failing_yaml_tpl
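The `_create_wrapped_client` helper above wraps a boto3 client in `_RetryingBotoClientWrapper` so every API call goes through a retry decorator. The proxy idea can be sketched generically; `with_retries` and `RetryingClientWrapper` here are illustrative names, and the real wrapper (with `AWSRetry.jittered_backoff`) is considerably richer:

```python
import functools


def with_retries(retries=3):
    """A toy retry decorator: retry on ConnectionError up to `retries` times."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(retries):
                try:
                    return fn(*args, **kwargs)
                except ConnectionError:
                    if attempt == retries - 1:
                        raise
        return wrapper
    return decorator


class RetryingClientWrapper:
    """Proxy that decorates every attribute lookup on the wrapped client."""
    def __init__(self, client, retry_decorator):
        self.client = client
        self.retry_decorator = retry_decorator

    def __getattr__(self, name):
        # Only called for attributes not found on the wrapper itself,
        # i.e. the client's API methods.
        return self.retry_decorator(getattr(self.client, name))
```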
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/plugins/modules/test_ec2_vpc_dhcp_option.py ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/plugins/modules/test_ec2_vpc_dhcp_option.py
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/plugins/modules/test_ec2_vpc_dhcp_option.py 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/plugins/modules/test_ec2_vpc_dhcp_option.py 2021-11-12 18:13:53.000000000 +0000
@@ -0,0 +1,71 @@
+# (c) 2021 Red Hat Inc.
+#
+# This file is part of Ansible
+# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
+
+from __future__ import (absolute_import, division, print_function)
+__metaclass__ = type
+
+# Magic... Incorrectly identified by pylint as unused
+from ansible_collections.amazon.aws.tests.unit.utils.amazon_placebo_fixtures import placeboify # pylint: disable=unused-import
+from ansible_collections.amazon.aws.tests.unit.compat.mock import patch
+
+from ansible_collections.amazon.aws.plugins.modules import ec2_vpc_dhcp_option as dhcp_module
+from ansible_collections.amazon.aws.tests.unit.plugins.modules.utils import ModuleTestCase
+
+test_module_params = {'domain_name': 'us-west-2.compute.internal',
+ 'dns_servers': ['AmazonProvidedDNS'],
+ 'ntp_servers': ['10.10.2.3', '10.10.4.5'],
+ 'netbios_name_servers': ['10.20.2.3', '10.20.4.5'],
+ 'netbios_node_type': 2}
+
+test_create_config = [{'Key': 'domain-name', 'Values': [{'Value': 'us-west-2.compute.internal'}]},
+ {'Key': 'domain-name-servers', 'Values': [{'Value': 'AmazonProvidedDNS'}]},
+ {'Key': 'ntp-servers', 'Values': [{'Value': '10.10.2.3'}, {'Value': '10.10.4.5'}]},
+ {'Key': 'netbios-name-servers', 'Values': [{'Value': '10.20.2.3'}, {'Value': '10.20.4.5'}]},
+ {'Key': 'netbios-node-type', 'Values': 2}]
+
+
+test_create_option_set = [{'Key': 'domain-name', 'Values': ['us-west-2.compute.internal']},
+ {'Key': 'domain-name-servers', 'Values': ['AmazonProvidedDNS']},
+ {'Key': 'ntp-servers', 'Values': ['10.10.2.3', '10.10.4.5']},
+ {'Key': 'netbios-name-servers', 'Values': ['10.20.2.3', '10.20.4.5']},
+ {'Key': 'netbios-node-type', 'Values': ['2']}]
+
+test_normalize_config = {'domain-name': ['us-west-2.compute.internal'],
+ 'domain-name-servers': ['AmazonProvidedDNS'],
+ 'ntp-servers': ['10.10.2.3', '10.10.4.5'],
+ 'netbios-name-servers': ['10.20.2.3', '10.20.4.5'],
+ 'netbios-node-type': '2'
+ }
+
+
+class FakeModule(object):
+ def __init__(self, **kwargs):
+ self.params = kwargs
+
+ def fail_json(self, *args, **kwargs):
+ self.exit_args = args
+ self.exit_kwargs = kwargs
+ raise Exception('FAIL')
+
+ def fail_json_aws(self, *args, **kwargs):
+ self.exit_args = args
+ self.exit_kwargs = kwargs
+ raise Exception('FAIL')
+
+ def exit_json(self, *args, **kwargs):
+ self.exit_args = args
+ self.exit_kwargs = kwargs
+ raise Exception('EXIT')
+
+
+@patch.object(dhcp_module.AnsibleAWSModule, 'client')
+class TestDhcpModule(ModuleTestCase):
+
+ def test_normalize_config(self, client_mock):
+ result = dhcp_module.normalize_ec2_vpc_dhcp_config(test_create_config)
+
+ print(result)
+ print(test_normalize_config)
+ assert result == test_normalize_config
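The normalization this test checks converts the EC2 `DhcpConfigurations` shape (a list of `{'Key': ..., 'Values': [{'Value': ...}, ...]}` entries) into the flat dict of `test_normalize_config`. A simplified sketch; the collection's `normalize_ec2_vpc_dhcp_config` handles more edge cases:

```python
def normalize_dhcp_config(option_config):
    """Flatten EC2 DhcpConfigurations into {key: [values]} form.

    netbios-node-type is special-cased to a single string, and in the
    fixture above it may arrive as a bare int rather than a Values list.
    """
    normalized = {}
    for item in option_config:
        if item['Key'] == 'netbios-node-type':
            values = item['Values']
            if isinstance(values, list):
                normalized['netbios-node-type'] = str(values[0]['Value'])
            else:
                normalized['netbios-node-type'] = str(values)
        else:
            normalized[item['Key']] = [v['Value'] for v in item['Values']]
    return normalized
```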
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/requirements.txt ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/requirements.txt
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/unit/requirements.txt 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/unit/requirements.txt 2021-11-12 18:13:53.000000000 +0000
@@ -1,2 +1,6 @@
+# Our code is based on the AWS SDKs
+botocore
boto3
+boto
+
placebo
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/utils/shippable/aws.sh ansible-5.2.0/ansible_collections/amazon/aws/tests/utils/shippable/aws.sh
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/utils/shippable/aws.sh 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/utils/shippable/aws.sh 1970-01-01 00:00:00.000000000 +0000
@@ -1,19 +0,0 @@
-#!/usr/bin/env bash
-
-set -o pipefail -eux
-
-declare -a args
-IFS='/:' read -ra args <<< "$1"
-
-cloud="${args[0]}"
-python="${args[1]}"
-group="${args[2]}"
-
-target="shippable/${cloud}/group${group}/"
-
-stage="${S:-prod}"
-
-# shellcheck disable=SC2086
-ansible-test integration --color -v --retry-on-error "${target}" ${COVERAGE:+"$COVERAGE"} ${CHANGED:+"$CHANGED"} ${UNSTABLE:+"$UNSTABLE"} \
- --remote-terminate always --remote-stage "${stage}" \
- --docker --python "${python}"
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/utils/shippable/check_matrix.py ansible-5.2.0/ansible_collections/amazon/aws/tests/utils/shippable/check_matrix.py
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/utils/shippable/check_matrix.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/utils/shippable/check_matrix.py 1970-01-01 00:00:00.000000000 +0000
@@ -1,120 +0,0 @@
-#!/usr/bin/env python
-"""Verify the currently executing Shippable test matrix matches the one defined in the "shippable.yml" file."""
-from __future__ import (absolute_import, division, print_function)
-__metaclass__ = type
-
-import datetime
-import json
-import os
-import re
-import sys
-import time
-
-try:
- from typing import NoReturn
-except ImportError:
- NoReturn = None
-
-try:
- # noinspection PyCompatibility
- from urllib2 import urlopen # pylint: disable=ansible-bad-import-from
-except ImportError:
- # noinspection PyCompatibility
- from urllib.request import urlopen
-
-
-def main(): # type: () -> None
- """Main entry point."""
- repo_full_name = os.environ['REPO_FULL_NAME']
- required_repo_full_name = 'ansible-collections/amazon.aws'
-
- if repo_full_name != required_repo_full_name:
- sys.stderr.write('Skipping matrix check on repo "%s" which is not "%s".\n' % (repo_full_name, required_repo_full_name))
- return
-
- with open('shippable.yml', 'rb') as yaml_file:
- yaml = yaml_file.read().decode('utf-8').splitlines()
-
- defined_matrix = [match.group(1) for match in [re.search(r'^ *- env: T=(.*)$', line) for line in yaml] if match and match.group(1) != 'none']
-
- if not defined_matrix:
- fail('No matrix entries found in the "shippable.yml" file.',
- 'Did you modify the "shippable.yml" file?')
-
- run_id = os.environ['SHIPPABLE_BUILD_ID']
- sleep = 1
- jobs = []
-
- for attempts_remaining in range(4, -1, -1):
- try:
- jobs = json.loads(urlopen('https://api.shippable.com/jobs?runIds=%s' % run_id).read())
-
- if not isinstance(jobs, list):
- raise Exception('Shippable run %s data is not a list.' % run_id)
-
- break
- except Exception as ex:
- if not attempts_remaining:
- fail('Unable to retrieve Shippable run %s matrix.' % run_id,
- str(ex))
-
- sys.stderr.write('Unable to retrieve Shippable run %s matrix: %s\n' % (run_id, ex))
- sys.stderr.write('Trying again in %d seconds...\n' % sleep)
- time.sleep(sleep)
- sleep *= 2
-
- if len(jobs) != len(defined_matrix):
- if len(jobs) == 1:
- hint = '\n\nMake sure you do not use the "Rebuild with SSH" option.'
- else:
- hint = ''
-
- fail('Shippable run %s has %d jobs instead of the expected %d jobs.' % (run_id, len(jobs), len(defined_matrix)),
- 'Try re-running the entire matrix.%s' % hint)
-
- actual_matrix = dict((job.get('jobNumber'), dict(tuple(line.split('=', 1)) for line in job.get('env', [])).get('T', '')) for job in jobs)
- errors = [(job_number, test, actual_matrix.get(job_number)) for job_number, test in enumerate(defined_matrix, 1) if actual_matrix.get(job_number) != test]
-
- if len(errors):
- error_summary = '\n'.join('Job %s expected "%s" but found "%s" instead.' % (job_number, expected, actual) for job_number, expected, actual in errors)
-
- fail('Shippable run %s has a job matrix mismatch.' % run_id,
- 'Try re-running the entire matrix.\n\n%s' % error_summary)
-
-
-def fail(message, output): # type: (str, str) -> NoReturn
- # Include a leading newline to improve readability on Shippable "Tests" tab.
- # Without this, the first line becomes indented.
- output = '\n' + output.strip()
-
- timestamp = datetime.datetime.utcnow().replace(microsecond=0).isoformat()
-
- # hack to avoid requiring junit-xml, which isn't pre-installed on Shippable outside our test containers
- xml = '''
-
-
-\t
-\t\t
-\t\t\t%s
-\t\t
-\t
-
-''' % (timestamp, message, output)
-
- path = 'shippable/testresults/check-matrix.xml'
- dir_path = os.path.dirname(path)
-
- if not os.path.exists(dir_path):
- os.makedirs(dir_path)
-
- with open(path, 'w') as junit_fd:
- junit_fd.write(xml.lstrip())
-
- sys.stderr.write(message + '\n')
- sys.stderr.write(output + '\n')
-
- sys.exit(1)
-
-
-if __name__ == '__main__':
- main()
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/utils/shippable/sanity.sh ansible-5.2.0/ansible_collections/amazon/aws/tests/utils/shippable/sanity.sh
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/utils/shippable/sanity.sh 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/utils/shippable/sanity.sh 1970-01-01 00:00:00.000000000 +0000
@@ -1,14 +0,0 @@
-#!/usr/bin/env bash
-
-set -o pipefail -eux
-
-if [ "${BASE_BRANCH:-}" ]; then
- base_branch="origin/${BASE_BRANCH}"
-else
- base_branch=""
-fi
-
-# shellcheck disable=SC2086
-ansible-test sanity --color -v --junit ${COVERAGE:+"$COVERAGE"} ${CHANGED:+"$CHANGED"} \
- --docker --base-branch "${base_branch}" \
- --allow-disabled
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/utils/shippable/shippable.sh ansible-5.2.0/ansible_collections/amazon/aws/tests/utils/shippable/shippable.sh
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/utils/shippable/shippable.sh 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/utils/shippable/shippable.sh 1970-01-01 00:00:00.000000000 +0000
@@ -1,172 +0,0 @@
-#!/usr/bin/env bash
-
-set -o pipefail -eux
-
-declare -a args
-IFS='/:' read -ra args <<< "$1"
-
-script="${args[0]}"
-
-test="$1"
-
-docker images ansible/ansible
-docker images quay.io/ansible/*
-docker ps
-
-for container in $(docker ps --format '{{.Image}} {{.ID}}' | grep -v '^drydock/' | sed 's/^.* //'); do
- docker rm -f "${container}" || true # ignore errors
-done
-
-docker ps
-
-if [ -d /home/shippable/cache/ ]; then
- ls -la /home/shippable/cache/
-fi
-
-command -v python
-python -V
-
-command -v pip
-pip --version
-pip list --disable-pip-version-check
-
-export PATH="${PWD}/bin:${PATH}"
-export PYTHONIOENCODING='utf-8'
-
-if [ "${JOB_TRIGGERED_BY_NAME:-}" == "nightly-trigger" ]; then
- COVERAGE=yes
- COMPLETE=yes
-fi
-
-if [ -n "${COVERAGE:-}" ]; then
- # on-demand coverage reporting triggered by setting the COVERAGE environment variable to a non-empty value
- export COVERAGE="--coverage"
-elif [[ "${COMMIT_MESSAGE}" =~ ci_coverage ]]; then
- # on-demand coverage reporting triggered by having 'ci_coverage' in the latest commit message
- export COVERAGE="--coverage"
-else
- # on-demand coverage reporting disabled (default behavior, always-on coverage reporting remains enabled)
- export COVERAGE="--coverage-check"
-fi
-
-if [ -n "${COMPLETE:-}" ]; then
- # disable change detection triggered by setting the COMPLETE environment variable to a non-empty value
- export CHANGED=""
-elif [[ "${COMMIT_MESSAGE}" =~ ci_complete ]]; then
- # disable change detection triggered by having 'ci_complete' in the latest commit message
- export CHANGED=""
-else
- # enable change detection (default behavior)
- export CHANGED="--changed"
-fi
-
-if [ "${IS_PULL_REQUEST:-}" == "true" ]; then
- # run unstable tests which are targeted by focused changes on PRs
- export UNSTABLE="--allow-unstable-changed"
-else
- # do not run unstable tests outside PRs
- export UNSTABLE=""
-fi
-
-virtualenv --python /usr/bin/python3.7 ~/ansible-venv
-set +ux
-. ~/ansible-venv/bin/activate
-set -ux
-
-pip install setuptools==44.1.0
-
-pip install https://github.com/ansible/ansible/archive/"${A_REV:-devel}".tar.gz --disable-pip-version-check
-
-#ansible-galaxy collection install community.general
-mkdir -p "${HOME}/.ansible/collections/ansible_collections/community"
-mkdir -p "${HOME}/.ansible/collections/ansible_collections/google"
-mkdir -p "${HOME}/.ansible/collections/ansible_collections/openstack"
-cwd=$(pwd)
-cd "${HOME}/.ansible/collections/ansible_collections/"
-git clone https://github.com/ansible-collections/community.general community/general
-git clone https://github.com/ansible-collections/community.aws community/aws
-# community.general requires a lot of things we need to manual pull in
-# once community.general is published this will be handled by galaxy cli
-git clone https://github.com/ansible-collections/ansible_collections_google google/cloud
-git clone https://opendev.org/openstack/ansible-collections-openstack openstack/cloud
-ansible-galaxy collection install ansible.netcommon
-cd "${cwd}"
-
-export ANSIBLE_COLLECTIONS_PATHS="${HOME}/.ansible/"
-SHIPPABLE_RESULT_DIR="$(pwd)/shippable"
-TEST_DIR="${HOME}/.ansible/collections/ansible_collections/amazon/aws/"
-mkdir -p "${TEST_DIR}"
-cp -aT "${SHIPPABLE_BUILD_DIR}" "${TEST_DIR}"
-cd "${TEST_DIR}"
-
-function cleanup
-{
- if [ -d tests/output/coverage/ ]; then
- if find tests/output/coverage/ -mindepth 1 -name '.*' -prune -o -print -quit | grep -q .; then
- # for complete on-demand coverage generate a report for all files with no coverage on the "other" job so we only have one copy
- if [ "${COVERAGE}" == "--coverage" ] && [ "${CHANGED}" == "" ] && [ "${test}" == "sanity/1" ]; then
- stub="--stub"
- else
- stub=""
- fi
-
- # shellcheck disable=SC2086
- ansible-test coverage xml --color --requirements --group-by command --group-by version ${stub:+"$stub"}
- cp -a tests/output/reports/coverage=*.xml "$SHIPPABLE_RESULT_DIR/codecoverage/"
-
- # analyze and capture code coverage aggregated by integration test target if not on 2.9, default if unspecified is devel
- if [ -z "${A_REV:-}" ] || [ "${A_REV:-}" != "stable-2.9" ]; then
- ansible-test coverage analyze targets generate -v "$SHIPPABLE_RESULT_DIR/testresults/coverage-analyze-targets.json"
- fi
-
- # upload coverage report to codecov.io only when using complete on-demand coverage
- if [ "${COVERAGE}" == "--coverage" ] && [ "${CHANGED}" == "" ]; then
- for file in tests/output/reports/coverage=*.xml; do
- flags="${file##*/coverage=}"
- flags="${flags%-powershell.xml}"
- flags="${flags%.xml}"
- # remove numbered component from stub files when converting to tags
- flags="${flags//stub-[0-9]*/stub}"
- flags="${flags//=/,}"
- flags="${flags//[^a-zA-Z0-9_,]/_}"
-
- bash <(curl -s https://codecov.io/bash) \
- -f "${file}" \
- -F "${flags}" \
- -n "${test}" \
- -t bc371da7-e5d2-4743-93b5-309f81d457a4 \
- -X coveragepy \
- -X gcov \
- -X fix \
- -X search \
- -X xcode \
- || echo "Failed to upload code coverage report to codecov.io: ${file}"
- done
- fi
- fi
- fi
- if [ -d tests/output/junit/ ]; then
- cp -aT tests/output/junit/ "$SHIPPABLE_RESULT_DIR/testresults/"
- fi
-
- if [ -d tests/output/data/ ]; then
- cp -a tests/output/data/ "$SHIPPABLE_RESULT_DIR/testresults/"
- fi
-
- if [ -d tests/output/bot/ ]; then
- cp -aT tests/output/bot/ "$SHIPPABLE_RESULT_DIR/testresults/"
- fi
-}
-
-trap cleanup EXIT
-
-if [[ "${COVERAGE:-}" == "--coverage" ]]; then
- timeout=60
-else
- timeout=45
-fi
-
-ansible-test env --dump --show --timeout "${timeout}" --color -v
-
-"tests/utils/shippable/check_matrix.py"
-"tests/utils/shippable/${script}.sh" "${test}"
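The removed `cleanup()` function above turns each coverage report's file name into a codecov flag string through a chain of bash parameter expansions. A minimal standalone sketch of that derivation (the `derive_flags` helper name and the sample file name are illustrative, not part of the original script):

```shell
#!/usr/bin/env bash
# Sketch of the codecov flag derivation from the removed cleanup() above.
# derive_flags is an illustrative wrapper around the same expansions.
set -eu

derive_flags() {
    local file="$1"
    local flags="${file##*/coverage=}"   # strip the path and "coverage=" prefix
    flags="${flags%-powershell.xml}"     # drop a "-powershell.xml" suffix if present
    flags="${flags%.xml}"                # otherwise drop the plain ".xml" suffix
    flags="${flags//stub-[0-9]*/stub}"   # collapse numbered stub components
    flags="${flags//=/,}"                # "=" separators become codecov flag commas
    flags="${flags//[^a-zA-Z0-9_,]/_}"   # codecov flags only allow [A-Za-z0-9_]
    echo "$flags"
}

derive_flags "tests/output/reports/coverage=sanity=python-3.8.xml"
```

Running the sketch on the sample name yields `sanity,python_3_8`, which is the shape the uploader receives via `-F`.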
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/utils/shippable/timing.py ansible-5.2.0/ansible_collections/amazon/aws/tests/utils/shippable/timing.py
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/utils/shippable/timing.py 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/utils/shippable/timing.py 1970-01-01 00:00:00.000000000 +0000
@@ -1,16 +0,0 @@
-#!/usr/bin/env python3.7
-from __future__ import (absolute_import, division, print_function)
-__metaclass__ = type
-
-import sys
-import time
-
-start = time.time()
-
-sys.stdin.reconfigure(errors='surrogateescape')
-sys.stdout.reconfigure(errors='surrogateescape')
-
-for line in sys.stdin:
- seconds = time.time() - start
- sys.stdout.write('%02d:%02d %s' % (seconds // 60, seconds % 60, line))
- sys.stdout.flush()
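The removed `timing.py` above prefixes every line a wrapped command prints with the elapsed time in `MM:SS` form. The same formatting can be sketched in shell (the `prefix` helper is illustrative; unlike the original, it appends its own newline since its argument has none):

```shell
#!/usr/bin/env bash
# Sketch of the MM:SS elapsed-time prefix produced by the removed timing.py.
# prefix is an illustrative helper: it formats a seconds count plus a line.
set -eu

prefix() {
    local seconds="$1" line="$2"
    printf '%02d:%02d %s\n' $((seconds / 60)) $((seconds % 60)) "$line"
}

prefix 125 "task started"
```

For 125 elapsed seconds this prints `02:05 task started`.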
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/utils/shippable/timing.sh ansible-5.2.0/ansible_collections/amazon/aws/tests/utils/shippable/timing.sh
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/utils/shippable/timing.sh 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/utils/shippable/timing.sh 1970-01-01 00:00:00.000000000 +0000
@@ -1,5 +0,0 @@
-#!/usr/bin/env bash
-
-set -o pipefail -eu
-
-"$@" 2>&1 | "$(dirname "$0")/timing.py"
diff -Nru ansible-4.10.0/ansible_collections/amazon/aws/tests/utils/shippable/units.sh ansible-5.2.0/ansible_collections/amazon/aws/tests/utils/shippable/units.sh
--- ansible-4.10.0/ansible_collections/amazon/aws/tests/utils/shippable/units.sh 2021-09-16 17:29:04.000000000 +0000
+++ ansible-5.2.0/ansible_collections/amazon/aws/tests/utils/shippable/units.sh 1970-01-01 00:00:00.000000000 +0000
@@ -1,11 +0,0 @@
-#!/usr/bin/env bash
-
-set -o pipefail -eux
-
-declare -a args
-IFS='/:' read -ra args <<< "$1"
-
-version="${args[1]}"
-
-# shellcheck disable=SC2086
-ansible-test units --color -v --docker default --python "${version}" ${COVERAGE:+"$COVERAGE"} ${CHANGED:+"$CHANGED"} \
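The removed `units.sh` above splits its single argument (a test name such as `units/3.8`) on `/` or `:` and takes field 1 as the Python version passed to `ansible-test`. A standalone sketch of that parsing (the `parse_version` helper is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the "$1" parsing from the removed units.sh: IFS-split the test
# name on '/' or ':' and take the second field as the Python version.
# parse_version is an illustrative helper, not part of the original script.
set -eu

parse_version() {
    local -a args
    IFS='/:' read -ra args <<< "$1"
    echo "${args[1]}"
}

parse_version "units/3.8"
```

This prints `3.8` for the sample test name.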
diff -Nru ansible-4.10.0/ansible_collections/ansible/windows/CHANGELOG.rst ansible-5.2.0/ansible_collections/ansible/windows/CHANGELOG.rst
--- ansible-4.10.0/ansible_collections/ansible/windows/CHANGELOG.rst 2021-11-02 23:39:49.000000000 +0000
+++ ansible-5.2.0/ansible_collections/ansible/windows/CHANGELOG.rst 2021-12-20 23:16:24.000000000 +0000
@@ -5,13 +5,33 @@
.. contents:: Topics
-v1.8.0
+v1.9.0
======
Release Summary
---------------
-- Release summary for v1.8.0
+- Release summary for v1.9.0
+
+Minor Changes
+-------------
+
+- win_dsc - deduplicated error writing code with a new function. No actual error text was changed.
+- win_powershell - Added ``$Ansible.Verbosity`` for scripts to adjust code based on the verbosity Ansible is running as
+
+Bugfixes
+--------
+
+- win_command - Use the 24 hour format for the hours of ``start`` and ``end`` - https://github.com/ansible-collections/ansible.windows/issues/303
+- win_copy - improve dest folder size detection to handle broken and recursive symlinks as well as inaccessible folders - https://github.com/ansible-collections/ansible.windows/issues/298
+- win_dsc - Provide better error message when trying to invoke a composite DSC resource
+- win_shell - Use the 24 hour format for the hours of ``start`` and ``end`` - https://github.com/ansible-collections/ansible.windows/issues/303
+- win_updates - Fix return value for ``updates`` and ``filtered_updates`` to match original structure - https://github.com/ansible-collections/ansible.windows/issues/307
+- win_updates - Fixed issue when attempting to run ``task.ps1`` with a host that has a restrictive execution policy set through GPO
+- win_updates - prevent the host from going to sleep if a low sleep timeout is set - https://github.com/ansible-collections/ansible.windows/issues/310
+
+v1.8.0
+======
Minor Changes
-------------
diff -Nru ansible-4.10.0/ansible_collections/ansible/windows/changelogs/changelog.yaml ansible-5.2.0/ansible_collections/ansible/windows/changelogs/changelog.yaml
--- ansible-4.10.0/ansible_collections/ansible/windows/changelogs/changelog.yaml 2021-11-02 23:39:49.000000000 +0000
+++ ansible-5.2.0/ansible_collections/ansible/windows/changelogs/changelog.yaml 2021-12-20 23:16:24.000000000 +0000
@@ -367,8 +367,6 @@
- win_whoami - Fix conflicts with existing ``LIB`` environment variable
minor_changes:
- win_updates - Added the ``skip_optional`` module option to skip optional updates
- release_summary:
- - Release summary for v1.8.0
fragments:
- 283-win_dns_client_IPv6_DNS.yml
- 290-win_user-state-query.yml
@@ -378,3 +376,38 @@
- win_copy-dst-size.yml
- win_updates-skip-optional.yml
release_date: '2021-11-03'
+ 1.9.0:
+ changes:
+ bugfixes:
+ - win_command - Use the 24 hour format for the hours of ``start`` and ``end``
+ - https://github.com/ansible-collections/ansible.windows/issues/303
+ - win_copy - improve dest folder size detection to handle broken and recursive
+ symlinks as well as inaccessible folders - https://github.com/ansible-collections/ansible.windows/issues/298
+ - win_dsc - Provide better error message when trying to invoke a composite DSC
+ resource
+ - win_shell - Use the 24 hour format for the hours of ``start`` and ``end``
+ - https://github.com/ansible-collections/ansible.windows/issues/303
+ - win_updates - Fix return value for ``updates`` and ``filtered_updates`` to
+ match original structure - https://github.com/ansible-collections/ansible.windows/issues/307
+ - win_updates - Fixed issue when attempting to run ``task.ps1`` with a host
+ that has a restrictive execution policy set through GPO
+ - win_updates - prevent the host from going to sleep if a low sleep timeout
+ is set - https://github.com/ansible-collections/ansible.windows/issues/310
+ minor_changes:
+ - win_dsc - deduplicated error writing code with a new function. No actual error
+ text was changed.
+ - win_powershell - Added ``$Ansible.Verbosity`` for scripts to adjust code based
+ on the verbosity Ansible is running as
+ release_summary:
+ - Release summary for v1.9.0
+ fragments:
+ - release-1.9.0.yml
+ - win_command-shell-time.yml
+ - win_copy-dest-calc-fix.yml
+ - win_dsc-composite-resource-error.yml
+ - win_dsc-write-error-standardize.yml
+ - win_powershell.yml
+ - win_updates-execution-policy.yml
+ - win_updates-return.yml
+ - win_updates-sleep.yml
+ release_date: '2021-12-21'
diff -Nru ansible-4.10.0/ansible_collections/ansible/windows/docs/ansible.windows.win_find_module.rst ansible-5.2.0/ansible_collections/ansible/windows/docs/ansible.windows.win_find_module.rst
--- ansible-4.10.0/ansible_collections/ansible/windows/docs/ansible.windows.win_find_module.rst 2021-11-02 23:39:49.000000000 +0000
+++ ansible-5.2.0/ansible_collections/ansible/windows/docs/ansible.windows.win_find_module.rst 2021-12-20 23:16:24.000000000 +0000
@@ -270,6 +270,12 @@
+Notes
+-----
+
+.. note::
+ - When scanning directories with a large number of files containing lots of data it is recommended to set ``get_checksum=false``. This will speed up the time it takes to scan the folders as getting a checksum needs to read the contents of every file it returns.
+
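Following the note above, a task that skips checksums when scanning a large tree might look like this (a hedged sketch; the path and pattern are illustrative):

```yaml
- name: Find log files without computing checksums (faster on large trees)
  ansible.windows.win_find:
    paths: C:\logs
    patterns: '*.log'
    get_checksum: false
```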
Examples
diff -Nru ansible-4.10.0/ansible_collections/ansible/windows/docs/ansible.windows.win_powershell_module.rst ansible-5.2.0/ansible_collections/ansible/windows/docs/ansible.windows.win_powershell_module.rst
--- ansible-4.10.0/ansible_collections/ansible/windows/docs/ansible.windows.win_powershell_module.rst 2021-11-02 23:39:49.000000000 +0000
+++ ansible-5.2.0/ansible_collections/ansible/windows/docs/ansible.windows.win_powershell_module.rst 2021-12-20 23:16:24.000000000 +0000
@@ -206,6 +206,7 @@
- ``$Ansible.Changed`` can be set to ``true`` or ``false`` to reflect whether the module made a change or not. By default this is set to ``true``.
- ``$Ansible.Failed`` can be set to ``true`` if the script wants to return the failure back to the controller.
- ``$Ansible.Tmpdir`` is the path to a temporary directory to use as a scratch location that is cleaned up after the module has finished.
+ - ``$Ansible.Verbosity`` reveals Ansible's verbosity level for this play. Allows the script to set VerbosePreference/DebugPreference based on verbosity. Added in ``1.9.0``.
- Any host/console output like ``Write-Host`` or ``[Console]::WriteLine`` is not considered an output object, they are returned as a string in *host_out* and *host_err*.
 - The module will skip running the script when in check mode unless the script defines ``[CmdletBinding(SupportsShouldProcess)]``.
@@ -310,6 +311,19 @@
$Ansible.Changed = $true
}
+ - name: Define when to enable Verbose/Debug output
+ ansible.windows.win_powershell:
+ script: |
+ if ($Ansible.Verbosity -ge 3) {
+ $VerbosePreference = "Continue"
+ }
+ if ($Ansible.Verbosity -eq 5) {
+ $DebugPreference = "Continue"
+ }
+ Write-Output "Hello World!"
+ Write-Verbose "Hello World!"
+ Write-Debug "Hello World!"
+
Return Values
diff -Nru ansible-4.10.0/ansible_collections/ansible/windows/docs/ansible.windows.win_updates_module.rst ansible-5.2.0/ansible_collections/ansible/windows/docs/ansible.windows.win_updates_module.rst
--- ansible-4.10.0/ansible_collections/ansible/windows/docs/ansible.windows.win_updates_module.rst 2021-11-02 23:39:49.000000000 +0000
+++ ansible-5.2.0/ansible_collections/ansible/windows/docs/ansible.windows.win_updates_module.rst 2021-12-20 23:16:24.000000000 +0000
@@ -278,7 +278,7 @@
- :ref:`ansible.windows.win_updates ` can take a significant amount of time to complete (hours, in some cases). Performance depends on many factors, including OS version, number of updates, system load, and update server load.
- Beware that just after :ref:`ansible.windows.win_updates ` reboots the system, the Windows system may not have settled yet and some base services could be in limbo. This can result in unexpected behavior. Check the examples for ways to mitigate this.
- More information about PowerShell and how it handles RegEx strings can be found at https://technet.microsoft.com/en-us/library/2007.11.powershell.aspx.
- - The current module doesn't support Systems Center Configuration Manager (SCCM). See L(https://github.com/ansible-collections/ansible.windows/issues/194)
+ - The current module doesn't support Systems Center Configuration Manager (SCCM). See https://github.com/ansible-collections/ansible.windows/issues/194
See Also
@@ -388,12 +388,12 @@
filtered_updates
- complex
+ dictionary
success
-
List of updates that were found but were filtered based on blacklist, whitelist or category_names. The return value is in the same form as updates, along with filtered_reason.
+
Updates that were found but were filtered based on blacklist, whitelist or category_names. The return value is in the same form as updates, along with filtered_reason.
A list of users or groups to add/remove on the User Right.
These can be in the form DOMAIN\user-group, user-group@DOMAIN.COM for domain users/groups.
For local users/groups it can be in the form user-group, .\user-group, SERVERNAME\user-group where SERVERNAME is the name of the remote server.
+
It is highly recommended to use the .\ or SERVERNAME\ prefix to avoid any ambiguity with domain account names or errors trying to lookup an account on a domain controller.
You can also add special local accounts like SYSTEM and others.
Can be set to an empty list with action=set to remove all accounts from the right.
Subnet Range to be associated with the peer-group.
+
Subnet Range to be associated with the peer group.
@@ -2527,7 +2528,7 @@
-
Neighbor address or peer-group.
+
Neighbor address or peer group.
@@ -2544,7 +2545,7 @@
-
Name of the peer-group.
+
Name of the peer group.
@@ -3258,7 +3259,7 @@
Choices:
isis
-
ospf3
+
ospfv3
ospf
attached-host
connected
@@ -3730,7 +3731,8 @@
access_group
- dictionary
+ list
+ / elements=dictionary
@@ -3771,7 +3773,7 @@
Choices:
-
ip
+
ipv4
ipv6
@@ -4626,7 +4628,7 @@
-
Subnet Range to be associated with the peer-group.
+
Subnet Range to be associated with the peer group.
@@ -6233,7 +6235,7 @@
-
Name of the peer-group.
+
Name of the peer group.
@@ -6984,7 +6986,7 @@
Choices:
isis
-
ospf3
+
ospfv3
ospf
attached-host
connected
@@ -7061,6 +7063,46 @@
+ imported_route (boolean; choices: no, yes)
+   Export routes imported from the same Afi/Safi.
+
+ route_map (string)
+   Name of a route map.
+
 target
@@ -7073,6 +7115,29 @@
Route Target.
+ type (string; choices: evpn, vpn-ipv4, vpn-ipv6)
+   Type of address family
+
@@ -7494,7 +7559,7 @@
-----
.. note::
- - Tested against Arista EOS 4.23.0F
+ - Tested against Arista EOS 4.24.6F
- This module works with connection ``network_cli``. See the `EOS Platform Options `_.
@@ -7572,7 +7637,7 @@
# distance bgp 50 50 50
# maximum-paths 55
# bgp additional-paths send any
- # neighbor peer1 peer-group
+ # neighbor peer1 peer group
# neighbor peer1 link-bandwidth update-delay 5
# neighbor peer1 fall-over bfd
# neighbor peer1 monitoring
@@ -7713,7 +7778,7 @@
# distance bgp 50 50 50
# maximum-paths 55
# bgp additional-paths send any
- # neighbor peer1 peer-group
+ # neighbor peer1 peer group
# neighbor peer1 link-bandwidth update-delay 5
# neighbor peer1 fall-over bfd
# neighbor peer1 monitoring
@@ -8018,7 +8083,7 @@
# "maximum-paths 55",
# "distance bgp 50",
# "exit",
- # "no neighbor peer1 peer-group",
+ # "no neighbor peer1 peer group",
# "no neighbor peer1 link-bandwidth update-delay 5",
# "no neighbor peer1 fall-over bfd",
# "no neighbor peer1 monitoring",
@@ -8257,7 +8322,7 @@
# distance bgp 50 50 50
# maximum-paths 55
# bgp additional-paths send any
- # neighbor peer1 peer-group
+ # neighbor peer1 peer group
# neighbor peer1 link-bandwidth update-delay 5
# neighbor peer1 fall-over bfd
# neighbor peer1 monitoring
diff -Nru ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_bgp_module.rst ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_bgp_module.rst
--- ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_bgp_module.rst 2021-06-23 14:56:01.000000000 +0000
+++ ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_bgp_module.rst 2021-09-24 18:21:24.000000000 +0000
@@ -16,7 +16,7 @@
DEPRECATED
----------
-:Removed in collection release after 2023-01-29
+:Removed in collection release after 2023-01-01
:Why: Updated module released with more functionality.
:Alternative: eos_bgp_global
@@ -325,7 +325,7 @@
Choices:
-
ospf3
+
ospfv3
ospf
isis
static
@@ -774,7 +774,7 @@
Choices:
ospf
-
ospf3
+
ospfv3
static
connected
rip
@@ -853,7 +853,7 @@
-----
.. note::
- - Tested against Arista vEOS Version 4.15.9M.
+ - Tested against Arista EOS 4.24.6F
@@ -888,7 +888,6 @@
- protocol: isis
route_map: RMAP_1
operation: merge
-
- name: Configure BGP neighbors
arista.eos.eos_bgp:
config:
@@ -901,13 +900,11 @@
timers:
keepalive: 300
holdtime: 360
-
- neighbor: 192.0.2.15
remote_as: 64496
description: IBGP_NBR_2
ebgp_multihop: 150
operation: merge
-
- name: Configure root-level networks for BGP
arista.eos.eos_bgp:
config:
@@ -916,12 +913,10 @@
- prefix: 203.0.113.0
masklen: 27
route_map: RMAP_1
-
- prefix: 203.0.113.32
masklen: 27
route_map: RMAP_2
operation: merge
-
- name: Configure BGP neighbors under address family mode
arista.eos.eos_bgp:
config:
@@ -932,12 +927,10 @@
- neighbor: 203.0.113.10
activate: yes
default_originate: true
-
- neighbor: 192.0.2.15
activate: yes
graceful_restart: true
operation: merge
-
- name: remove bgp as 64496 from config
arista.eos.eos_bgp:
config:
diff -Nru ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_command_module.rst ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_command_module.rst
--- ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_command_module.rst 2021-06-23 14:56:01.000000000 +0000
+++ ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_command_module.rst 2021-09-24 18:21:24.000000000 +0000
@@ -363,7 +363,7 @@
-----
.. note::
- - Tested against EOS 4.15
+ - Tested against Arista EOS 4.24.6F
- For information on using CLI, eAPI and privileged mode see the :ref:`EOS Platform Options guide `
- For more information on using Ansible to manage network devices see the :ref:`Ansible Network Guide `
- For more information on using Ansible to manage Arista EOS devices see the `Arista integration page `_.
diff -Nru ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_config_module.rst ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_config_module.rst
--- ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_config_module.rst 2021-06-23 14:56:01.000000000 +0000
+++ ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_config_module.rst 2021-09-24 18:21:24.000000000 +0000
@@ -577,7 +577,7 @@
-----
.. note::
- - Tested against EOS 4.15
+ - Tested against Arista EOS 4.24.6F
- Abbreviated commands are NOT idempotent, see `Network FAQ <../network/user_guide/faq.html#why-do-the-config-modules-always-return-changed-true-with-abbreviated-commands>`_.
- To ensure idempotency and correct diff the configuration lines in the relevant module options should be similar to how they appear if present in the running configuration on device including the indentation.
- For information on using CLI, eAPI and privileged mode see the :ref:`EOS Platform Options guide `
diff -Nru ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_interface_module.rst ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_interface_module.rst
--- ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_interface_module.rst 2021-06-23 14:56:01.000000000 +0000
+++ ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_interface_module.rst 2021-09-24 18:21:24.000000000 +0000
@@ -714,7 +714,7 @@
-----
.. note::
- - Tested against EOS 4.15
+ - Tested against Arista EOS 4.24.6F
- For information on using CLI, eAPI and privileged mode see the :ref:`EOS Platform Options guide `
- For more information on using Ansible to manage network devices see the :ref:`Ansible Network Guide `
- For more information on using Ansible to manage Arista EOS devices see the `Arista integration page `_.
diff -Nru ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_interfaces_module.rst ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_interfaces_module.rst
--- ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_interfaces_module.rst 2021-06-23 14:56:01.000000000 +0000
+++ ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_interfaces_module.rst 2021-09-24 18:21:24.000000000 +0000
@@ -223,7 +223,7 @@
-----
.. note::
- - Tested against Arista EOS 4.20.10M
+ - Tested against Arista EOS 4.24.6F
- This module works with connection ``network_cli``. See the `EOS Platform Options <../network/user_guide/platform_eos.html>`_.
diff -Nru ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_l2_interface_module.rst ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_l2_interface_module.rst
--- ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_l2_interface_module.rst 2021-06-23 14:56:01.000000000 +0000
+++ ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_l2_interface_module.rst 2021-09-24 18:21:24.000000000 +0000
@@ -507,7 +507,7 @@
-----
.. note::
- - Tested against EOS 4.15
+ - Tested against Arista EOS 4.24.6F
- For information on using CLI, eAPI and privileged mode see the :ref:`EOS Platform Options guide `
- For more information on using Ansible to manage network devices see the :ref:`Ansible Network Guide `
- For more information on using Ansible to manage Arista EOS devices see the `Arista integration page `_.
diff -Nru ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_l2_interfaces_module.rst ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_l2_interfaces_module.rst
--- ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_l2_interfaces_module.rst 2021-06-23 14:56:01.000000000 +0000
+++ ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_l2_interfaces_module.rst 2021-09-24 18:21:24.000000000 +0000
@@ -223,7 +223,7 @@
-----
.. note::
- - Tested against Arista EOS 4.20.10M
+ - Tested against Arista EOS 4.24.6F
- This module works with connection ``network_cli``. See the `EOS Platform Options <../network/user_guide/platform_eos.html>`_.
diff -Nru ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_l3_interface_module.rst ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_l3_interface_module.rst
--- ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_l3_interface_module.rst 2021-06-23 14:56:01.000000000 +0000
+++ ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_l3_interface_module.rst 2021-09-24 18:21:24.000000000 +0000
@@ -434,7 +434,7 @@
-----
.. note::
- - Tested against EOS 4.15
+ - Tested against Arista EOS 4.24.6F
- For information on using CLI, eAPI and privileged mode see the :ref:`EOS Platform Options guide `
- For more information on using Ansible to manage network devices see the :ref:`Ansible Network Guide `
- For more information on using Ansible to manage Arista EOS devices see the `Arista integration page `_.
diff -Nru ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_l3_interfaces_module.rst ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_l3_interfaces_module.rst
--- ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_l3_interfaces_module.rst 2021-06-23 14:56:01.000000000 +0000
+++ ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_l3_interfaces_module.rst 2021-09-24 18:21:24.000000000 +0000
@@ -228,7 +228,7 @@
-----
.. note::
- - Tested against Arista EOS 4.20.10M
+ - Tested against Arista EOS 4.24.6F
- This module works with connection ``network_cli``. See the `EOS Platform Options <../network/user_guide/platform_eos.html>`_. 'eos_l2_interfaces/eos_interfaces' should be used for preparing the interfaces , before applying L3 configurations using this module (eos_l3_interfaces).
diff -Nru ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_lacp_interfaces_module.rst ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_lacp_interfaces_module.rst
--- ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_lacp_interfaces_module.rst 2021-06-23 14:56:01.000000000 +0000
+++ ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_lacp_interfaces_module.rst 2021-09-24 18:21:24.000000000 +0000
@@ -85,7 +85,7 @@
- rate
+ timer
string
@@ -99,6 +99,7 @@
Rate at which PDUs are sent by LACP. At fast rate LACP is transmitted once every 1 second. At normal rate LACP is transmitted every 30 seconds after the link is bundled.
+
aliases: rate
@@ -151,7 +152,7 @@
-----
.. note::
- - Tested against Arista EOS 4.20.10M
+ - Tested against Arista EOS 4.24.6F
- This module works with connection ``network_cli``. See the `EOS Platform Options <../network/user_guide/platform_eos.html>`_.
diff -Nru ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_lacp_module.rst ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_lacp_module.rst
--- ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_lacp_module.rst 2021-06-23 14:56:01.000000000 +0000
+++ ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_lacp_module.rst 2021-09-24 18:21:24.000000000 +0000
@@ -133,7 +133,7 @@
-----
.. note::
- - Tested against Arista EOS 4.20.10M
+ - Tested against Arista EOS 4.24.6F
- This module works with connection ``network_cli``. See the `EOS Platform Options <../network/user_guide/platform_eos.html>`_.
diff -Nru ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_lag_interfaces_module.rst ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_lag_interfaces_module.rst
--- ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_lag_interfaces_module.rst 2021-06-23 14:56:01.000000000 +0000
+++ ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_lag_interfaces_module.rst 2021-09-24 18:21:24.000000000 +0000
@@ -173,7 +173,7 @@
-----
.. note::
- - Tested against Arista EOS 4.20.10M
+ - Tested against Arista EOS 4.24.6F
- This module works with connection ``network_cli``. See the `EOS Platform Options <../network/user_guide/platform_eos.html>`_.
diff -Nru ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_linkagg_module.rst ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_linkagg_module.rst
--- ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_linkagg_module.rst 2021-06-23 14:56:01.000000000 +0000
+++ ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_linkagg_module.rst 2021-09-24 18:21:24.000000000 +0000
@@ -496,7 +496,7 @@
-----
.. note::
- - Tested against EOS 4.15
+ - Tested against Arista EOS 4.24.6F
- For information on using CLI, eAPI and privileged mode see the :ref:`EOS Platform Options guide `
- For more information on using Ansible to manage network devices see the :ref:`Ansible Network Guide `
- For more information on using Ansible to manage Arista EOS devices see the `Arista integration page `_.
diff -Nru ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_lldp_global_module.rst ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_lldp_global_module.rst
--- ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_lldp_global_module.rst 2021-06-23 14:56:01.000000000 +0000
+++ ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_lldp_global_module.rst 2021-09-24 18:21:24.000000000 +0000
@@ -309,7 +309,7 @@
-----
.. note::
- - Tested against Arista EOS 4.20.10M
+ - Tested against Arista EOS 4.24.6F
- This module works with connection ``network_cli``. See the `EOS Platform Options <../network/user_guide/platform_eos.html>`_.
diff -Nru ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_lldp_interfaces_module.rst ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_lldp_interfaces_module.rst
--- ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_lldp_interfaces_module.rst 2021-06-23 14:56:01.000000000 +0000
+++ ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_lldp_interfaces_module.rst 2021-09-24 18:21:24.000000000 +0000
@@ -155,7 +155,7 @@
-----
.. note::
- - Tested against Arista EOS 4.20.10M
+ - Tested against Arista EOS 4.24.6F
- This module works with connection ``network_cli``. See the `EOS Platform Options <../network/user_guide/platform_eos.html>`_.
diff -Nru ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_lldp_module.rst ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_lldp_module.rst
--- ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_lldp_module.rst 2021-06-23 14:56:01.000000000 +0000
+++ ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_lldp_module.rst 2021-09-24 18:21:24.000000000 +0000
@@ -298,7 +298,7 @@
-----
.. note::
- - Tested against EOS 4.15
+ - Tested against Arista EOS 4.24.6F
- For information on using CLI, eAPI and privileged mode see the :ref:`EOS Platform Options guide `
- For more information on using Ansible to manage network devices see the :ref:`Ansible Network Guide `
- For more information on using Ansible to manage Arista EOS devices see the `Arista integration page `_.
diff -Nru ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_logging_global_module.rst ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_logging_global_module.rst
--- ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_logging_global_module.rst 1970-01-01 00:00:00.000000000 +0000
+++ ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_logging_global_module.rst 2021-09-24 18:21:24.000000000 +0000
@@ -0,0 +1,1818 @@
+.. _arista.eos.eos_logging_global_module:
+
+
+*****************************
+arista.eos.eos_logging_global
+*****************************
+
+**Manages logging resource module**
+
+
+Version added: 3.0.0
+
+.. contents::
+ :local:
+ :depth: 1
+
+
+Synopsis
+--------
+- This module configures and manages the attributes of logging on Arista EOS platforms.
+
+
+
+
+Parameters
+----------
+
+The following parameters are supported (choices are listed in parentheses):
+
+- ``config`` (dictionary) - A dictionary of logging options.
+
+  - ``buffered`` (dictionary) - Set buffered logging parameters.
+
+    - ``buffer_size`` (integer) - Logging buffer size.
+    - ``severity`` (string; choices: emergencies, alerts, critical, errors, warnings, notifications, informational, debugging) - Severity level.
+
+  - ``console`` (dictionary) - Set console logging parameters.
+
+    - ``severity`` (string; choices: emergencies, alerts, critical, errors, warnings, notifications, informational, debugging) - Severity level.
+
+  - ``event`` (string; choices: link-status, port-channel, spanning-tree) - Global events.
+  - ``facility`` (string; choices: auth, cron, daemon, kern, local0, local1, local2, local3, local4, local5, local6, local7, lpr, mail, news, sys10, sys11, sys12, sys13, sys14, sys9, syslog, user, uucp) - Set logging facility.
+  - ``format`` (dictionary) - Set logging format parameters.
+
+    - ``hostname`` (string) - Specify hostname logging format.
+    - ``sequence_numbers`` (boolean; choices: no, yes) - No. of log messages.
+    - ``timestamp`` (dictionary) - Set timestamp logging parameters.
+
+      - ``high_resolution`` (boolean; choices: no, yes) - RFC3339 timestamps.
+      - ``traditional`` (dictionary) - Traditional syslog timestamp format as specified in RFC3164.
+
+        - ``state`` (string; choices: enabled, disabled) - When enabled, the traditional timestamp format is set.
+        - ``timezone`` (boolean; choices: no, yes) - Show timezone in traditional format timestamp.
+        - ``year`` (boolean; choices: no, yes) - Show year in traditional format timestamp.
+
+  - ``hosts`` (list of dictionaries) - Set syslog server IP address and parameters.
+
+    - ``add`` (boolean; choices: no, yes) - Configure ports on the given host.
+    - ``name`` (string) - Hostname or IP address of the syslog server.
+    - ``port`` (integer) - Port of the syslog server.
+    - ``protocol`` (string; choices: tcp, udp) - Set syslog server transport protocol.
+    - ``remove`` (boolean; choices: no, yes) - Remove configured ports from the given host.
+
+  - ``level`` (dictionary) - Configure logging severity.
+
+    - ``facility`` (string) - Facility level.
+    - ``severity`` (string; choices: emergencies, alerts, critical, errors, warnings, notifications, informational, debugging) - Severity level.
+
+  - ``monitor`` (string) - Set terminal monitor severity.
+  - ``persistent`` (dictionary) - Save logging messages to the flash disk.
+
+    - ``set`` (boolean; choices: no, yes) - Save logging messages to the flash disk.
+    - ``size`` (integer) - The maximum size (in bytes) of the logging file stored on the flash disk.
+
+  - ``policy`` (dictionary) - Configure logging policies.
+
+    - ``invert_result`` (boolean; choices: no, yes) - Invert the match of match-list.
+    - ``match_list`` (string) - Configure logging message filtering.
+
+  - ``qos`` (integer) - Set DSCP value in IP header.
+  - ``relogging_interval`` (integer) - Configure relogging-interval for critical log messages.
+  - ``repeat_messages`` (boolean; choices: no, yes) - Repeat messages instead of summarizing the number of repeats.
+  - ``source_interface`` (string) - Use the IP address of the interface as the source IP of log messages.
+  - ``synchronous`` (dictionary) - Set synchronizing unsolicited with solicited messages.
+
+    - ``level`` (string) - Configure logging severity.
+    - ``set`` (boolean; choices: no, yes) - Set synchronizing unsolicited with solicited messages.
+
+  - ``trap`` (dictionary) - Severity of messages sent to the syslog server.
+
+    - ``set`` (boolean; choices: no, yes) - Severity of messages sent to the syslog server.
+    - ``severity`` (string; choices: emergencies, alerts, critical, errors, warnings, notifications, informational, debugging) - Severity level.
+
+  - ``turn_on`` (boolean; choices: no, yes) - Turn on logging.
+  - ``vrfs`` (list of dictionaries) - Specify VRF.
+
+    - ``hosts`` (list of dictionaries) - Set syslog server IP address and parameters.
+
+      - ``add`` (boolean; choices: no, yes) - Configure ports on the given host.
+      - ``name`` (string) - Hostname or IP address of the syslog server.
+      - ``port`` (integer) - Port of the syslog server.
+      - ``protocol`` (string; choices: tcp, udp) - Set syslog server transport protocol.
+      - ``remove`` (boolean; choices: no, yes) - Remove configured ports from the given host.
+
+    - ``name`` (string) - VRF name.
+    - ``source_interface`` (string) - Use the IP address of the interface as the source IP of log messages.
+
+- ``running_config`` (string) - This option is used only with state ``parsed``. The value of this option should be the output received from the EOS device by executing the command ``show running-config | section access-list``. The states ``replaced`` and ``overridden`` have identical behaviour for this module. The state ``parsed`` reads the configuration from the ``running_config`` option and transforms it into Ansible structured data as per the resource module's argspec; the value is then returned in the ``parsed`` key within the result.
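+
+As a sketch of how the ``config`` options above map onto a task (the module name comes from this document; the specific values below are illustrative assumptions, not taken from the source):
+
```yaml
# Illustrative sketch only -- values are assumptions, not from the source.
- name: Merge buffered and host logging settings
  arista.eos.eos_logging_global:
    config:
      buffered:
        buffer_size: 50000          # logging buffer size
        severity: informational     # one of the eight severity choices
      hosts:
        - name: 10.0.0.5            # syslog server address
          protocol: udp             # tcp or udp
      turn_on: true                 # turn on logging
    state: merged
```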
+Authentication required only for incoming NTP server responses.
+
+- ``authentication_keys`` (list of dictionaries) - Define a key to use for authentication.
+
+  - ``algorithm`` (string; choices: md5, sha1) - Hash algorithm.
+  - ``encryption`` (integer; choices: 0, 7) - Key type.
+  - ``id`` (integer) - Key identifier.
+  - ``key`` (string) - Unobfuscated key string.
+
+- ``local_interface`` (string) - Configure the interface from which the IP source address is taken.
+- ``qos_dscp`` (integer) - Set DSCP value in IP header.
+- ``serve`` (dictionary) - Configure the switch as an NTP server.
+
+  - ``access_lists`` (list of dictionaries) - Configure access control list.
+
+    - ``acls`` (list of dictionaries) - Access lists to be configured under the afi.
+
+      - ``acl_name`` (string) - Name of the access list.
+      - ``direction`` (string; choices: in, out) - Direction for the packets.
+      - ``vrf`` (string) - VRF in which to apply the access control list.
+
+    - ``afi`` (string) - ip/ipv6 config commands.
+
+  - ``all`` (boolean; choices: no, yes) - Service NTP requests received on any interface.
+
+- ``servers`` (list of dictionaries) - Configure NTP server to synchronize to.
+
+  - ``burst`` (boolean; choices: no, yes) - Send a burst of packets instead of the usual one.
+  - ``iburst`` (boolean; choices: no, yes) - Send bursts of packets until the server is reached.
+  - ``key_id`` (integer) - Set a key to use for authentication.
+  - ``local_interface`` (string) - Configure the interface from which the IP source address is taken.
+  - ``maxpoll`` (integer) - Maximum poll interval.
+  - ``minpoll`` (integer) - Minimum poll interval.
+  - ``prefer`` (boolean; choices: no, yes) - Mark this server as preferred.
+  - ``server`` (string, required) - Hostname or A.B.C.D or A:B:C:D:E:F:G:H.
+  - ``source`` (string) - Configure the interface from which the IP source address is taken.
+  - ``version`` (integer) - NTP version.
+  - ``vrf`` (string) - VRF name.
+
+- ``trusted_key`` (string) - Configure the set of keys that are accepted for incoming messages.
+- ``running_config`` (string) - This option is used only with state ``parsed``. The value of this option should be the output received from the EOS device by executing the command ``show running-config | section ntp``. The state ``parsed`` reads the configuration from the ``running_config`` option and transforms it into Ansible structured data as per the resource module's argspec; the value is then returned in the ``parsed`` key within the result.
+
+
+
+
+
+- ``state`` (string; choices: deleted, merged (default), overridden, replaced, gathered, rendered, parsed) - The state the configuration should be left in.
+
+  The states ``replaced`` and ``overridden`` have identical behaviour for this module.
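+
+A hedged sketch of the ``parsed`` workflow described above, using this document's logging module; the lookup file name is an assumption:
+
```yaml
# Illustrative sketch only -- the file name is an assumption.
- name: Transform saved device output into structured data
  arista.eos.eos_logging_global:
    running_config: "{{ lookup('file', 'parsed_eos_logging.cfg') }}"
    state: parsed
```
+
+No configuration is pushed to the device in this state; the structured data is returned under the ``parsed`` key of the result.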
This command is deprecated by 'timers lsa' or 'timers spf'.
@@ -3564,7 +3886,7 @@
-----
.. note::
- - Tested against Arista EOS 4.23.0F
+ - Tested against Arista EOS 4.24.6F
- This module works with connection ``network_cli``. See the `EOS Platform Options <../network/user_guide/platform_eos.html>`_.
@@ -3603,6 +3925,7 @@
# veos#show running-config | section ospfv3
# router ospfv3 vrf vrfmerge
# router-id 2.2.2.2
+ # test
# fips restrictions
# timers pacing flood 55
# !
@@ -4153,7 +4476,7 @@
# distance ospf intra-area 200
# fips restrictions
# area 0.0.0.1 stub
- # timers throttle spf 56 56 56
+ # timers spf delay initial 56 56 56
# timers out-delay 10
@@ -4292,11 +4615,10 @@
# "router_id": "10.17.0.3",
# "timers": {
# "out_delay": 10,
- # "throttle": {
+ # "spf": {
# "initial": 56,
# "max": 56,
# "min": 56,
- # "spf": true
# }
# }
# }
diff -Nru ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_prefix_lists_module.rst ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_prefix_lists_module.rst
--- ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_prefix_lists_module.rst 2021-06-23 14:56:01.000000000 +0000
+++ ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_prefix_lists_module.rst 2021-09-24 18:21:24.000000000 +0000
@@ -375,7 +375,7 @@
-----
.. note::
- - Tested against Arista EOS 4.20.10M
+ - Tested against Arista EOS 4.24.6F
- This module works with connection ``network_cli``. See the `EOS Platform Options `_.
diff -Nru ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_route_maps_module.rst ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_route_maps_module.rst
--- ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_route_maps_module.rst 2021-06-23 14:56:01.000000000 +0000
+++ ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_route_maps_module.rst 2021-09-24 18:21:24.000000000 +0000
@@ -2785,7 +2785,7 @@
-----
.. note::
- - Tested against Arista EOS 4.23.0F
+ - Tested against Arista EOS 4.24.6F
- This module works with connection ``network_cli``. See the `EOS Platform Options `_.
diff -Nru ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_static_route_module.rst ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_static_route_module.rst
--- ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_static_route_module.rst 2021-06-23 14:56:01.000000000 +0000
+++ ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_static_route_module.rst 2021-09-24 18:21:24.000000000 +0000
@@ -470,7 +470,7 @@
-----
.. note::
- - Tested against EOS 4.15
+ - Tested against Arista EOS 4.24.6F
- For information on using CLI, eAPI and privileged mode see the :ref:`EOS Platform Options guide `
- For more information on using Ansible to manage network devices see the :ref:`Ansible Network Guide `
- For more information on using Ansible to manage Arista EOS devices see the `Arista integration page `_.
diff -Nru ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_static_routes_module.rst ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_static_routes_module.rst
--- ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_static_routes_module.rst 2021-06-23 14:56:01.000000000 +0000
+++ ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_static_routes_module.rst 2021-09-24 18:21:24.000000000 +0000
@@ -397,7 +397,7 @@
-----
.. note::
- - Tested against Arista EOS 4.20.10M
+ - Tested against Arista EOS 4.24.6F
- This module works with connection ``network_cli``. See the `EOS Platform Options <../network/user_guide/platform_eos.html>`_.
diff -Nru ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_system_module.rst ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_system_module.rst
--- ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_system_module.rst 2021-06-23 14:56:01.000000000 +0000
+++ ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_system_module.rst 2021-09-24 18:21:24.000000000 +0000
@@ -375,7 +375,7 @@
-----
.. note::
- - Tested against EOS 4.15
+ - Tested against Arista EOS 4.24.6F
- For information on using CLI, eAPI and privileged mode see the :ref:`EOS Platform Options guide `
- For more information on using Ansible to manage network devices see the :ref:`Ansible Network Guide `
- For more information on using Ansible to manage Arista EOS devices see the `Arista integration page `_.
@@ -448,7 +448,7 @@
The list of configuration mode commands to send to the device
diff -Nru ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_user_module.rst ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_user_module.rst
--- ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_user_module.rst 2021-06-23 14:56:01.000000000 +0000
+++ ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_user_module.rst 2021-09-24 18:21:24.000000000 +0000
@@ -586,7 +586,7 @@
-----
.. note::
- - Tested against EOS 4.15
+ - Tested against Arista EOS 4.24.6F
- For information on using CLI, eAPI and privileged mode see the :ref:`EOS Platform Options guide `
- For more information on using Ansible to manage network devices see the :ref:`Ansible Network Guide `
- For more information on using Ansible to manage Arista EOS devices see the `Arista integration page `_.
diff -Nru ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_vlan_module.rst ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_vlan_module.rst
--- ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_vlan_module.rst 2021-06-23 14:56:01.000000000 +0000
+++ ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_vlan_module.rst 2021-09-24 18:21:24.000000000 +0000
@@ -525,7 +525,7 @@
-----
.. note::
- - Tested against EOS 4.15
+ - Tested against Arista EOS 4.24.6F
- For information on using CLI, eAPI and privileged mode see the :ref:`EOS Platform Options guide `
- For more information on using Ansible to manage network devices see the :ref:`Ansible Network Guide `
- For more information on using Ansible to manage Arista EOS devices see the `Arista integration page `_.
diff -Nru ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_vlans_module.rst ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_vlans_module.rst
--- ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_vlans_module.rst 2021-06-23 14:56:01.000000000 +0000
+++ ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_vlans_module.rst 2021-09-24 18:21:24.000000000 +0000
@@ -152,7 +152,7 @@
-----
.. note::
- - Tested against Arista EOS 4.20.10M
+ - Tested against Arista EOS 4.24.6F
- This module works with connection ``network_cli``. See the `EOS Platform Options <../network/user_guide/platform_eos.html>`_.
diff -Nru ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_vrf_module.rst ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_vrf_module.rst
--- ansible-4.10.0/ansible_collections/arista/eos/docs/arista.eos.eos_vrf_module.rst 2021-06-23 14:56:01.000000000 +0000
+++ ansible-5.2.0/ansible_collections/arista/eos/docs/arista.eos.eos_vrf_module.rst 2021-09-24 18:21:24.000000000 +0000
@@ -46,7 +46,7 @@
-
List of VRFs definitions
+
List of VRFs instances
@@ -514,7 +514,7 @@
-----
.. note::
- - Tested against EOS 4.15
+ - Tested against Arista EOS 4.24.6F
- For information on using CLI, eAPI and privileged mode see the :ref:`EOS Platform Options guide `
- For more information on using Ansible to manage network devices see the :ref:`Ansible Network Guide `
- For more information on using Ansible to manage Arista EOS devices see the `Arista integration page `_.
@@ -586,7 +586,7 @@
The list of configuration mode commands to send to the device