Cisco OpenStack h.3
h.3 is a maintenance release that includes miscellaneous Puppet fixes as well as 2013.2.3 patches from upstream.
Milestone information
- Project:
- Cisco OpenStack
- Series:
- havana
- Version:
- h.3
- Released:
- Registrant:
- Mark T. Voelker
- Release registered:
- Active:
- No. Drivers cannot target bugs and blueprints to this milestone.
Activities
- Assigned to you:
- No blueprints or bugs assigned to you.
- Assignees:
- 4 Chris Ricker, 8 Mark T. Voelker, 3 Pradeep Kilambi
- Blueprints:
- No blueprints are targeted to this milestone.
- Bugs:
- 16 Fix Released
Download files for this release
Release notes
This release contains all upstream stable branch patches for OpenStack core components including Nova, Neutron, Cinder, Glance, Keystone, Horizon, Heat, Ceilometer, and Swift as of 2013.2.3 and selected stable branch fixes thereafter. This release also features Puppet automation updates from StackForge.
Getting the Source and Installing the Release
-------
Information about how to install Cisco OpenStack Edition - Havana can be found here:
http://
Note that the instructions provided will install the latest stable maintenance patches for Havana, which may be newer than those in the h.3 release. If you need to specifically install the versions of the packages released in h.3 even though newer maintenance updates are available, you can modify your apt configuration to point to our h.3 snapshot repository. Execute these steps:
1.) Edit /etc/apt/
deb http://
deb-src http://
to:
deb http://
deb-src http://
2.) Run "apt-get update" as root or via sudo. You may see a harmless warning of the form "Conflicting distribution: http://
3.) To ensure changes are propagated by Puppet, modify data/hiera_
pocket: ''
to:
pocket: '/snapshots/h.3'
Then modify install_
deb http://
deb-src http://
To:
deb http://
deb-src http://
Then proceed with your installation as usual.
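The repository edit in steps 1 and 2 above amounts to splicing the snapshot pocket into your existing apt source line and refreshing the index. The sketch below shows the idea with sed; the mirror hostname and suite layout are placeholders (the actual Cisco repository URLs are elided above), so adapt the pattern to your own sources.list entry:

```shell
# Step 1 (sketch): append the h.3 snapshot pocket to the repository path.
# "openstack-mirror.example.com" and the path layout are assumptions.
line='deb http://openstack-mirror.example.com/cisco havana main'
echo "$line" | sed 's|/cisco |/cisco/snapshots/h.3 |'
# Step 2: refresh the package index so apt sees the pinned snapshot
# (run on the target node as root or via sudo):
# sudo apt-get update
```

The same substitution applies to the matching deb-src line.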
Note that you may also choose to run bleeding-edge (and not fully vetted) code by using "-proposed" in place of "/snapshots/h.3" in the above instructions. This is recommended only for developers who expect things to be broken occasionally and is strongly discouraged for production environments.
The source code for this release has been posted to GitHub. Look for the "h.3" tag in the repositories listed here:
https:/
Limitations:
----------------
We do not support Trove, Marconi, Savanna, Ironic, or other incubated projects in this release, although relevant repositories may be made available for customers to experiment with on their own.
Questions may be addressed to openstack-
Some known limitations and caveats are provided in the installer documentation here:
http://
Customers deploying Ceph should take note of deployment considerations here:
http://
Ceph RBD can be used as a backend for Glance and/or Cinder in this release and is now the default block storage backend for Cinder in Full HA deployments. The module used to deploy Ceph in the Grizzly release of Cisco OpenStack Installer has been replaced with a new module based on ceph-deploy, which dramatically speeds up deployment times and doesn't require multiple catalog runs. However, the ceph-deploy tool uses hostname information (e.g. via DNS) to accomplish its tasks. Customers should therefore ensure that the IP address to which the initial mon node's hostname resolves is also the one configured in user.common.yaml as the initial mon, and that the Ceph networks are on the same subnet as that IP address. Otherwise, ceph-deploy may fail to set up the cluster properly.
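Because ceph-deploy depends on name resolution, a quick pre-flight check can catch mismatches before deployment. The hostname and IP below are placeholders for your initial mon node and the address you set in user.common.yaml:

```shell
# Placeholder values -- substitute your initial mon's hostname and the
# IP address configured in user.common.yaml.
MON_HOST=ceph-mon01
EXPECTED_IP=192.168.100.11
# Resolve the hostname the same way ceph-deploy would (DNS or /etc/hosts).
RESOLVED_IP=$(getent hosts "$MON_HOST" | awk '{print $1}')
if [ "$RESOLVED_IP" = "$EXPECTED_IP" ]; then
  echo "OK: $MON_HOST resolves to $EXPECTED_IP"
else
  echo "MISMATCH: $MON_HOST resolves to '$RESOLVED_IP'; ceph-deploy may fail" >&2
fi
```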
Compressed Active/Active HA Deployments
-------
This release features a "compressed" HA model in which full high availability can be achieved on as few as 3 physical nodes. Swift is currently not included in the 3-node HA model (though this could be changed by the user with some changes to the YAML files in /etc/puppet/data). Glance and Cinder are backed by Ceph.
The traditional 13-node "Full HA" model is also available in this release.
All-in-One Deployment Model
-------
The h.3 release includes an "all in one" deployment model that allows users to combine compute and control functions on a single server. The AIO model in h.3 differs from the model provided in g.3 in that it now also includes support for the Swift Proxy and Storage node roles. Because it allows instances to be launched on control nodes, the all-in-one model can also be used in place of a traditional control node to add compute/storage capacity to a multi-node cloud.
Load Balancing as a Service, Firewall as a Service, and VPN as a Service
-------
This release of the Cisco OpenStack Installer provides support for deploying Neutron's Load Balancer as a Service (backed by HAProxy), Firewall as a Service (backed by IPTables), and VPN as a Service (backed by OpenSwan). All are enabled by default. Customers deploying these services may wish to take note of some known issues with these implementations described here:
http://
SSL Encryption for Keystone API Endpoint
-------
This release features support for configuring Keystone to use SSL, allowing users to provide their own CA-signed certificates and keys. Please note that automatic generation of certificates and keys is not supported in this release due to apparent issues with Keystone's ssl_setup feature. More information can be found in /etc/puppet/
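For reference, pointing Keystone at externally provided certificates is done through the [ssl] section of keystone.conf. The fragment below is a sketch only; the file paths are placeholders for your own CA-signed material, not the exact values the Puppet modules render:

```ini
[ssl]
enable = True
# Paths are placeholders -- supply your own CA-signed certificate and key.
certfile = /etc/keystone/ssl/certs/keystone.pem
keyfile = /etc/keystone/ssl/private/keystonekey.pem
ca_certs = /etc/keystone/ssl/certs/ca.pem
```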
Limited Live Migration Support
-------
This release of the Cisco OpenStack Installer provides limited support for live migration, including live migration on NFS shared storage.
Cisco Plugin Support for Neutron
-------
This release provides users with several options for networking, including the use of the OVS and/or Cisco Nexus plugins. Information on configuring the Cisco Nexus plugin can be found in /etc/puppet/
Change in Database HA Model
-------
The OpenStack community recently discovered that the "select ... for update" used in Neutron, parts of Nova, and possibly other components may produce undefined results manifesting as an apparent database deadlock when used with Galera. Although the community is working to rectify the problem in Juno, in the meantime the configuration for the HAProxy load balancers front-ending Galera writes has been changed to avoid the deadlock issue. In the new model, all writes are performed on one node and the other two database nodes function as sequential backups. This change ensures data is still replicated, but writes are performed on a single node instead of being balanced among three nodes (which may cause a performance drop in large-scale deployments).
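The single-writer arrangement corresponds to marking all but one Galera node as backups in the HAProxy listener, so the other nodes accept writes only if the preceding servers fail health checks. A hypothetical fragment (node names, IPs, and ports are placeholders):

```
# All writes land on galera01; galera02/galera03 are used only if the
# preceding servers fail their health checks ("backup" keyword).
listen galera_cluster
    bind 192.168.100.10:3306
    mode tcp
    option tcpka
    server galera01 192.168.100.21:3306 check
    server galera02 192.168.100.22:3306 check backup
    server galera03 192.168.100.23:3306 check backup
```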
Please refer to the following URL for more information, caveats, and installation instructions:
http://