diff -Nru drbd-doc-8.4~20151102/articles/drbd-dif-dix.txt drbd-doc-8.4~20220106/articles/drbd-dif-dix.txt
--- drbd-doc-8.4~20151102/articles/drbd-dif-dix.txt	2015-11-02 13:15:36.000000000 +0000
+++ drbd-doc-8.4~20220106/articles/drbd-dif-dix.txt	1970-01-01 00:00:00.000000000 +0000
@@ -1,239 +0,0 @@
-= DRBD and Linux Data Integrity Extensions
-Andreas Grünbacher
-
-== Introduction
-
-The data integrity extensions described in
-http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/block/data-integrity.txt[Documentation/block/data-integrity.txt]
-in the kernel attach 8 bytes of extra data to each 512-byte block (or
-either 8 or 8x8 bytes to each 4096-byte block). SCSI disks have long
-supported low-level formatting to sectors bigger than 512 bytes; this
-is mostly used by RAID array controllers for storing internal metadata.
-
-The idea of the data integrity extensions is to standardize how the
-extra bytes per sector are used and to expose them to layers further
-up the stack: to the block layer, the file system layer, or to the
-application itself. The data integrity extensions (DIF, DIX, aka T10
-PI) define how to split up the available space into fields, and how to
-compute those fields (see
-http://oss.oracle.com/~mkp/docs/dix-draft.pdf[I/O Controller Data
-Integrity Extensions]).
-
-== Status
-
-The Linux data integrity extensions are supposed to support data
-integrity on SCSI and SATA block devices. As of 2.6.36-rc4, it seems
-that DIF is supported on (at least) SCSI disks which have been
-formatted with DIF support; it is unclear which SCSI drivers support
-this feature, though. The block layer can be configured to generate
-and/or check the data integrity fields in
-+/sys/block/<device>/integrity/+.
-
-No file systems currently use the data integrity framework, but modern
-file systems like btrfs may optionally start supporting this in the
-near future. (Btrfs already checksums all blocks it writes using
-crc32c, while DIF/DIX uses a different CRC variant.)
-
-Oracle with their proprietary Automatic Storage Management (ASM)
-kernel module uses data integrity in the application (and thus on the
-complete path between application and disk).
-
-A general-purpose user-space interface for data integrity information
-currently does not exist.
-
-== DRBD
-
-DRBD sits at the block layer, below the file system.
-
-DRBD already supports a data integrity feature, but this feature does
-not extend to other layers in the I/O stack: if enabled, a checksum is
-computed on the primary node and verified on the secondary node before
-writing the data to disk. This can be used to detect memory
-corruption, changes to pages which are under I/O, and network
-corruption (that is, it is a second layer of protection on the
-network).
-
-It would make sense for DRBD to support the data integrity framework:
-when enabled, this could replace DRBD's current data integrity feature
-while offering additional features. Depending on the configuration,
-DRBD could compute and/or verify the data integrity information, and
-pass it on to layers further up or down in the I/O stack.
-
-More specifically, DRBD would have to be extended to support
-transporting the extra bytes over the network. It could then use this
-as follows:
-
-* As a substitute for its own data integrity feature. In this configuration,
-
-** Upon write, the primary node would compute the DIF/DIX checksums
-   and the secondary node would verify them. Neither node would
-   pass the checksums on to the lower layer.
-
-** Upon read from the secondary node, the secondary node would compute
-   the DIF/DIX checksums and the primary node would verify them.
-
-* If supported by the lower layer, DIF/DIX checksums could be computed
-  by DRBD on the primary node upon write, passed on to the lower layer
-  on both sides, and verified when returned by the lower layer (upon
-  read). Computing and verifying checksums should probably be
-  configurable separately.
-
-* If the upper layer supports DIF/DIX checksums, DRBD could verify the
-  provided checksums on the primary and secondary node upon write. If
-  supported by the lower layer, the checksums could be passed on.
-  Otherwise, they would have to be discarded and recomputed upon read
-  in this configuration.
-
-* If the lower and upper layers support DIF/DIX checksums, DRBD's role
-  would be reduced to verifying checksums when configured to do so.
-
-Independent of DIF/DIX support, DRBD's strategy for dealing with data
-inconsistencies could be improved:
-
-* First, if data integrity information is available in writes, either
-  through DRBD's current data integrity feature or through DIF/DIX,
-  and an inconsistency is detected on the secondary node, DRBD could
-  signal to the primary node to retry only this particular I/O (a
-  limited number of times). DRBD's current strategy is to mark the
-  entire device as inconsistent and trigger a resync.
-
-* Second, if data integrity information is available in reads and an
-  inconsistency is detected, DRBD could degrade more gracefully and
-  retry the read locally and/or remotely. DRBD's current strategy is
-  to treat inconsistencies like I/O errors; upon an I/O error, it
-  marks the entire device as inconsistent and stops using it (it
-  "disconnects" it).
-
-== Various Notes ==
-
-=== Conversation with Martin K. Petersen, November 2010 ===
-
------------------------------------------------------------
- can I ask some stupid questions even before reading all the docs?
- go ahead
- how widespread is support for this from the OS down by now? does oracle use it from
- the app down already?
- the Oracle DB with ASM supports it
- we announced GA wrt. the software stack in September
- drives have been shipping for a couple of years, host adapters are shipping with
- various levels of support implemented
- how about iscsi? how about linux software raid?
- iSCSI DIF support is supposed to show up in nab's target stack shortly
- DM and MD support passing protected I/Os down to the hw drivers
- hmm, very nice!
- how about 4k sectors, have those been standardized yet?
- I guess for drbd you'd need to add support to the daemon
- it would have to store the PI somewhere
- and the client would have to send it, of course, with whatever that entails in terms of
- the wire protocol
- 4k + 8 works fine
- 4k + 8 * 8 has been sorted out in T10 and is part of DIX 1.1 which we're wrapping up soon
- yes, daemon and protocol ... we have a different data integrity feature right now, and
- I'm trying to figure out how to merge that together ... won't happen overnight, but it
- seems to make a lot of sense to me
- we're also working on several non-SCSI type technologies that use the same format
- so one set of PI can be prepared regardless of what the target device is and how it's
- connected
- the PI format is kind of stupid but it's good enough that it made sense to standardize on
- it
- okay, so how standardized is it?
- well, even the non-SCSI devices have decided to implement support for T10 Type 1, 2 and 3
- is it always the exact same type of CRC for example?
- yep
- so the PI that gets prepared is the same regardless of target type and protocol
- also makes it possible to mirror between say SCSI and drbd
- is the APP tag entirely unused by anything below the application?
- the app tag is owned by the owner of the block device
- in the Oracle case we actually use it
- we also have some impending changes in T10 that will allow the storage device to check it
- hmm, okay ... so this would commonly be the file system, or the app if the app doesn't
- use a file system, right?
- none of the Linux filesystems use it yet
- and the filesystem can decide to use it for internal purposes or it can let the
- applications supply it
- okay ... how would the storage device know what to expect in the APP tag?
- that's what we're working on in T10 right now
- the current proposal involves a mode page where you can set
- and the storage device will reject writes to a given block range if the app tag is
- incorrect
- so lba refers to the REF tag then?
- lba refers to the actual target lba
- this is not supported for Type 2 devices because the expected app tag is included in the
- CDB
- I'll need to read a bit more to really understand this
- for Type 1 the ref tag is the LBA
- for Type 2 the ref tag and the app tag are supplied in the SCSI command
- for Type 3 the ref tag and the app tag are opaque storage
- okay, thanks a lot ... I will try to make sense of what you've told me and see how we
- can make drbd fit in.
------------------------------------------------------------
-
------------------------------------------------------------
- one more thing though:
- one of the problems we are seeing is blocks that are being modified while they are
- under io: this can happen with mmap, when directly writing to a device, and even file
- systems do it sometimes.
- yeah, my headache #1
- okay, so we have the same headache then :(
- at the storage summit the VM folks chastised the FS folks
- and told them to stop it
- it was agreed that the FS folks would stop this practice
- that's not the entire story though ...
- well, for direct I/O it's up to the application
- for mmap we can unmap while the page is being submitted. That works fine
- yes, and we'll have to somehow live with nasty/buggy applications even once all the
- file systems have become nice citizens
- yeah, but thankfully there are not that many applications that would be affected
- yeah, probably not
- so for mmap an app would dirty the page, the page would end up in writeout, would be
- unmapped, and if the app again writes it, the page fault handler would block until the
- io (in this case, the write) has completed?
- correct
- basically we'd unmap when the writeback bit is set
- the VM already does this for most I/O
- the problem is that extN in particular use buffer heads and thus completely ignore the
- page writeback bit
- Ted converted ext4 to bios a couple of weeks ago
- okay, my naive approach would have been to try setting the page ro and do a copy on
- write, but there are probably reasons against that
- getting rid of buffer heads will make everything much easier
- we've talked about cow but the VM folks said that they unmap anyway
- I've got a couple of concalls coming up now. But feel free to send mail
- the difference is that apps might end up blocked much more without cow, no?
- yup
- I'll continue reading now. thanks for all the info, I'm sure we'll continue this
- conversation sooner or later.
------------------------------------------------------------
-
-=== How To Experiment Without DIF Hardware ===
-
------------------------------------------------------------
-modprobe scsi_debug protection=1 guard=1 ato=1 dev_size_mb=1024
------------------------------------------------------------
-
-=== RHEL6 Known Issues ===
-
-When using the DIF/DIX hardware checksum features of a storage path
-behind a block device, errors will occur if the block device is used
-as a general-purpose block device.
-
-Buffered I/O or mmap(2)-based I/O will not work reliably, as there are
-no interlocks in the buffered write path to prevent overwriting cached
-data while the hardware is performing DMA operations. An overwrite
-during a DMA operation causes a torn write, and the write will fail
-the checksums in the hardware storage path. This problem is common to
-all block-device or file-system based buffered or mmap(2) I/O, so the
-problem of I/O errors during overwrites cannot be worked around.
-
-DIF/DIX-enabled block devices should only be used with applications
-that use +O_DIRECT+ I/O. Applications should use the raw block device,
-though it should be safe to use the XFS file system on a DIF/DIX-enabled
-block device if only +O_DIRECT+ I/O is issued through the file
-system. In both cases the responsibility for preventing torn writes
-lies with the application, so only applications designed for use with
-+O_DIRECT+ I/O and DIF/DIX hardware should enable this feature.
-
-== References
- * http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/block/data-integrity.txt[Documentation/block/data-integrity.txt]
- * http://oss.oracle.com/~mkp/[Martin K. Petersen's homepage at Oracle] (and http://oss.oracle.com/projects/data-integrity/[Oracle's landing page])
- * Martin K. Petersen: http://oss.oracle.com/~mkp/docs/dix-draft.pdf[I/O Controller Data Integrity Extensions]
diff -Nru drbd-doc-8.4~20151102/AUTHORS drbd-doc-8.4~20220106/AUTHORS
--- drbd-doc-8.4~20151102/AUTHORS	2015-11-02 13:15:36.000000000 +0000
+++ drbd-doc-8.4~20220106/AUTHORS	1970-01-01 00:00:00.000000000 +0000
@@ -1,4 +0,0 @@
-DRBD Documentation Credits
-
-* DRBD User's Guide: written by Florian Haas, based on earlier work by
-  Philipp Reisner and Lars Ellenberg
diff -Nru drbd-doc-8.4~20151102/BUGS drbd-doc-8.4~20220106/BUGS
--- drbd-doc-8.4~20151102/BUGS	2015-11-02 13:15:36.000000000 +0000
+++ drbd-doc-8.4~20220106/BUGS	1970-01-01 00:00:00.000000000 +0000
@@ -1,6 +0,0 @@
-KNOWN BUGS IN THE DOCUMENTATION SYSTEM
-
-* The mathmlsvg utility appears to be broken (it segfaults) in Ubuntu
-  hardy, thus MathML to SVG conversion does not work on this
-  platform. Do not expect to find any rendered mathematical formulae
-  in the generated HTML when working on hardy. This is fixed in intrepid.
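The removed drbd-dif-dix.txt article above keeps the protection-information format abstract. A minimal sketch follows, illustrative only and not DRBD or kernel code: it builds the 8-byte tuple for one 512-byte sector. The guard tag is CRC-16 with the T10-DIF polynomial 0x8BB7 (the "different CRC variant" the article contrasts with btrfs's crc32c), and for a Type 1 device the reference tag is the low 32 bits of the target LBA, as in the Petersen transcript. The check value 0xD0DB is the published CRC-16/T10-DIF test vector; the big-endian packing is an assumption for illustration.

------------------------------------------------------------
# Illustrative sketch (Python), not DRBD or kernel code.
import struct

def crc16_t10dif(data: bytes, crc: int = 0) -> int:
    # Bitwise CRC-16 with the T10-DIF polynomial 0x8BB7,
    # init 0, no reflection.
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x8BB7) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def dif_tuple(sector: bytes, lba: int, app_tag: int = 0) -> bytes:
    # 2-byte guard tag, 2-byte application tag, 4-byte reference tag.
    assert len(sector) == 512
    ref_tag = lba & 0xFFFFFFFF      # Type 1: low 32 bits of the LBA
    return struct.pack(">HHI", crc16_t10dif(sector), app_tag, ref_tag)

assert crc16_t10dif(b"123456789") == 0xD0DB   # published check value
------------------------------------------------------------

Because the same tuple is prepared regardless of target type and transport (as the transcript notes), a sketch like this applies equally to a SCSI disk or to a DRBD peer mirroring to one.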
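The first bullet in the article's list of strategy improvements (retry a single failed write instead of resyncing the whole device) can be sketched as a receive-path loop. This is a thought experiment against hypothetical callbacks, not DRBD's actual protocol handling:

------------------------------------------------------------
# Hypothetical receiver-side handling; verify, resend and
# mark_device_inconsistent are stand-ins for real protocol actions.
MAX_RETRIES = 3

def handle_write(request, verify, resend, mark_device_inconsistent):
    for _ in range(1 + MAX_RETRIES):
        if verify(request):           # e.g. recompute the guard tag
            return request            # checksum OK: pass to lower layer
        request = resend(request)     # retry only this particular I/O
    mark_device_inconsistent()        # fallback: today's full resync
    return None
------------------------------------------------------------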
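With the scsi_debug module loaded as shown in "How To Experiment Without DIF Hardware", the block-layer knobs the Status section mentions can be read back from sysfs. A small sketch; the device name sdb is a placeholder for whatever node scsi_debug registered, and the attribute names (format, tag_size, read_verify, write_generate) are the ones listed in Documentation/block/data-integrity.txt:

------------------------------------------------------------
# Sketch: dump /sys/block/<device>/integrity/ for one block device.
from pathlib import Path

def integrity_attrs(dev: str) -> dict:
    base = Path("/sys/block", dev, "integrity")
    return {f.name: f.read_text().strip()
            for f in base.iterdir() if f.is_file()}

print(integrity_attrs("sdb"))  # e.g. {'format': 'T10-DIF-TYPE1-CRC', ...}
------------------------------------------------------------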
diff -Nru drbd-doc-8.4~20151102/cheatsheets/Makefile.am drbd-doc-8.4~20220106/cheatsheets/Makefile.am
--- drbd-doc-8.4~20151102/cheatsheets/Makefile.am	2015-11-02 13:15:36.000000000 +0000
+++ drbd-doc-8.4~20220106/cheatsheets/Makefile.am	1970-01-01 00:00:00.000000000 +0000
@@ -1,15 +0,0 @@
-# Some useful wildcard expansions
-TXT_FILES ?= $(wildcard *.txt)
-MML_FILES ?= $(wildcard *.mml)
-SVG_FILES ?= $(wildcard *.svg)
-
-all: html pdf
-
-html: $(TXT_FILES) $(MML_FILES:.mml=.svg) $(SVG_FILES:.svg=.png)
-
-pdf: $(TXT_FILES) $(MML_FILES:.mml=.svg)
-
-force: ;
-
-%: force
-	@$(MAKE) -f $(top_srcdir)/Makefile $@
diff -Nru drbd-doc-8.4~20151102/debian/changelog drbd-doc-8.4~20220106/debian/changelog
--- drbd-doc-8.4~20151102/debian/changelog	2020-06-26 22:21:19.000000000 +0000
+++ drbd-doc-8.4~20220106/debian/changelog	2022-01-31 11:09:06.000000000 +0000
@@ -1,9 +1,25 @@
-drbd-doc (8.4~20151102-1.1) unstable; urgency=medium
+drbd-doc (8.4~20220106-1) unstable; urgency=medium
 
-  * Non-maintainer upload.
-  * Fix FTBFS. (Closes: #959655)
+  * Switch Vcs-* URLs to salsa.d.o
+  * Switch to new upstream documentation from
+    https://github.com/LINBIT/linbit-documentation
+    - Remove makedoc
+    - Drop all patches
+    - Remove obsolete d/README.packaging
+    + Adjust build-dependencies.
+      The new documentation system requires only asciidoctor(-pdf) and
+      inkscape. (Closes: #993661)
+    + d/rules: adjust to the new build system
+    + Adjust Source and Upstream-Name in d/copyright
+    + dh_install: pick the right artifacts for installation
+  * Bump DH compat to 13; no changes needed
+  * d/copyright: adjust years, drop makedoc reference
+  * Bump Standards-Version to 4.6.0.
+    + Replace Priority: extra with optional
+  * d/copyright: remove tabs
+  * Strip remote fonts from the HTML stylesheet
 
- -- Sudip Mukherjee  Fri, 26 Jun 2020 23:21:19 +0100
+ -- Apollon Oikonomopoulos  Mon, 31 Jan 2022 13:09:06 +0200
 
 drbd-doc (8.4~20151102-1) unstable; urgency=medium
diff -Nru drbd-doc-8.4~20151102/debian/compat drbd-doc-8.4~20220106/debian/compat
--- drbd-doc-8.4~20151102/debian/compat	2020-06-26 21:53:39.000000000 +0000
+++ drbd-doc-8.4~20220106/debian/compat	1970-01-01 00:00:00.000000000 +0000
@@ -1 +0,0 @@
-9
diff -Nru drbd-doc-8.4~20151102/debian/control drbd-doc-8.4~20220106/debian/control
--- drbd-doc-8.4~20151102/debian/control	2020-06-26 21:53:39.000000000 +0000
+++ drbd-doc-8.4~20220106/debian/control	2022-01-31 11:07:02.000000000 +0000
@@ -1,19 +1,18 @@
 Source: drbd-doc
 Section: doc
-Priority: extra
+Priority: optional
 Maintainer: Debian DRBD Maintainers
 Uploaders: Apollon Oikonomopoulos
-Build-Depends: debhelper (>= 9), automake, asciidoc, fop,
- libgtkmathview-bin, inkscape, xsltproc, docbook-xsl,
- dh-autoreconf, docbook-xml, default-jre
-Standards-Version: 3.9.7
+Build-Depends: debhelper-compat (= 13), dh-exec,
+ asciidoctor, ruby-asciidoctor-pdf, inkscape, zip
+Standards-Version: 4.6.0
 Homepage: http://www.drbd.org
-Vcs-Git: https://anonscm.debian.org/git/debian-ha/drbd-doc.git
-Vcs-Browser: https://anonscm.debian.org/gitweb/?p=debian-ha/drbd-doc.git
+Vcs-Git: https://salsa.debian.org/ha-team/drbd-doc.git
+Vcs-Browser: https://salsa.debian.org/ha-team/drbd-doc
 
 Package: drbd-doc
 Architecture: all
-Depends: ${misc:Depends}
+Depends: ${misc:Depends}, fonts-font-awesome
 Description: RAID 1 over TCP/IP for Linux (user documentation)
  Drbd is a block device which is designed to build high availability
  clusters by providing a virtual shared device which keeps disks in
diff -Nru drbd-doc-8.4~20151102/debian/copyright drbd-doc-8.4~20220106/debian/copyright
--- drbd-doc-8.4~20151102/debian/copyright	2020-06-26 21:53:39.000000000 +0000
+++ drbd-doc-8.4~20220106/debian/copyright	2022-01-31 11:09:06.000000000 +0000
@@ -1,30 +1,13 @@
 Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
-Upstream-Name: drbd-documentation
-Source:
- The main documentation source comes from LINBIT's git repository:
- http://git.linbit.com/gitweb.cgi?p=drbd-documentation.git;a=summary
- .
- The documentation's build system, makedoc, is also developed by LINBIT and was
- downloaded from:
- https://github.com/fghaas/makedoc
- .
-Comment:
- Since makedoc is exclusively used by the DRBD documentation, it has been
- embedded in the drbd-doc source (using an additional tarball), rather than
- shipped as a separate package.
+Upstream-Name: linbit-documentation
+Source: https://github.com/LINBIT/linbit-documentation
 
 Files: *
-Copyright: Copyright © 2008-2009 LINBIT Information Technologies GmbH
- Copyright © 2009-2012 LINBIT HA Solutions GmbH
+Copyright: Copyright © LINBIT HA Solutions GmbH
 License: CC-BY-SA-3.0
 
-Files: makedoc/*
-Copyright: Copyright © 2008-2009 LINBIT Information Technologies GmbH
- Copyright © 2009-2012 LINBIT HA Solutions GmbH
-License: GPL-2+
-
 Files: debian/*
-Copyright: 2014-2016 Apollon Oikonomopoulos
+Copyright: 2014-2022 Apollon Oikonomopoulos
 License: GPL-2+
 
 License: GPL-2+
@@ -173,21 +156,21 @@
  to Distribute and Publicly Perform Adaptations.
  .
  For the avoidance of doubt:
-	Non-waivable Compulsory License Schemes. In those jurisdictions in
-	which the right to collect royalties through any statutory or
-	compulsory licensing scheme cannot be waived, the Licensor reserves
-	the exclusive right to collect such royalties for any exercise by
-	You of the rights granted under this License;
-	Waivable Compulsory License Schemes. In those jurisdictions in which
-	the right to collect royalties through any statutory or compulsory
-	licensing scheme can be waived, the Licensor waives the exclusive
-	right to collect such royalties for any exercise by You of the
-	rights granted under this License; and,
-	Voluntary License Schemes. The Licensor waives the right to collect
-	royalties, whether individually or, in the event that the Licensor
-	is a member of a collecting society that administers voluntary
-	licensing schemes, via that society, from any exercise by You of the
-	rights granted under this License.
+    Non-waivable Compulsory License Schemes. In those jurisdictions in
+    which the right to collect royalties through any statutory or
+    compulsory licensing scheme cannot be waived, the Licensor reserves
+    the exclusive right to collect such royalties for any exercise by
+    You of the rights granted under this License;
+    Waivable Compulsory License Schemes. In those jurisdictions in which
+    the right to collect royalties through any statutory or compulsory
+    licensing scheme can be waived, the Licensor waives the exclusive
+    right to collect such royalties for any exercise by You of the
+    rights granted under this License; and,
+    Voluntary License Schemes. The Licensor waives the right to collect
+    royalties, whether individually or, in the event that the Licensor
+    is a member of a collecting society that administers voluntary
+    licensing schemes, via that society, from any exercise by You of the
+    rights granted under this License.
 .
 The above rights may be exercised in all media and formats whether now
 known or hereafter devised.
 The above rights include the right to make such
diff -Nru drbd-doc-8.4~20151102/debian/drbd-doc.install drbd-doc-8.4~20220106/debian/drbd-doc.install
--- drbd-doc-8.4~20151102/debian/drbd-doc.install	2020-06-26 21:53:39.000000000 +0000
+++ drbd-doc-8.4~20220106/debian/drbd-doc.install	2022-01-31 11:07:02.000000000 +0000
@@ -1,5 +1,5 @@
-users-guide/*.html usr/share/doc/drbd-doc/users-guide
-users-guide/*.png usr/share/doc/drbd-doc/users-guide
-default.css usr/share/doc/drbd-doc/users-guide
-images/*.png usr/share/doc/drbd-doc/users-guide/images
-users-guide/drbd-users-guide.pdf usr/share/doc/drbd-doc
+#!/usr/bin/dh-exec
+UG8.4/en/output-html/drbd-users-guide-without-css.html => usr/share/doc/drbd-doc/users-guide/drbd-users-guide.html
+UG8.4/en/output-html/*.css usr/share/doc/drbd-doc/users-guide
+UG8.4/en/output-html/images/ usr/share/doc/drbd-doc/users-guide
+UG8.4/en/output-pdf/drbd-users-guide.pdf usr/share/doc/drbd-doc
diff -Nru drbd-doc-8.4~20151102/debian/gbp.conf drbd-doc-8.4~20220106/debian/gbp.conf
--- drbd-doc-8.4~20151102/debian/gbp.conf	2020-06-26 21:53:39.000000000 +0000
+++ drbd-doc-8.4~20220106/debian/gbp.conf	2022-01-31 11:09:06.000000000 +0000
@@ -1,3 +1,10 @@
+[buildpackage]
+no-create-orig = False
+pristine-tar-commit = True
+
+[dch]
+git-log = --first-parent
+
 [import-orig]
 # We do not want git-import-orig to work.
 # See debian/README.packaging.
diff -Nru drbd-doc-8.4~20151102/debian/patches/css-enhancements.patch drbd-doc-8.4~20220106/debian/patches/css-enhancements.patch
--- drbd-doc-8.4~20151102/debian/patches/css-enhancements.patch	2020-06-26 21:53:39.000000000 +0000
+++ drbd-doc-8.4~20220106/debian/patches/css-enhancements.patch	1970-01-01 00:00:00.000000000 +0000
@@ -1,36 +0,0 @@
-Author: Apollon Oikonomopoulos
-Description: CSS enhancements
- Use rounded corners in code listings and notes.
-
-Forwarded: not-needed
-Last-Update: 2014-07-06
---- a/drbd-howto-collection.css
-+++ b/drbd-howto-collection.css
-@@ -18,6 +18,10 @@
- /* Notes, warnings, and cautions get a light grey background, with large rounded corners */
- .note, .warning, .caution, .important, .tip {
- 	background: #d3d3d3;
-+	border-radius: 30px;
-+	-moz-border-radius: 30px;
-+	padding: 10px;
-+	margin-right: 0px !important;
- }
- .note:before, .warning:before, .caution:before, .important:before, .tip:before {
- 	background: transparent url(images/top-right.png) scroll no-repeat top right;
-@@ -42,8 +46,15 @@
- 
- /* programlistings and screen dumps get a light grey background, with small rounded corners. */
- .screen, .programlisting {
--	background: #d3d3d3;
-+	background: #f7f5f2;
-+	border-radius: 15px;
-+	-moz-border-radius: 15px;
-+	padding: 5px;
-+	margin-left: 2px;
-+	padding-left: 2px;
-+	border-left: 1px solid #ff6600;
- }
-+
- .screen:before, .programlisting:before {
- 	background: transparent url(images/top-right-small.png) scroll no-repeat top right;
- 	margin-bottom: -5px;
diff -Nru drbd-doc-8.4~20151102/debian/patches/drop-manpages.patch drbd-doc-8.4~20220106/debian/patches/drop-manpages.patch
--- drbd-doc-8.4~20151102/debian/patches/drop-manpages.patch	2020-06-26 21:53:39.000000000 +0000
+++ drbd-doc-8.4~20220106/debian/patches/drop-manpages.patch	1970-01-01 00:00:00.000000000 +0000
@@ -1,16 +0,0 @@
-Author: Apollon Oikonomopoulos
-Description: Do not embed the manpages in the User's Guide
- Embedding the manpages requires a checkout of the DRBD source.
-
-Last-Update: 2014-07-05
-Forwarded: not-needed
---- a/users-guide/drbd-users-guide.txt
-+++ b/users-guide/drbd-users-guide.txt
-@@ -48,7 +48,6 @@
- = Appendices
- 
- include::recent-changes.txt[]
--include::man-pages.txt[]
- 
- [index]
- = Index
diff -Nru drbd-doc-8.4~20151102/debian/patches/fix_inkscape.patch drbd-doc-8.4~20220106/debian/patches/fix_inkscape.patch
--- drbd-doc-8.4~20151102/debian/patches/fix_inkscape.patch	2020-06-26 22:15:09.000000000 +0000
+++ drbd-doc-8.4~20220106/debian/patches/fix_inkscape.patch	1970-01-01 00:00:00.000000000 +0000
@@ -1,45 +0,0 @@
-Description: Fix arguments to inkscape
-
-Author: Sudip Mukherjee
-Bug-Debian: https://bugs.debian.org/959655
-
----
-
---- drbd-doc-8.4~20151102.orig/makedoc/Makefile.am
-+++ drbd-doc-8.4~20151102/makedoc/Makefile.am
-@@ -341,7 +341,7 @@ endif
- # Generated images: Encapsulated PostScript from SVG
- if USE_INKSCAPE
- %.eps: %.svg
--	$(INKSCAPE) --file=$< --export-area-drawing --export-eps=$@
-+	$(INKSCAPE) --export-area-drawing --export-type=eps --export-filename=$@ $<
- endif
- 
- # Generated images: PNG from SVG
-@@ -349,7 +349,7 @@ endif
- if RENDER_SVG
- %.png: %.svg
- if USE_INKSCAPE
--	$(INKSCAPE) --file=$< --export-dpi=90 --export-area-drawing --export-png=$@
-+	$(INKSCAPE) --export-dpi=90 --export-area-drawing --export-type=png --export-filename=$@ $<
- endif
- if USE_RSVG
- 	$(RSVG) --dpi-x=90 --dpi-y=90 --format=png $< $@
-@@ -358,7 +358,7 @@ endif
- # Half-size PNG (from SVG)
- %-small.png: %.svg
- if USE_INKSCAPE
--	$(INKSCAPE) --file=$< --export-dpi=45 --export-area-drawing --export-png=$@
-+	$(INKSCAPE) --export-dpi=45 --export-area-drawing --export-type=png --export-filename=$@ $<
- endif
- if USE_RSVG
- 	$(RSVG) --dpi-x=45 --dpi-y=45 --format=png $< $@
-@@ -367,7 +367,7 @@ endif
- # Double-size PNG (from SVG)
- %-large.png: %.svg
- if USE_INKSCAPE
--	$(INKSCAPE) --file=$< --export-dpi=180 --export-area-drawing --export-png=$@
-+	$(INKSCAPE) --export-dpi=180 --export-area-drawing --export-type=png --export-filename=$@ $<
- endif
- if USE_RSVG
- 	$(RSVG) --dpi-x=180 --dpi-y=180 --format=png $< $@
diff -Nru drbd-doc-8.4~20151102/debian/patches/mention-drbd.conf.5.patch drbd-doc-8.4~20220106/debian/patches/mention-drbd.conf.5.patch
--- drbd-doc-8.4~20151102/debian/patches/mention-drbd.conf.5.patch	2020-06-26 21:53:39.000000000 +0000
+++ drbd-doc-8.4~20220106/debian/patches/mention-drbd.conf.5.patch	1970-01-01 00:00:00.000000000 +0000
@@ -1,40 +0,0 @@
-Author: Apollon Oikonomopoulos
-Description: Replace hyperlinks to drbd.conf(5)
- Mention drbd.conf(5) directly, because hyperlinks won't work as we don't ship
- the manpages.
-
-Forwarded: not-needed
-Last-Update: 2014-07-07
---- a/users-guide/administration.txt
-+++ b/users-guide/administration.txt
-@@ -1521,7 +1521,7 @@
- 
- NOTE: DRBD understands additional keywords for these three options,
- which have been omitted here because they are very rarely used. Refer
--to <> for details on split brain recovery keywords not
-+to +drbd.conf(5)+ for details on split brain recovery keywords not
- discussed here.
- 
- For example, a resource which serves as the block device for a GFS or
---- a/users-guide/configure.txt
-+++ b/users-guide/configure.txt
-@@ -123,7 +123,7 @@
- This section describes only those few aspects of the configuration
- file which are absolutely necessary to understand in order to get DRBD
- up and running. The configuration file's syntax and contents are
--documented in great detail in <>.
-+documented in great detail in +drbd.conf(5)+.
- 
- 
- [[s-drbdconf-example]]
---- a/users-guide/features.txt
-+++ b/users-guide/features.txt
-@@ -506,7 +506,7 @@
- 
- DRBD's <> is asynchronous, but the
- writing application will block as soon as the socket output buffer is
--full (see the sndbuf-size option in <>). In that event,
-+full (see the sndbuf-size option in +drbd.conf(5)+). In that event,
- the writing application has to wait until some of the data written
- runs off through a possibly small bandwidth network link.
- 
diff -Nru drbd-doc-8.4~20151102/debian/patches/series drbd-doc-8.4~20220106/debian/patches/series
--- drbd-doc-8.4~20151102/debian/patches/series	2020-06-26 22:14:08.000000000 +0000
+++ drbd-doc-8.4~20220106/debian/patches/series	1970-01-01 00:00:00.000000000 +0000
@@ -1,4 +0,0 @@
-drop-manpages.patch
-css-enhancements.patch
-mention-drbd.conf.5.patch
-fix_inkscape.patch
diff -Nru drbd-doc-8.4~20151102/debian/README.packaging drbd-doc-8.4~20220106/debian/README.packaging
--- drbd-doc-8.4~20151102/debian/README.packaging	2020-06-26 21:53:39.000000000 +0000
+++ drbd-doc-8.4~20220106/debian/README.packaging	1970-01-01 00:00:00.000000000 +0000
@@ -1,77 +0,0 @@
-drbd-doc packaging information
-━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
-
-1. General information
-——————————————————————
-The drbd-doc source combines the source from two distinct upstream locations:
-
- • The DRBD User's Guide[1]
- • The "makedoc" documentation build system[2]
-
-Makedoc is an autotools-based asciidoc build system written by LINBIT. Since it
-is exclusively used by the drbd documentation, we decided not to ship it as a
-separate package, but instead embed it into the drbd-doc source using the 3.0
-source format's multiple tarball support.
-
-
-2. Versioning scheme
-————————————————————
-Neither the User's Guide nor makedoc have actual versions, so we use a custom
-scheme of <drbd-branch>~<snapshot-date>-<debian-revision>, e.g.
-8.4~20130416-1.
-
-
-3. Git repository layout
-————————————————————————
-The packaging is maintained using a git repository at
-
-http://anonscm.debian.org/gitweb/?p=debian-ha/drbd-doc.git
-
-The repository has a git-buildpackage compatible layout with the following
-branches:
-
- • master: the branch the drbd-doc package is built from.
-
- • upstream/doc: a branch tracking upstream's drbd-documentation git
-   repository[1].
-
- • upstream/makedoc: a branch tracking upstream's makedoc git repository[2],
-   with all upstream source moved to makedoc/.
-
-
-4. git-buildpackage compatibility
-—————————————————————————————————
-The package can be built using git-buildpackage, as long as the upstream
-tarballs are found in the parent directory. Note that gbp does not (yet? see
-#561071) support multiple upstream branches, so the orig tarballs have to be
-generated manually using pristine-tar checkout.
-
-
-5. Importing a new upstream version
-———————————————————————————————————
-To generate a package from a new upstream version, follow these steps:
-
- ₁ Pull the upstream sources in upstream/doc and upstream/makedoc
-
- ₂ Tag the releases in one or both upstream branches if applicable, e.g.
-   git tag -s doc/20130416 upstream/doc
-
- ₃ Merge the tags to master:
-   git merge doc/20130416
-
- ₄ Generate *both* orig tarballs:
-   git archive --format=tar --prefix=drbd-doc-20130416/ doc/20130416 \
-     | xz > ../drbd-doc_20130416.orig.tar.xz
-   git archive --format=tar makedoc/20120726 \
-     | xz > ../drbd-doc_20130416.orig-makedoc.tar.xz
-
- ₅ Commit *both* tarballs to pristine-tar, specifying also the tag they were
-   generated from:
-   pristine-tar commit ../drbd-doc_8.4\~20130416.orig.tar.xz doc/20130416
-   pristine-tar commit ../drbd-doc_8.4\~20130416.orig-makedoc.tar.xz \
-     makedoc/20120726
-
-[1] git://git.drbd.org/drbd-documentation.git
-[2] https://github.com/fghaas/makedoc.git
-
- -- Apollon Oikonomopoulos  Mon, 07 Jul 2014 12:06:29 +0300
diff -Nru drbd-doc-8.4~20151102/debian/rules drbd-doc-8.4~20220106/debian/rules
--- drbd-doc-8.4~20151102/debian/rules	2020-06-26 21:53:39.000000000 +0000
+++ drbd-doc-8.4~20220106/debian/rules	2022-01-31 11:07:02.000000000 +0000
@@ -1,43 +1,17 @@
 #!/usr/bin/make -f
 #DH_VERBOSE = 1
 
-export MAKEDOC=$(CURDIR)/makedoc
+UG_HTML = "UG8.4/en/output-html/drbd-users-guide-without-css.html"
 
 %:
-	dh $@ --with autoreconf
+	dh $@
 
-override_dh_auto_configure:
-	./configure \
-	    --with-asciidoc-doctype=book \
-	    --enable-html-chunking \
-	    --enable-section-numbers \
-	    --enable-asciidoc-docinfo \
-	    --with-stylesheets=/usr/share/xml/docbook/stylesheet/docbook-xsl
-
 override_dh_auto_build:
-	cp drbd-howto-collection.css default.css
-	$(MAKE) -C users-guide all
-
-# make install does nothing useful
-override_dh_auto_install:
+	$(MAKE) -C UG8.4 html-finalize pdf
+	sed -i -e '\#<link rel="stylesheet" …#d' $(UG_HTML)

Feedback
Any questions or comments about this document are highly
appreciated and much encouraged. Please contact the author(s)
directly; contact email addresses are listed on the title
page.
For a public discussion about the concepts mentioned in this
white paper, you are invited to subscribe and post to the
drbd-user mailing list. Please see for
details.
diff -Nru drbd-doc-8.4~20151102/.gitignore drbd-doc-8.4~20220106/.gitignore
--- drbd-doc-8.4~20151102/.gitignore	1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/.gitignore	2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,12 @@
+tech-guides/
+linbit-fonts/
+UG9/*/output-pdf*
+UG9/*/output-html*
+UG8.4/*/output-pdf*
+UG8.4/*/output-html*
+UG8.4/ja/*.adoc
+UG9/ja/*.adoc
+UG8.4/cn/*.adoc
+UG9/cn/*.adoc
+UG8.4/en/*.pot
+UG9/en/*.pot
diff -Nru drbd-doc-8.4~20151102/.gitlab-ci.yml drbd-doc-8.4~20220106/.gitlab-ci.yml
--- drbd-doc-8.4~20151102/.gitlab-ci.yml	1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/.gitlab-ci.yml	2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,56 @@
+stages:
+  - build
+  - preview
+  - deploy
+
+build:
+  stage: build
+  image:
+    name: $CI_REGISTRY/linbit/linbit-documentation
+    entrypoint: [""]
+  rules:
+    - if: $CI_MERGE_REQUEST_ID
+  script:
+    - make UG9-html-finalize
+  artifacts:
+    paths:
+      - UG9/en/output-html-finalize/*.zip
+
+preview:
+  stage: preview
+  image: $LINBIT_DOCKER_REGISTRY/ug-preview:latest
+  rules:
+    - if: $CI_MERGE_REQUEST_ID
+  environment:
+    name: preview/$CI_COMMIT_REF_SLUG
+    url: $LINBIT_REGISTRY_URL/repository/pages/$CI_COMMIT_REF_SLUG/
+  script:
+    - zips=$(realpath ./UG9/en/output-html-finalize/*.zip)
+    - mkdir /index
+    - cd /build
+    - |
+      for zip in $zips; do
+        prefix=$(basename $zip .zip)
+        php linbit-drbd.php $zip /out
+        ./upload.sh $CI_COMMIT_REF_SLUG /out $prefix
+        echo "<a href=\"$prefix/\">$prefix</a>" >> /index/index.html
+      done
+    - ./upload.sh $CI_COMMIT_REF_SLUG /index .
+  dependencies:
+    - build
+
+deploy:
+  stage: deploy
+  image:
+    name: $CI_REGISTRY/linbit/linbit-documentation
+    entrypoint: [""]
+  rules:
+    - if: $CI_COMMIT_BRANCH == 'master'
+  before_script:
+    - echo "DEPLOY"
+  script:
+    - mkdir -p ~/src && cp -r . ~/src && cd ~/src
+    - cp /linbit-documentation/GNUmakefile .
+    - cp -r /linbit-documentation/genshingothic-fonts .
+    - make trusthosts all-clean DOCKER=no SFTPUSER=$SFTPUSERSTAGING STAGING=yes
+    - make trusthosts all DOCKER=no SFTPUSER=$SFTPUSER SKIPGENERATE=yes
diff -Nru drbd-doc-8.4~20151102/GNUmakefile drbd-doc-8.4~20220106/GNUmakefile
--- drbd-doc-8.4~20151102/GNUmakefile	2015-11-02 13:15:36.000000000 +0000
+++ drbd-doc-8.4~20220106/GNUmakefile	1970-01-01 00:00:00.000000000 +0000
@@ -1,29 +0,0 @@
-MAKEDOC_SYMLINK_TARGETS = $(MAKEDOC)/Makefile.am $(MAKEDOC)/configure.ac.stub $(MAKEDOC)/autogen.sh
-MAKEDOC_SYMLINKS = $(subst $(MAKEDOC)/, , $(MAKEDOC_SYMLINK_TARGETS))
-MISSING_VARIABLE = Please set the MAKEDOC variable pointing to your makedoc checkout.
-
-.PHONY: makedoc-symlinks
-makedoc-symlinks: $(MAKEDOC_SYMLINKS)
-ifndef MAKEDOC
-	$(warning $(MISSING_VARIABLE))
-endif
-	$(info Now run ./autogen.sh)
-
-.PHONY: clean-makedoc-symlinks
-clean-makedoc-symlinks:
-ifndef MAKEDOC
-	$(error $(MISSING_VARIABLE))
-else
-	rm $(MAKEDOC_SYMLINKS) -f
-endif
-
-$(MAKEDOC_SYMLINKS):
-ifndef MAKEDOC
-	$(warning $(MISSING_VARIABLE))
-else
-	ln -s $(MAKEDOC)/$@ .
-endif
-
-%:
-	@$(MAKE) -f Makefile $@
-
diff -Nru drbd-doc-8.4~20151102/howto-collection.xml drbd-doc-8.4~20220106/howto-collection.xml
--- drbd-doc-8.4~20151102/howto-collection.xml	2015-11-02 13:15:36.000000000 +0000
+++ drbd-doc-8.4~20220106/howto-collection.xml	1970-01-01 00:00:00.000000000 +0000
@@ -1,29 +0,0 @@
-The DRBD Howto collection
-LINBIT Information Technologies GmbH
-This is the definitive collection of Howto documents
-related to DRBD (Distributed Replicated Block Device). It will
-always remain a work in progress.
-The collection currently includes:
-The DRBD Users' Guide
-The collection does not yet include, but should include at
-some point:
-The DRBD Developers' Guide (to be written)
diff -Nru drbd-doc-8.4~20151102/images/al-extents-example.svg drbd-doc-8.4~20220106/images/al-extents-example.svg
--- drbd-doc-8.4~20151102/images/al-extents-example.svg	1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/images/al-extents-example.svg	2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,45 @@
+[new SVG drawing (45 lines of markup); no text labels]
diff -Nru drbd-doc-8.4~20151102/images/al-extents.svg drbd-doc-8.4~20220106/images/al-extents.svg
--- drbd-doc-8.4~20151102/images/al-extents.svg	1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/images/al-extents.svg	2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,38 @@
+[new SVG drawing (38 lines of markup); no text labels]
diff -Nru drbd-doc-8.4~20151102/images/connection-mesh.svg drbd-doc-8.4~20220106/images/connection-mesh.svg
--- drbd-doc-8.4~20151102/images/connection-mesh.svg	1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/images/connection-mesh.svg	2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,26 @@
+[new SVG drawing (26 lines of markup); no text labels]
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-dolphin-opensuse11.1-screenshot-cluster_edit.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-dolphin-opensuse11.1-screenshot-cluster_edit.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-dolphin-opensuse11.1-screenshot-ixadmin.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-dolphin-opensuse11.1-screenshot-ixadmin.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-dolphin-opensuse11.1-screenshot-ixadmin_test.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-dolphin-opensuse11.1-screenshot-ixadmin_test.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-geoclustering-rhel7-Screenshot_20170620_145416.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-geoclustering-rhel7-Screenshot_20170620_145416.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-geoclustering-rhel7-Screenshot_20170620_150005.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-geoclustering-rhel7-Screenshot_20170620_150005.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-geoclustering-rhel7-Screenshot_20170620_152241.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-geoclustering-rhel7-Screenshot_20170620_152241.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-geoclustering-rhel7-Screenshot_20170620_152306.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-geoclustering-rhel7-Screenshot_20170620_152306.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-geoclustering-rhel7-Screenshot_20170620_153624.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-geoclustering-rhel7-Screenshot_20170620_153624.png differ
diff -Nru drbd-doc-8.4~20151102/images/drbd8-geo-nodist-one-rz-one-res.svg drbd-doc-8.4~20220106/images/drbd8-geo-nodist-one-rz-one-res.svg
--- drbd-doc-8.4~20151102/images/drbd8-geo-nodist-one-rz-one-res.svg	1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/images/drbd8-geo-nodist-one-rz-one-res.svg	2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,779 @@
+[new SVG diagram (779 lines of markup); text labels: "Local HA"; "HA-Cluster with one Example Service. Data Center on Site 1, with Disaster Recovery to 2nd Site"; "Local IP" (twice); "to 2nd Data Center"; "Filesystem"; "Service"; "Service IP"; "Master"]
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-iscsi-nodist-screenshot-windows-add-portal.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-iscsi-nodist-screenshot-windows-add-portal.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-iscsi-nodist-screenshot-windows-chap-auth.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-iscsi-nodist-screenshot-windows-chap-auth.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-iscsi-nodist-screenshot-windows-connected-target.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-iscsi-nodist-screenshot-windows-connected-target.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-iscsi-nodist-screenshot-windows-discovered-target.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-iscsi-nodist-screenshot-windows-discovered-target.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-iscsi-nodist-screenshot-windows-diskman-initialized.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-iscsi-nodist-screenshot-windows-diskman-initialized.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-iscsi-nodist-screenshot-windows-diskman-initialize.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-iscsi-nodist-screenshot-windows-diskman-initialize.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-iscsi-nodist-screenshot-windows-diskman-uninitialized.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-iscsi-nodist-screenshot-windows-diskman-uninitialized.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-iscsi-nodist-screenshot-windows-logon-target.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-iscsi-nodist-screenshot-windows-logon-target.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-kvm-nodist-Screenshot-1.jpg and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-kvm-nodist-Screenshot-1.jpg differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-kvm-nodist-Screenshot-2.jpg and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-kvm-nodist-Screenshot-2.jpg differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-kvm-nodist-Screenshot-3.jpg and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-kvm-nodist-Screenshot-3.jpg differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-kvm-nodist-Screenshot-4.jpg and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-kvm-nodist-Screenshot-4.jpg differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-ovirt-rhel6-bigpicture.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-ovirt-rhel6-bigpicture.png differ
diff -Nru drbd-doc-8.4~20151102/images/drbd8-ovirt-rhel6-bigpicture.svg drbd-doc-8.4~20220106/images/drbd8-ovirt-rhel6-bigpicture.svg
--- drbd-doc-8.4~20151102/images/drbd8-ovirt-rhel6-bigpicture.svg	1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/images/drbd8-ovirt-rhel6-bigpicture.svg	2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,1964 @@
+[new SVG diagram of the oVirt/DRBD setup (1964 lines of markup); text labels: "kvm-ovirtm", "iscsi", "oVirtmVM", "store1", "oVirt iSCSI initiator", "VM", "local disk", "LVM", "DRBD", "iSCSI", "oVirt LVM", "oVirt VMs", "ovirt-hyp1", "ovirt-hyp2", "LV kvm_oVirtm", "LV iscsi"]
\ No newline at end of file
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-ovirt-rhel6-hyp_host.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-ovirt-rhel6-hyp_host.png differ
diff -Nru drbd-doc-8.4~20151102/images/drbd8-ovirt-rhel6-linbit-logo.svg drbd-doc-8.4~20220106/images/drbd8-ovirt-rhel6-linbit-logo.svg
--- drbd-doc-8.4~20151102/images/drbd8-ovirt-rhel6-linbit-logo.svg	1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/images/drbd8-ovirt-rhel6-linbit-logo.svg	2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,160 @@
+[new SVG: LINBIT logo artwork (160 lines of markup); no text labels]
\ No newline at end of file
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-ovirt-rhel6-note.jpg and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-ovirt-rhel6-note.jpg differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-ovirt-rhel6-storage_ini1.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-ovirt-rhel6-storage_ini1.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-ovirt-rhel6-storage_ini2.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-ovirt-rhel6-storage_ini2.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-ovirt-rhel6-storage_step1.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-ovirt-rhel6-storage_step1.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-ovirt-rhel6-storage_step2.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-ovirt-rhel6-storage_step2.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-ovirt-rhel6-storage_step3.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-ovirt-rhel6-storage_step3.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-ovirt-rhel6-warning.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-ovirt-rhel6-warning.png differ
diff -Nru drbd-doc-8.4~20151102/images/drbd8-proxy3-rhel6-memlimit.svg drbd-doc-8.4~20220106/images/drbd8-proxy3-rhel6-memlimit.svg
--- drbd-doc-8.4~20151102/images/drbd8-proxy3-rhel6-memlimit.svg	1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/images/drbd8-proxy3-rhel6-memlimit.svg	2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,71 @@
+[new SVG drawing (71 lines of markup); no text labels]
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-sas-ssd-5.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-sas-ssd-5.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-sas-ssd-rand-rw.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-sas-ssd-rand-rw.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-sas-ssd-seq-rw.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-sas-ssd-seq-rw.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-sata-ssd-5.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-sata-ssd-5.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-sata-ssd-intel-sata-ssd.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-sata-ssd-intel-sata-ssd.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-sata-ssd-rand-rw.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-sata-ssd-rand-rw.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd8-sata-ssd-seq-rw.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd8-sata-ssd-seq-rw.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd9-azure-ha-clustering-primer-lb-backend-pool.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd9-azure-ha-clustering-primer-lb-backend-pool.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd9-azure-ha-clustering-primer-lb-health-probe.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd9-azure-ha-clustering-primer-lb-health-probe.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd9-azure-ha-clustering-primer-lb-rules.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd9-azure-ha-clustering-primer-lb-rules.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd9-azure-ha-clustering-primer-lb-setup.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd9-azure-ha-clustering-primer-lb-setup.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd9-azure-ha-clustering-primer-network-security-group-inbound-rules.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd9-azure-ha-clustering-primer-network-security-group-inbound-rules.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd9-azure-ha-clustering-primer-network-security-group.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd9-azure-ha-clustering-primer-network-security-group.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd9-azure-ha-clustering-primer-virtual-machine-overview.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd9-azure-ha-clustering-primer-virtual-machine-overview.png differ
diff -Nru drbd-doc-8.4~20151102/images/drbd9-ceph-comparison-3ware+linbit-logo.svg drbd-doc-8.4~20220106/images/drbd9-ceph-comparison-3ware+linbit-logo.svg
--- drbd-doc-8.4~20151102/images/drbd9-ceph-comparison-3ware+linbit-logo.svg	1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/images/drbd9-ceph-comparison-3ware+linbit-logo.svg	2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,457 @@
+[new SVG: 3ware and LINBIT logo artwork (457 lines of markup); no text labels]
diff -Nru drbd-doc-8.4~20151102/images/drbd9-ceph-comparison-3warelogo.svg drbd-doc-8.4~20220106/images/drbd9-ceph-comparison-3warelogo.svg
--- drbd-doc-8.4~20151102/images/drbd9-ceph-comparison-3warelogo.svg	1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/images/drbd9-ceph-comparison-3warelogo.svg	2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,261 @@
+[new SVG: 3ware logo artwork (261 lines of markup); no text labels]
diff -Nru drbd-doc-8.4~20151102/images/drbd9-ceph-comparison-env.svg drbd-doc-8.4~20220106/images/drbd9-ceph-comparison-env.svg
--- drbd-doc-8.4~20151102/images/drbd9-ceph-comparison-env.svg	1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/images/drbd9-ceph-comparison-env.svg	2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,4265 @@
+[new SVG diagram of the benchmark environment (4265 lines of markup); text labels: "/dev/drbd100", "DRBD", "Ceph", "/dev/rbd0", "10Gbps" (twice)]
diff -Nru drbd-doc-8.4~20151102/images/drbd9-ceph-comparison-rnd-rd-bw.svg drbd-doc-8.4~20220106/images/drbd9-ceph-comparison-rnd-rd-bw.svg
--- drbd-doc-8.4~20151102/images/drbd9-ceph-comparison-rnd-rd-bw.svg	1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/images/drbd9-ceph-comparison-rnd-rd-bw.svg	2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,418 @@
+[new SVG chart: random-read bandwidth comparison (418 lines of markup); axis/legend text not recovered]
diff -Nru drbd-doc-8.4~20151102/images/drbd9-ceph-comparison-rnd-rd-iops.svg drbd-doc-8.4~20220106/images/drbd9-ceph-comparison-rnd-rd-iops.svg
--- drbd-doc-8.4~20151102/images/drbd9-ceph-comparison-rnd-rd-iops.svg	1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/images/drbd9-ceph-comparison-rnd-rd-iops.svg	2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,365 @@
+[new SVG chart: random-read IOPS comparison (365 lines of markup); axis/legend text not recovered]
diff -Nru drbd-doc-8.4~20151102/images/drbd9-ceph-comparison-rnd-wr-bw.svg drbd-doc-8.4~20220106/images/drbd9-ceph-comparison-rnd-wr-bw.svg
--- drbd-doc-8.4~20151102/images/drbd9-ceph-comparison-rnd-wr-bw.svg	1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/images/drbd9-ceph-comparison-rnd-wr-bw.svg	2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,412 @@
+[new SVG chart: random-write bandwidth comparison (412 lines of markup); axis/legend text not recovered]
diff -Nru drbd-doc-8.4~20151102/images/drbd9-ceph-comparison-rnd-wr-iops.svg drbd-doc-8.4~20220106/images/drbd9-ceph-comparison-rnd-wr-iops.svg
--- drbd-doc-8.4~20151102/images/drbd9-ceph-comparison-rnd-wr-iops.svg	1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/images/drbd9-ceph-comparison-rnd-wr-iops.svg	2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,365 @@
+[new SVG chart: random-write IOPS comparison (365 lines of markup); axis/legend text not recovered]
diff -Nru drbd-doc-8.4~20151102/images/drbd9-ceph-comparison-seq-rd-bw.svg drbd-doc-8.4~20220106/images/drbd9-ceph-comparison-seq-rd-bw.svg
--- drbd-doc-8.4~20151102/images/drbd9-ceph-comparison-seq-rd-bw.svg	1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/images/drbd9-ceph-comparison-seq-rd-bw.svg	2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,405 @@
+[new SVG chart: sequential-read bandwidth comparison (405 lines of markup); axis/legend text not recovered]
diff -Nru drbd-doc-8.4~20151102/images/drbd9-ceph-comparison-seq-rd-iops.svg drbd-doc-8.4~20220106/images/drbd9-ceph-comparison-seq-rd-iops.svg
--- drbd-doc-8.4~20151102/images/drbd9-ceph-comparison-seq-rd-iops.svg	1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/images/drbd9-ceph-comparison-seq-rd-iops.svg	2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,358 @@
+[new SVG chart: sequential-read IOPS comparison (358 lines of markup); axis/legend text not recovered]
diff -Nru drbd-doc-8.4~20151102/images/drbd9-ceph-comparison-seq-wr-bw.svg drbd-doc-8.4~20220106/images/drbd9-ceph-comparison-seq-wr-bw.svg
--- drbd-doc-8.4~20151102/images/drbd9-ceph-comparison-seq-wr-bw.svg	1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/images/drbd9-ceph-comparison-seq-wr-bw.svg	2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,409 @@
+[new SVG chart: sequential-write bandwidth comparison (409 lines of markup); axis/legend text not recovered]
diff -Nru drbd-doc-8.4~20151102/images/drbd9-ceph-comparison-seq-wr-iops.svg drbd-doc-8.4~20220106/images/drbd9-ceph-comparison-seq-wr-iops.svg
--- drbd-doc-8.4~20151102/images/drbd9-ceph-comparison-seq-wr-iops.svg	1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/images/drbd9-ceph-comparison-seq-wr-iops.svg	2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,358 @@
+[new SVG chart: sequential-write IOPS comparison (358 lines of markup); axis/legend text not recovered]
diff -Nru drbd-doc-8.4~20151102/images/drbd9-infiniband-hgst-drbd.mixed2.svg drbd-doc-8.4~20220106/images/drbd9-infiniband-hgst-drbd.mixed2.svg
--- drbd-doc-8.4~20151102/images/drbd9-infiniband-hgst-drbd.mixed2.svg	1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/images/drbd9-infiniband-hgst-drbd.mixed2.svg	2022-01-31
09:40:31.000000000 +0000 @@ -0,0 +1,445 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-infiniband-hgst-drbd.read2.svg drbd-doc-8.4~20220106/images/drbd9-infiniband-hgst-drbd.read2.svg --- drbd-doc-8.4~20151102/images/drbd9-infiniband-hgst-drbd.read2.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-infiniband-hgst-drbd.read2.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,446 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-infiniband-hgst-drbd.read.t1.svg drbd-doc-8.4~20220106/images/drbd9-infiniband-hgst-drbd.read.t1.svg --- drbd-doc-8.4~20151102/images/drbd9-infiniband-hgst-drbd.read.t1.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-infiniband-hgst-drbd.read.t1.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,446 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-infiniband-hgst-drbd.read.t8.svg 
drbd-doc-8.4~20220106/images/drbd9-infiniband-hgst-drbd.read.t8.svg --- drbd-doc-8.4~20151102/images/drbd9-infiniband-hgst-drbd.read.t8.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-infiniband-hgst-drbd.read.t8.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,450 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-infiniband-hgst-drbd.write.svg drbd-doc-8.4~20220106/images/drbd9-infiniband-hgst-drbd.write.svg --- drbd-doc-8.4~20151102/images/drbd9-infiniband-hgst-drbd.write.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-infiniband-hgst-drbd.write.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,482 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-infiniband-hgst-drbd.write.t8.svg drbd-doc-8.4~20220106/images/drbd9-infiniband-hgst-drbd.write.t8.svg --- drbd-doc-8.4~20151102/images/drbd9-infiniband-hgst-drbd.write.t8.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-infiniband-hgst-drbd.write.t8.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,496 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-infiniband-hgst-hgst.read.svg drbd-doc-8.4~20220106/images/drbd9-infiniband-hgst-hgst.read.svg --- drbd-doc-8.4~20151102/images/drbd9-infiniband-hgst-hgst.read.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-infiniband-hgst-hgst.read.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,442 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-infiniband-hgst-hgst.write.t1.svg drbd-doc-8.4~20220106/images/drbd9-infiniband-hgst-hgst.write.t1.svg --- drbd-doc-8.4~20151102/images/drbd9-infiniband-hgst-hgst.write.t1.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-infiniband-hgst-hgst.write.t1.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,481 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-infiniband-hgst-hgst.write.t8.svg drbd-doc-8.4~20220106/images/drbd9-infiniband-hgst-hgst.write.t8.svg --- drbd-doc-8.4~20151102/images/drbd9-infiniband-hgst-hgst.write.t8.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-infiniband-hgst-hgst.write.t8.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,461 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd9-kvm-rhel8-ss-1.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd9-kvm-rhel8-ss-1.png differ Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd9-kvm-rhel8-ss-2.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd9-kvm-rhel8-ss-2.png differ Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd9-kvm-rhel8-ss-3.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd9-kvm-rhel8-ss-3.png differ Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/drbd9-kvm-rhel8-ss-4.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/drbd9-kvm-rhel8-ss-4.png differ diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-rd-bw.cpu.basic.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-rd-bw.cpu.basic.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-rd-bw.cpu.basic.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-rd-bw.cpu.basic.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,392 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-rd-bw.cpu.per-io.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-rd-bw.cpu.per-io.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-rd-bw.cpu.per-io.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-rd-bw.cpu.per-io.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,380 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-rd-bw.cpu.per-mb.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-rd-bw.cpu.per-mb.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-rd-bw.cpu.per-mb.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-rd-bw.cpu.per-mb.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,360 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-rd-bw.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-rd-bw.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-rd-bw.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-rd-bw.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,370 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-rd-iops.cpu.basic.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-rd-iops.cpu.basic.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-rd-iops.cpu.basic.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-rd-iops.cpu.basic.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,392 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-rd-iops.cpu.per-io.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-rd-iops.cpu.per-io.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-rd-iops.cpu.per-io.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-rd-iops.cpu.per-io.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,380 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-rd-iops.cpu.per-mb.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-rd-iops.cpu.per-mb.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-rd-iops.cpu.per-mb.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-rd-iops.cpu.per-mb.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,360 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-rd-iops.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-rd-iops.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-rd-iops.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-rd-iops.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,313 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-wr-bw.cpu.basic.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-wr-bw.cpu.basic.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-wr-bw.cpu.basic.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-wr-bw.cpu.basic.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,369 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-wr-bw.cpu.per-io.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-wr-bw.cpu.per-io.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-wr-bw.cpu.per-io.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-wr-bw.cpu.per-io.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,362 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-wr-bw.cpu.per-mb.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-wr-bw.cpu.per-mb.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-wr-bw.cpu.per-mb.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-wr-bw.cpu.per-mb.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,337 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-wr-bw.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-wr-bw.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-wr-bw.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-wr-bw.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,344 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-wr-iops.cpu.basic.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-wr-iops.cpu.basic.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-wr-iops.cpu.basic.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-wr-iops.cpu.basic.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,369 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-wr-iops.cpu.per-io.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-wr-iops.cpu.per-io.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-wr-iops.cpu.per-io.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-wr-iops.cpu.per-io.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,362 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru 
drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-wr-iops.cpu.per-mb.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-wr-iops.cpu.per-mb.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-wr-iops.cpu.per-mb.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-wr-iops.cpu.per-mb.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,337 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-wr-iops.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-wr-iops.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-rnd-wr-iops.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-rnd-wr-iops.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,287 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-rd-bw.cpu.basic.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-rd-bw.cpu.basic.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-rd-bw.cpu.basic.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-rd-bw.cpu.basic.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,386 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-rd-bw.cpu.per-io.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-rd-bw.cpu.per-io.svg --- 
drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-rd-bw.cpu.per-io.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-rd-bw.cpu.per-io.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,373 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-rd-bw.cpu.per-mb.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-rd-bw.cpu.per-mb.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-rd-bw.cpu.per-mb.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-rd-bw.cpu.per-mb.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,371 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-rd-bw.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-rd-bw.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-rd-bw.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-rd-bw.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,370 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-rd-iops.cpu.basic.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-rd-iops.cpu.basic.svg --- 
drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-rd-iops.cpu.basic.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-rd-iops.cpu.basic.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,386 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-rd-iops.cpu.per-io.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-rd-iops.cpu.per-io.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-rd-iops.cpu.per-io.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-rd-iops.cpu.per-io.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,373 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-rd-iops.cpu.per-mb.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-rd-iops.cpu.per-mb.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-rd-iops.cpu.per-mb.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-rd-iops.cpu.per-mb.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,371 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-rd-iops.svg 
drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-rd-iops.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-rd-iops.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-rd-iops.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,313 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-wr-bw.cpu.basic.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-wr-bw.cpu.basic.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-wr-bw.cpu.basic.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-wr-bw.cpu.basic.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,369 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-wr-bw.cpu.per-io.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-wr-bw.cpu.per-io.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-wr-bw.cpu.per-io.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-wr-bw.cpu.per-io.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,350 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-wr-bw.cpu.per-mb.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-wr-bw.cpu.per-mb.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-wr-bw.cpu.per-mb.svg 1970-01-01 00:00:00.000000000 
+0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-wr-bw.cpu.per-mb.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,343 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-wr-bw.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-wr-bw.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-wr-bw.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-wr-bw.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,344 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-wr-iops.cpu.basic.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-wr-iops.cpu.basic.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-wr-iops.cpu.basic.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-wr-iops.cpu.basic.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,369 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-wr-iops.cpu.per-io.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-wr-iops.cpu.per-io.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-wr-iops.cpu.per-io.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-wr-iops.cpu.per-io.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,350 @@ + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-wr-iops.cpu.per-mb.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-wr-iops.cpu.per-mb.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-wr-iops.cpu.per-mb.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-wr-iops.cpu.per-mb.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,343 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-wr-iops.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-wr-iops.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-1-seq-wr-iops.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-1-seq-wr-iops.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,287 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-4-rnd-rd-bw.cpu.basic.svg drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-4-rnd-rd-bw.cpu.basic.svg --- drbd-doc-8.4~20151102/images/drbd9-openstack-comparison-4-rnd-rd-bw.cpu.basic.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/drbd9-openstack-comparison-4-rnd-rd-bw.cpu.basic.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,380 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
[SVG markup stripped in extraction; for each file below, only the file name, the count of added lines, and any embedded text labels are recoverable. Every file in this listing is new in drbd-doc-8.4~20220106 (timestamped 2022-01-31 09:40:31.000000000 +0000) and absent from drbd-doc-8.4~20151102.]

New files under images/:

  drbd9-openstack-comparison-4-rnd-rd-bw.cpu.per-io.svg     (+369 lines)
  drbd9-openstack-comparison-4-rnd-rd-bw.cpu.per-mb.svg     (+357 lines)
  drbd9-openstack-comparison-4-rnd-rd-bw.svg                (+361 lines)
  drbd9-openstack-comparison-4-rnd-rd-iops.cpu.basic.svg    (+380 lines)
  drbd9-openstack-comparison-4-rnd-rd-iops.cpu.per-io.svg   (+369 lines)
  drbd9-openstack-comparison-4-rnd-rd-iops.cpu.per-mb.svg   (+357 lines)
  drbd9-openstack-comparison-4-rnd-rd-iops.svg              (+322 lines)
  drbd9-openstack-comparison-4-rnd-wr-bw.cpu.basic.svg      (+371 lines)
  drbd9-openstack-comparison-4-rnd-wr-bw.cpu.per-io.svg     (+377 lines)
  drbd9-openstack-comparison-4-rnd-wr-bw.cpu.per-mb.svg     (+341 lines)
  drbd9-openstack-comparison-4-rnd-wr-bw.svg                (+344 lines)
  drbd9-openstack-comparison-4-rnd-wr-iops.cpu.basic.svg    (+371 lines)
  drbd9-openstack-comparison-4-rnd-wr-iops.cpu.per-io.svg   (+377 lines)
  drbd9-openstack-comparison-4-rnd-wr-iops.cpu.per-mb.svg   (+341 lines)
  drbd9-openstack-comparison-4-rnd-wr-iops.svg              (+294 lines)
  drbd9-openstack-comparison-4-seq-rd-bw.cpu.basic.svg      (+385 lines)
  drbd9-openstack-comparison-4-seq-rd-bw.cpu.per-io.svg     (+384 lines)
  drbd9-openstack-comparison-4-seq-rd-bw.cpu.per-mb.svg     (+374 lines)
  drbd9-openstack-comparison-4-seq-rd-bw.svg                (+361 lines)
  drbd9-openstack-comparison-4-seq-rd-iops.cpu.basic.svg    (+385 lines)
  drbd9-openstack-comparison-4-seq-rd-iops.cpu.per-io.svg   (+384 lines)
  drbd9-openstack-comparison-4-seq-rd-iops.cpu.per-mb.svg   (+374 lines)
  drbd9-openstack-comparison-4-seq-rd-iops.svg              (+322 lines)
  drbd9-openstack-comparison-4-seq-wr-bw.cpu.basic.svg      (+377 lines)
  drbd9-openstack-comparison-4-seq-wr-bw.cpu.per-io.svg     (+354 lines)
  drbd9-openstack-comparison-4-seq-wr-bw.cpu.per-mb.svg     (+346 lines)
  drbd9-openstack-comparison-4-seq-wr-bw.svg                (+344 lines)
  drbd9-openstack-comparison-4-seq-wr-iops.cpu.basic.svg    (+377 lines)
  drbd9-openstack-comparison-4-seq-wr-iops.cpu.per-io.svg   (+354 lines)
  drbd9-openstack-comparison-4-seq-wr-iops.cpu.per-mb.svg   (+346 lines)
  drbd9-openstack-comparison-4-seq-wr-iops.svg              (+294 lines)
  drbd9-openstack-comparison-rnd-rd-lat-4.svg               (+502 lines)
  drbd9-openstack-comparison-rnd-wr-lat-4.svg               (+496 lines)
  drbd9-openstack-comparison-seq-rd-lat-4.svg               (+512 lines)
  drbd9-openstack-comparison-seq-wr-lat-4.svg               (+496 lines)
  drbd-in-kernel.svg                          (+1466 lines; two I/O stacks labeled: SERVICE, FILE SYSTEM, PAGE CACHE, I/O SCHEDULER, DISK DRIVER, RAW DEVICE, NETWORK STACK, NIC DRIVER)
  drbdmanage-venn.svg                         (+1143 lines)
  drbd-pacemaker-floating-peers.svg           (+1171 lines; labels: Pacemaker Cluster (local), Pacemaker Cluster (remote), Shared Storage, left, left (alternate), right, right (alternate))
  drbd-plus-logo.svg                          (+64 lines)
  drbd-resource-stacking-pacemaker-3nodes.svg (+1137 lines; labels: Pacemaker Cluster (local), Stand-Alone Node (remote), alice, bob, charlie, left, stacked)
  drbd-resource-stacking-pacemaker-4nodes.svg (+765 lines; labels: Pacemaker Cluster (local), Pacemaker Cluster (remote), alice, bob, charlie, daisy, left, right, stacked)
  drbd-resource-stacking.svg                  (+6525 lines; labels: Primary, Secondary, Backup, Upper layer, Lower layer, A, C)

Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/eks-linstor-arch-diagram.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/eks-linstor-arch-diagram.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/eks-linstor-grafana-dash-0.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/eks-linstor-grafana-dash-0.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/eks-linstor-lt-create-0.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/eks-linstor-lt-create-0.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/eks-linstor-lt-create-1.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/eks-linstor-lt-create-1.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/eks-linstor-lt-create-2.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/eks-linstor-lt-create-2.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/eks-linstor-lt-create-3.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/eks-linstor-lt-create-3.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/eks-linstor-perf-graphs.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/eks-linstor-perf-graphs.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/eks-linstor-validation-0.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/eks-linstor-validation-0.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/eks-linstor-validation-2.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/eks-linstor-validation-2.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/eks-linstor-validation-4.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/eks-linstor-validation-4.png differ

  gi-changes-newgen.svg        (+510 lines; Before/After panels labeled: Historical ..., Historical (1), Bitmap (empty)/Bitmap, Current)
  gi-changes-synccomplete.svg  (+538 lines; Before/After panels labeled: Historical (2), Historical (1), Bitmap/Bitmap (empty), Current)
  gi-changes-syncstart.svg     (+510 lines; Before/After panels labeled: Historical (2), Historical (1), Bitmap, Current)

Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/iscsi-kvm-nodist-add-pool-icon.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/iscsi-kvm-nodist-add-pool-icon.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/iscsi-kvm-nodist-screenshot-new-storage-pool-1.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/iscsi-kvm-nodist-screenshot-new-storage-pool-1.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/iscsi-kvm-nodist-screenshot-new-storage-pool-2.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/iscsi-kvm-nodist-screenshot-new-storage-pool-2.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/iscsi-kvm-nodist-screenshot-new-vm-1.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/iscsi-kvm-nodist-screenshot-new-vm-1.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/iscsi-kvm-nodist-screenshot-new-vm-2.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/iscsi-kvm-nodist-screenshot-new-vm-2.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/iscsi-kvm-nodist-screenshot-new-vm-3.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/iscsi-kvm-nodist-screenshot-new-vm-3.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/iscsi-kvm-nodist-screenshot-new-vm-4.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/iscsi-kvm-nodist-screenshot-new-vm-4.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/iscsi-kvm-nodist-screenshot-new-vm-5.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/iscsi-kvm-nodist-screenshot-new-vm-5.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/iscsi-kvm-nodist-screenshot-new-vm-6.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/iscsi-kvm-nodist-screenshot-new-vm-6.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/iscsi-kvm-nodist-screenshot-new-vm-7.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/iscsi-kvm-nodist-screenshot-new-vm-7.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/iscsi-kvm-nodist-screenshot-new-vm-8.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/iscsi-kvm-nodist-screenshot-new-vm-8.png differ

  linbit-dolphin-logo.svg      (+339 lines)
  linbit-hcomb.svg             (+1 line; text: LinBit_Comb_3; no newline at end of file)
  linbit-logo-2017.svg         (+37 lines)
  linbit-logo.svg              (+204 lines)

Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/linstor-exos-integration.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/linstor-exos-integration.png differ

  lvm.svg                      (+70 lines; labels: LV, sLV, Volume Group (VG), PV)

diff -Nru drbd-doc-8.4~20151102/images/Makefile.am drbd-doc-8.4~20220106/images/Makefile.am
--- drbd-doc-8.4~20151102/images/Makefile.am	2015-11-02 13:15:36.000000000 +0000
+++ drbd-doc-8.4~20220106/images/Makefile.am	1970-01-01 00:00:00.000000000 +0000
@@ -1,12 +0,0 @@
-# Some useful wildcard expansions
-SVG_FILES ?= $(wildcard *.svg)
-
-if RENDER_SVG
-.PHONY: png
-png: $(SVG_FILES:.svg=.png)
-endif
-
-%: force
-	@$(MAKE) -f $(top_srcdir)/Makefile $@
-
-force: ;

  memlimit.svg                              (+71 lines)
  metadata-size-approx.svg                  (+44 lines)
  metadata-size-exact.svg                   (+43 lines)
  nagios.svg                                (+37 lines)
  quorum-tiebreaker-disconnect-case1a.svg   (+324 lines; node labels A, B, C)
  quorum-tiebreaker-disconnect-case2a.svg   (+341 lines; node labels A, B, C)
  quorum-tiebreaker-disconnect-case2.svg    (+175 lines; node labels A, B, C)
  quorum-tiebreaker-disconnect-case3.svg    (+175 lines; node labels A, B, C)
  quorum-tiebreaker-disconnect.svg          (+156 lines; node labels A, B, C)
  quorum-tiebreaker.svg                     (+141 lines; node labels A, B, C)
  quorum-tiebreaker-without-disconnect.svg  (+153 lines; node labels A, B)
  quorum-tiebreaker-without.svg             (+114 lines; node labels A, B)
+ + diff -Nru drbd-doc-8.4~20151102/images/rebalance.svg drbd-doc-8.4~20220106/images/rebalance.svg --- drbd-doc-8.4~20151102/images/rebalance.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/rebalance.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,422 @@ + + + + + + + + + + + + image/svg+xml + + + + + + + + + + + Storage Node 1 +   + + + + + Storage Node 2 + + + + + Storage Node 3 + + + + + Storage Node 1 + + + + + Storage Node 2 + + + + + Storage Node 3 + + + + + Storage Node 4 + + diff -Nru drbd-doc-8.4~20151102/images/resync-time.svg drbd-doc-8.4~20220106/images/resync-time.svg --- drbd-doc-8.4~20151102/images/resync-time.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/resync-time.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,29 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/sap-aws-0.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/sap-aws-0.png differ Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/sap-aws-10.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/sap-aws-10.png differ Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/sap-aws-11.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/sap-aws-11.png differ Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/sap-aws-1.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/sap-aws-1.png differ Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/sap-aws-2.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/sap-aws-2.png differ Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/sap-aws-3.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/sap-aws-3.png differ Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/sap-aws-4.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/sap-aws-4.png differ Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/sap-aws-5.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/sap-aws-5.png differ Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/sap-aws-6.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/sap-aws-6.png differ Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/sap-aws-7.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/sap-aws-7.png differ Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/sap-aws-8.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/sap-aws-8.png differ Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/sap-aws-9.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/sap-aws-9.png differ Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/sap-aws-ra.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/sap-aws-ra.png differ diff -Nru drbd-doc-8.4~20151102/images/satellitecluster.svg drbd-doc-8.4~20220106/images/satellitecluster.svg --- drbd-doc-8.4~20151102/images/satellitecluster.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/satellitecluster.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,614 @@ + + + + + + + + + + + + + + + + + + + image/svg+xml + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + +   + Current leader + DRBD 9protocolcontrolvolume + + Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/Screenshot_20170620_145416.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/Screenshot_20170620_145416.png differ Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/Screenshot_20170620_150005.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/Screenshot_20170620_150005.png differ Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/Screenshot_20170620_150241.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/Screenshot_20170620_150241.png differ Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/Screenshot_20170620_152241.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/Screenshot_20170620_152241.png differ Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/Screenshot_20170620_152306.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/Screenshot_20170620_152306.png differ Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/Screenshot_20170620_152714.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/Screenshot_20170620_152714.png differ Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/Screenshot_20170620_153624.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/Screenshot_20170620_153624.png differ diff -Nru drbd-doc-8.4~20151102/images/single-stacked.svg drbd-doc-8.4~20220106/images/single-stacked.svg --- drbd-doc-8.4~20151102/images/single-stacked.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/single-stacked.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,693 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + image/svg+xml + + + + + + +   + + Storage + + + + + + + + + + + + + + + + + + + + + + + + + Storage + + + + + + + + + + + + + + + + + + + + + + + + + Storage + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + HA-Cluster + Backup +   + Application writes(filesystem) + + r0 + r0-U + + + diff -Nru drbd-doc-8.4~20151102/images/sync-rate-example1.svg drbd-doc-8.4~20220106/images/sync-rate-example1.svg --- drbd-doc-8.4~20151102/images/sync-rate-example1.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/sync-rate-example1.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,36 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff -Nru drbd-doc-8.4~20151102/images/sync-rate-example2.svg drbd-doc-8.4~20220106/images/sync-rate-example2.svg --- drbd-doc-8.4~20151102/images/sync-rate-example2.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/sync-rate-example2.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,37 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/vdo-cp-a.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/vdo-cp-a.png differ diff -Nru drbd-doc-8.4~20151102/images/vsan-architecture.svg drbd-doc-8.4~20220106/images/vsan-architecture.svg --- drbd-doc-8.4~20151102/images/vsan-architecture.svg 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/images/vsan-architecture.svg 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,3 @@ + + +
+[draw.io SVG markup stripped during extraction; surviving diagram labels: "Hypervisor 01" through "Hypervisor N" hosting "VM 01" through "VM 04", local disks "HDD", "SSD", "NVMe", "LINBIT VSAN Appliance 01" through "Appliance N", and "iSCSI" connections, plus the draw.io fallback notice "Viewer does not support full SVG 1.1"]
\ No newline at end of file
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/vsan-connect-automatically.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/vsan-connect-automatically.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-accept-license.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-accept-license.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-asking-for-format.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-asking-for-format.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-control-panel-system-and-security.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-control-panel-system-and-security.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-disk-management.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-disk-management.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-firewall-advanced-settings.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-firewall-advanced-settings.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-firewall-allow-the-connection.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-firewall-allow-the-connection.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-firewall-enter-port.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-firewall-enter-port.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-firewall-name.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-firewall-name.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-firewall.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-firewall.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-firewall-profiles.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-firewall-profiles.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-firewall-select-port-type.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-firewall-select-port-type.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-format.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-format.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-gparted-create-partition.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-gparted-create-partition.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-gparted-create-partition-table.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-gparted-create-partition-table.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-partition-drive-letter.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-partition-drive-letter.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-partition-enter-size.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-partition-enter-size.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-partition-finish.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-partition-finish.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-partition-format.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-partition-format.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-ready-to-install.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-ready-to-install.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-reboot.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-reboot.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-run-installer.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-run-installer.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-select-language.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-select-language.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-upgrade-windrbd.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-upgrade-windrbd.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-virtualbox-create-disk.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-virtualbox-create-disk.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-virtualbox-create-vm.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-virtualbox-create-vm.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-virtualbox-disable-dhcp-server.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-virtualbox-disable-dhcp-server.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-virtualbox-insert-iso.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-virtualbox-insert-iso.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-virtualbox-network.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-virtualbox-network.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-virtualbox-set-ram.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-virtualbox-set-ram.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-virtualbox-set-writethrough.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-virtualbox-set-writethrough.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-windows-install-bus-device.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-windows-install-bus-device.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-windows-regedit.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-windows-regedit.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-windows-regedit-syslog.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-windows-regedit-syslog.png differ
Binary files /tmp/tmp0ct44hw1/f2I6DoUb8p/drbd-doc-8.4~20151102/images/windrbd-windows-set-ip-address.png and /tmp/tmp0ct44hw1/FYtFws2Vjd/drbd-doc-8.4~20220106/images/windrbd-windows-set-ip-address.png differ
diff -Nru drbd-doc-8.4~20151102/makedoc/autogen.sh drbd-doc-8.4~20220106/makedoc/autogen.sh
--- drbd-doc-8.4~20151102/makedoc/autogen.sh	2014-07-06 18:13:23.000000000 +0000
+++ drbd-doc-8.4~20220106/makedoc/autogen.sh	1970-01-01 00:00:00.000000000 +0000
@@ -1,17 +0,0 @@
-#!/bin/sh
-# Run this to generate all the initial makefiles, etc.
-set -e
-
-AMFILES=`find -name 'Makefile.am' -printf '%p ' | sed -e 's,\./\([^ ]*\)\.am ,\1 ,g'`
-INFILES=`find -name '*.in' -not -name 'Makefile.in' -printf '%p ' | sed -e 's,\./\([^ ]*\)\.in ,\1 ,g'`
-
-cat configure.ac.stub - > configure.ac <
-[the remainder of this heredoc was truncated during extraction]
-[the GPL text that followed here belongs to a deleted license file whose diff header was lost during extraction; it is the standard GNU GPL v2 "How to Apply These Terms to Your New Programs" appendix, from the "Copyright (C)" template through "...use the GNU Lesser General Public License instead of this License."]
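Note: the heredoc body of the deleted autogen.sh did not survive extraction, so the exact configure.ac generation is lost. Purely as a hedged sketch of the pattern the surviving lines suggest — the AC_CONFIG_FILES/AC_OUTPUT stanza and the autoreconf call below are assumptions, not the original text:

----
#!/bin/sh
# Hypothetical sketch only -- the original heredoc body was lost.
set -e
# Collect Makefile.am templates, as the surviving lines do:
AMFILES=`find -name 'Makefile.am' -printf '%p ' | sed -e 's,\./\([^ ]*\)\.am ,\1 ,g'`
# Append an (assumed) output-file stanza to the stub, then regenerate
# the autotools build system:
cat configure.ac.stub - > configure.ac <<EOF
AC_CONFIG_FILES([$AMFILES])
AC_OUTPUT
EOF
autoreconf --install
----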
diff -Nru drbd-doc-8.4~20151102/makedoc/fonts/Makefile.am drbd-doc-8.4~20220106/makedoc/fonts/Makefile.am
--- drbd-doc-8.4~20151102/makedoc/fonts/Makefile.am	2014-07-06 18:13:23.000000000 +0000
+++ drbd-doc-8.4~20220106/makedoc/fonts/Makefile.am	1970-01-01 00:00:00.000000000 +0000
@@ -1,9 +0,0 @@
-# Some useful wildcard expansions
-TTF_FILES ?= $(wildcard *.ttf)
-
-if FONT_METRICS_TTF
-xml: $(TTF_FILES:.ttf=.xml)
-
-%.xml: %.ttf
-	$(FOP_TTFREADER) $< $@
-endif
diff -Nru drbd-doc-8.4~20151102/makedoc/images/caution.svg drbd-doc-8.4~20220106/makedoc/images/caution.svg
--- drbd-doc-8.4~20151102/makedoc/images/caution.svg	2014-07-06 18:13:23.000000000 +0000
+++ drbd-doc-8.4~20220106/makedoc/images/caution.svg	1970-01-01 00:00:00.000000000 +0000
@@ -1,25 +0,0 @@
-[SVG markup stripped during extraction; no text content survived]
diff -Nru drbd-doc-8.4~20151102/makedoc/images/important.svg drbd-doc-8.4~20220106/makedoc/images/important.svg
--- drbd-doc-8.4~20151102/makedoc/images/important.svg	2014-07-06 18:13:23.000000000 +0000
+++ drbd-doc-8.4~20220106/makedoc/images/important.svg	1970-01-01 00:00:00.000000000 +0000
@@ -1,25 +0,0 @@
-[SVG markup stripped during extraction; no text content survived]
diff -Nru drbd-doc-8.4~20151102/makedoc/images/Makefile.am drbd-doc-8.4~20220106/makedoc/images/Makefile.am
--- drbd-doc-8.4~20151102/makedoc/images/Makefile.am	2014-07-06 18:13:23.000000000 +0000
+++ drbd-doc-8.4~20220106/makedoc/images/Makefile.am	1970-01-01 00:00:00.000000000 +0000
@@ -1,11 +0,0 @@
-# Some useful wildcard expansions
-SVG_FILES ?= $(wildcard *.svg)
-
-if RENDER_SVG
-png: $(SVG_FILES:.svg=.png)
-endif
-
-%: force
-	@$(MAKE) -f $(top_srcdir)/Makefile $@
-
-force: ;
diff -Nru drbd-doc-8.4~20151102/makedoc/images/note.svg drbd-doc-8.4~20220106/makedoc/images/note.svg
--- drbd-doc-8.4~20151102/makedoc/images/note.svg	2014-07-06 18:13:23.000000000 +0000
+++ drbd-doc-8.4~20220106/makedoc/images/note.svg	1970-01-01 00:00:00.000000000 +0000
@@ -1,33 +0,0 @@
-[SVG markup stripped during extraction; no text content survived]
diff -Nru drbd-doc-8.4~20151102/makedoc/images/tip.svg drbd-doc-8.4~20220106/makedoc/images/tip.svg
--- drbd-doc-8.4~20151102/makedoc/images/tip.svg	2014-07-06 18:13:23.000000000 +0000
+++ drbd-doc-8.4~20220106/makedoc/images/tip.svg	1970-01-01 00:00:00.000000000 +0000
@@ -1,31 +0,0 @@
-[SVG markup stripped during extraction; no text content survived]
diff -Nru drbd-doc-8.4~20151102/makedoc/images/warning.svg drbd-doc-8.4~20220106/makedoc/images/warning.svg
--- drbd-doc-8.4~20151102/makedoc/images/warning.svg	2014-07-06 18:13:23.000000000 +0000
+++ drbd-doc-8.4~20220106/makedoc/images/warning.svg	1970-01-01 00:00:00.000000000 +0000
@@ -1,23 +0,0 @@
-[SVG markup stripped during extraction; no text content survived]
diff -Nru drbd-doc-8.4~20151102/makedoc/Makefile.am drbd-doc-8.4~20220106/makedoc/Makefile.am
--- drbd-doc-8.4~20151102/makedoc/Makefile.am	2014-07-06 18:13:23.000000000 +0000
+++ drbd-doc-8.4~20220106/makedoc/Makefile.am	1970-01-01 00:00:00.000000000 +0000
@@ -1,403 +0,0 @@
-# This is the top-level Makefile. When creating a subdirectory for a
-# new howto, create a new Makefile in that directory, referring back
-# to this one. Like so:
-#
-# %: force
-#	@$(MAKE) -f $(top_srcdir)/Makefile $@
-# force: ;
-#
-# That target should go after all other targets you define in your
-# lower-level Makefile.
-
-#####################################################################
-# Make variables
-#####################################################################
-
-# Paths to Norm Walsh's DocBook XSL stylesheets.
-# Fetching these from the web on every run is probably dead slow, so -# make sure you have a local copy of these stylesheets installed, and -# XML catalogs set up correctly. On Debian/Ubuntu systems, this is a -# simple matter of "apt-get install docbook-xsl". -if PROFILING -if HTML_CHUNK -HTML_STYLESHEET ?= $(STYLESHEET_PREFIX)/$(HTML_FLAVOR)/profile-chunk.xsl -else -HTML_STYLESHEET ?= $(STYLESHEET_PREFIX)/$(HTML_FLAVOR)/profile-docbook.xsl -endif -else -if HTML_CHUNK -HTML_STYLESHEET ?= $(STYLESHEET_PREFIX)/$(HTML_FLAVOR)/chunk.xsl -else -HTML_STYLESHEET ?= $(STYLESHEET_PREFIX)/$(HTML_FLAVOR)/docbook.xsl -endif -endif - -# Stylesheet for creating a titlepage stylesheet from titlepage -# template -TITLEPAGE_STYLESHEET ?= $(STYLESHEET_PREFIX)/template/titlepage.xsl - -# For PDF output, define some variables to be used for -# FO-to-PDF transformation -if USE_FO_TITLEPAGE -if PROFILING -FO_STYLESHEET ?= $(abs_srcdir)/stylesheets/fo/profile-docbook.xsl -else -FO_STYLESHEET ?= $(abs_srcdir)/stylesheets/fo/docbook.xsl -endif -else -if PROFILING -FO_STYLESHEET ?= $(STYLESHEET_PREFIX)/fo/profile-docbook.xsl -else -FO_STYLESHEET ?= $(STYLESHEET_PREFIX)/fo/docbook.xsl -endif -endif - -##################################################################### -# Command line option sets for invoked programs -##################################################################### -XSLTPROC_OPTIONS = --xinclude \ - --param admon.graphics 1 \ - --stringparam l10n.gentext.default.language $(DEFAULT_LANGUAGE) -if DRAFT_MODE -XSLTPROC_OPTIONS += --stringparam draft.mode yes -endif -if COMMENTS -XSLTPROC_OPTIONS += --param show.comments 1 -else -XSLTPROC_OPTIONS += --param show.comments 0 -endif -if SYNTAX_HIGHLIGHTING -XSLTPROC_OPTIONS += --param highlight.source 1 -endif -if SECTION_NUMBERS -XSLTPROC_OPTIONS += --param section.autolabel 1 \ - --param section.autolabel.max.depth $(SECTION_NUMBER_DEPTH) \ - --param section.label.includes.component.label 1 -endif - -# xsltproc options for HTML output -XSLTPROC_HTML_OPTIONS = $(XSLTPROC_OPTIONS) \ - --param use.id.as.filename 1 \ - --param generate.index 0 \ - --stringparam admon.graphics.path images/ \ - --stringparam admon.graphics.extension .png \ - --stringparam ulink.target $(ULINK_TARGET) \ - --stringparam html.stylesheet $(CSS) \ - --stringparam graphic.default.extension png -if PROFILING -XSLTPROC_HTML_OPTIONS += $(XSLTPROC_PROFILING_OPTIONS) -endif -if HTML_IMAGE_SCALING -XSLTPROC_HTML_OPTIONS += --param ignore.image.scaling 0 -else -XSLTPROC_HTML_OPTIONS += --param ignore.image.scaling 1 -endif -if HTML_CHUNK -XSLTPROC_HTML_OPTIONS += --param chunk.section.depth $(HTML_CHUNK_DEPTH) -endif - -# xsltproc options for FO output -XSLTPROC_FO_OPTIONS = $(XSLTPROC_OPTIONS) \ - --stringparam paper.type $(PAPER_SIZE) \ - --stringparam body.font.family "$(FO_BODY_FONT_FAMILY)" \ - --stringparam title.font.family "$(FO_TITLE_FONT_FAMILY)" \ - --stringparam graphic.default.extension svg \ - --stringparam generate.toc "$(FO_TOC)" \ - --param use.extensions 1 \ - --param tablecolumns.extension 0 \ - --stringparam admon.graphics.path $(abs_srcdir)/images/ \ - --stringparam admon.graphics.extension .svg -if PROFILING -XSLTPROC_FO_OPTIONS += $(XSLTPROC_PROFILING_OPTIONS) -endif -if USE_FOP -XSLTPROC_FO_OPTIONS += --param fop1.extensions 1 -endif -if PDF_XREF_PAGENUM -XSLTPROC_FO_OPTIONS += - --stringparam insert.link.page.number yes \ - --stringparam insert.xref.page.number yes -endif - -# Saxon parameters -SAXON_PARAMS = admon.graphics=1 \ - 
l10n.gentext.default.language='$(DEFAULT_LANGUAGE)' -if DRAFT_MODE -SAXON_PARAMS += draft.mode='yes' -endif -if COMMENTS -SAXON_PARAMS += show.comments=1 -else -SAXON_PARAMS += show.comments=0 -endif -if SYNTAX_HIGHLIGHTING -SAXON_PARAMS += highlight.source=1 -endif -if SECTION_NUMBERS -SAXON_PARAMS += section.autolabel=1 \ - section.autolabel.max.depth=$(SECTION_NUMBER_DEPTH) \ - section.label.includes.component.label=1 -endif - -SAXON_HTML_PARAMS = $(SAXON_PARAMS) \ - use.id.as.filename=1 \ - generate.index=0 \ - admon.graphics.path='images/' \ - admon.graphics.extension='.png' \ - ulink.target='$(ULINK_TARGET)' \ - html.stylesheet='$(CSS)' \ - graphic.default.extension='png' -if HTML_IMAGE_SCALING -SAXON_HTML_PARAMS += ignore.image.scaling=0 -else -SAXON_HTML_PARAMS += ignore.image.scaling=1 -endif -if HTML_CHUNK -SAXON_HTML_PARAMS += chunk.section.depth=$(HTML_CHUNK_DEPTH) -endif - -SAXON_FO_PARAMS = $(SAXON_PARAMS) \ - paper.type='$(PAPER_SIZE)' \ - body.font.family='$(FO_BODY_FONT_FAMILY)' \ - title.font.family='$(FO_TITLE_FONT_FAMILY)' \ - graphic.default.extension='svg' \ - generate.toc='$(FO_TOC)'\ - use.extensions=1 \ - tablecolumns.extension=0 \ - admon.graphics.path='$(abs_srcdir)/images/' \ - admon.graphics.extension='.svg' -if USE_FOP -SAXON_FO_PARAMS += fop1.extensions=1 -endif -if PDF_XREF_PAGENUM -SAXON_FO_PARAMS += insert.link.page.number='yes' \ - insert.xref.page.number='yes' -endif - -# AsciiDoc options -ASCIIDOC_OPTIONS = --backend=docbook -if FORCE_ASCIIDOC_DOCTYPE -ASCIIDOC_OPTIONS += --doctype=$(ASCIIDOC_DOCTYPE) -endif -if COMMENTS -ASCIIDOC_OPTIONS += --attribute=showcomments -endif -if ENABLE_ASCIIDOC_DOCINFO -ASCIIDOC_OPTIONS += --attribute=docinfo -endif -if ENABLE_ASCIIDOC_BUILDDATE -ASCIIDOC_OPTIONS += --attribute=docdate=`date +%x` -endif - -# dblatex options -DBLATEX_OPTIONS = --backend=$(DBLATEX_BACKEND) -if DRAFT_MODE -DBLATEX_OPTIONS += -P draft.mode='yes' -endif -if COMMENTS -DBLATEX_OPTIONS += -P show.comments=1 -else -DBLATEX_OPTIONS += -P show.comments=0 -endif -if SECTION_NUMBERS -DBLATEX_OPTIONS += -P doc.section.depth=$(SECTION_NUMBER_DEPTH) -endif - -# docbook2odf options -DOCBOOK2ODF_OPTIONS = --force - -##################################################################### -# Targets for individual document types -##################################################################### - -# XML from asciidoc -if ASCIIDOC_SUPPORT -if ENABLE_ASCIIDOC_DOCINFO -%.xml: %.$(ASCIIDOC_EXTENSION) %-docinfo.xml -else -%.xml: %.$(ASCIIDOC_EXTENSION) -endif - $(ASCIIDOC) --out-file=$@ \ - $(ASCIIDOC_OPTIONS) $< - -# EPUB -if ENABLE_ASCIIDOC_DOCINFO -%.epub: %.$(ASCIIDOC_EXTENSION) %-docinfo.xml -else -%.epub: %.$(ASCIIDOC_EXTENSION) -endif - $(ASCIIDOC) --out-file=$@ \ - $(ASCIIDOC_OPTIONS) $< -endif - -# HTML (from XML) -if RENDER_HTML -%.html: %.xml -if USE_XSLTPROC - $(XSLTPROC) \ - $(XSLTPROC_HTML_OPTIONS) \ - --stringparam root.filename $* \ - --output $@ \ - $(HTML_STYLESHEET) $< -endif -if USE_SAXON - $(SAXON) \ - -o $@ \ - $< $(HTML_STYLESHEET) \ - root.filename='$*' \ - $(SAXON_HTML_PARAMS) -endif -endif - -# FO (from XML) -if XSLT -if USE_FO_TITLEPAGE -%.fo: %.xml stylesheets/fo/titlepage.xsl -else -%.fo: %.xml -endif -if USE_XSLTPROC - $(XSLTPROC) -o $@ \ - $(XSLTPROC_FO_OPTIONS) \ - $(FO_STYLESHEET) $< -endif -if USE_SAXON - $(SAXON) \ - -o $@ \ - $< $(FO_STYLESHEET) \ - $(SAXON_FO_PARAMS) -endif -endif - -# OpenDocument Text (from XML) -if RENDER_ODT -if USE_ODF_TEMPLATE -%.odt: %.odt-tmp - $(UNOCONV) -f odt -t '$(ODF_TEMPLATE)' $< -%.odt-tmp: 
%.xml
-	$(DOCBOOK2ODF) $(DOCBOOK2ODF_OPTIONS) $< --output-file $@
-else
-%.odt: %.xml
-	$(DOCBOOK2ODF) $(DOCBOOK2ODF_OPTIONS) $< --output-file $@
-endif
-endif
-
-# Microsoft Word compatible (from ODT)
-if RENDER_DOC
-%.doc: %.odt
-	$(UNOCONV) -f doc $<
-endif
-
-# PDF (from FO)
-if RENDER_PDF
-if USE_DBLATEX
-%.pdf: %.xml
-else
-%.pdf: %.fo
-endif
-if USE_FOP
-	$(FOP) $(FOP_OPTIONS) $< -pdf $@
-endif
-if USE_XMLROFF
-	$(XMLROFF) --backend=$(XMLROFF_BACKEND) --format=pdf -o $@ $<
-endif
-if USE_DBLATEX
-	$(DBLATEX) $(DBLATEX_OPTIONS) --pdf --output=$@ $<
-endif
-endif
-
-# PostScript (from FO)
-if RENDER_PS
-if USE_DBLATEX
-%.ps: %.xml
-else
-%.ps: %.fo
-endif
-if USE_FOP
-	$(FOP) $(FOP_OPTIONS) $< -pdf $@
-endif
-if USE_XMLROFF
-	$(XMLROFF) --backend=$(XMLROFF_BACKEND) --format=ps -o $@ $<
-endif
-if USE_DBLATEX
-	$(DBLATEX) $(DBLATEX_OPTIONS) --ps --output=$@ $<
-endif
-endif
-
-#####################################################################
-# Targets for other file types
-#####################################################################
-
-# Generated images: SVG from MathML
-# (needed for HTML output, and PDF if using FOP)
-# The ugly sed hack is because batik (used by FOP) complains about
-# 'svg version="1"', while 'svg version="1.0"' is OK.
-if RENDER_MML
-%.svg: %.mml
-	$(MATHMLSVG) --font-size=24 $<
-	$(SED) -i -e 's/ $@
-[the remainder of this rule was truncated during extraction; per the comment above, the sed expression rewrote 'svg version="1"' to 'svg version="1.0"']
-
-# An empty docinfo file. Created if we have docinfo support enabled,
-# and no -docinfo.xml file is found
-if ENABLE_ASCIIDOC_DOCINFO
-%-docinfo.xml:
-	echo '' > $@
-endif
diff -Nru drbd-doc-8.4~20151102/makedoc/README.markdown drbd-doc-8.4~20220106/makedoc/README.markdown
--- drbd-doc-8.4~20151102/makedoc/README.markdown	2014-07-06 18:13:23.000000000 +0000
+++ drbd-doc-8.4~20220106/makedoc/README.markdown	1970-01-01 00:00:00.000000000 +0000
@@ -1,60 +0,0 @@
-# makedoc: A document processing toolchain built on GNU Autotools
-
-makedoc turns [AsciiDoc](http://www.methods.co.nz/asciidoc/) or
-[DocBook](http://www.docbook.org) documents into a variety of output
-formats, including:
-
-* HTML
-* PDF and PostScript
-* Open Document Format (used by OpenOffice.org Writer)
-* Microsoft(R) Word(tm)
-
-In addition, makedoc transforms
-
-* MathML into Scalable Vector Graphics (SVG)
-* SVG into Portable Network Graphics (PNG)
-
-## Processing documents with makedoc
-
-In its simplest form, check out the makedoc tree from GitHub and drop
-an AsciiDoc (`.txt`) or DocBook source document (`.xml`) into the work
-directory.
-
-Say your AsciiDoc document is named `foobar.txt`. Then, run
-
-```bash
-./autogen.sh
-./configure
-cd work
-make foobar.pdf foobar.html
-```
-
-Then, makedoc will build a PDF and HTML representation of foobar.txt.
-
-## Using makedoc from your own project
-
-If you are integrating your own documentation project with makedoc,
-follow these steps:
-
-* From the root of your documentation tree, create symlinks to
-  `autogen.sh`, `configure.ac.stub`, and `Makefile.am`, located in the root
-  of the makedoc checkout.
-* From any of your subdirectories where you keep documentation sources,
-  create symlinks to `work/Makefile.am`.
-* Run `autogen.sh` and `./configure` from the root of _your_ checkout.
-* Change into your documentation directories and run
-  `make <document>.<format>`, like `make foo.pdf` or `make bar.html`.
-
-## Configuring makedoc
-
-Makedoc's behavior can be heavily customized to your needs. Run
-./configure --help for supported options. Makedoc allows you to
-
-* select your PDF renderer (fop, db2latex, or xmlroff)
-* customize PDF page sizes and fonts
-* select your XSLT processor (xsltproc or saxon)
-* select your SVG rasterizer (rsvg or inkscape)
-* link custom CSS style sheets and DocBook titlepage templates
-* customize section numbering and labeling
-* select the document type (article, book, or manpage)
-* select templates for ODF output
diff -Nru drbd-doc-8.4~20151102/makedoc/stylesheets/fo/docbook.xsl.in drbd-doc-8.4~20220106/makedoc/stylesheets/fo/docbook.xsl.in
--- drbd-doc-8.4~20151102/makedoc/stylesheets/fo/docbook.xsl.in	2014-07-06 18:13:23.000000000 +0000
+++ drbd-doc-8.4~20220106/makedoc/stylesheets/fo/docbook.xsl.in	1970-01-01 00:00:00.000000000 +0000
@@ -1,5 +0,0 @@
-[XSL markup stripped during extraction; no text content survived]
diff -Nru drbd-doc-8.4~20151102/makedoc/stylesheets/fo/Makefile.am drbd-doc-8.4~20220106/makedoc/stylesheets/fo/Makefile.am
--- drbd-doc-8.4~20151102/makedoc/stylesheets/fo/Makefile.am	2014-07-06 18:13:23.000000000 +0000
+++ drbd-doc-8.4~20220106/makedoc/stylesheets/fo/Makefile.am	1970-01-01 00:00:00.000000000 +0000
@@ -1,3 +0,0 @@
-%: force
-	@$(MAKE) -f $(top_srcdir)/Makefile $@
-force: ;
diff -Nru drbd-doc-8.4~20151102/makedoc/stylesheets/fo/profile-docbook.xsl.in drbd-doc-8.4~20220106/makedoc/stylesheets/fo/profile-docbook.xsl.in
--- drbd-doc-8.4~20151102/makedoc/stylesheets/fo/profile-docbook.xsl.in	2014-07-06 18:13:23.000000000 +0000
+++ drbd-doc-8.4~20220106/makedoc/stylesheets/fo/profile-docbook.xsl.in	1970-01-01 00:00:00.000000000 +0000
@@ -1,5 +0,0 @@
-[XSL markup stripped during extraction; no text content survived]
diff -Nru drbd-doc-8.4~20151102/makedoc/stylesheets/Makefile.am drbd-doc-8.4~20220106/makedoc/stylesheets/Makefile.am
--- drbd-doc-8.4~20151102/makedoc/stylesheets/Makefile.am	2014-07-06 18:13:23.000000000 +0000
+++ drbd-doc-8.4~20220106/makedoc/stylesheets/Makefile.am	1970-01-01 00:00:00.000000000 +0000
@@ -1,5 +0,0 @@
-SUBDIRS = fo
-
-%: force
-	@$(MAKE) -f $(top_srcdir)/Makefile $@
-force: ;
diff -Nru drbd-doc-8.4~20151102/makedoc/work/Makefile.am drbd-doc-8.4~20220106/makedoc/work/Makefile.am
--- drbd-doc-8.4~20151102/makedoc/work/Makefile.am	2014-07-06 18:13:23.000000000 +0000
+++ drbd-doc-8.4~20220106/makedoc/work/Makefile.am	1970-01-01 00:00:00.000000000 +0000
@@ -1,4 +0,0 @@
-%: force
-	@$(MAKE) -f $(top_srcdir)/Makefile $@
-
-force: ;
diff -Nru drbd-doc-8.4~20151102/Makefile drbd-doc-8.4~20220106/Makefile
--- drbd-doc-8.4~20151102/Makefile	1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/Makefile	2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,99 @@
+# you can override on the command line
+lang = en
+
+define run-in-docker =
+	docker run --rm -v $$(pwd):/home/makedoc/linbit-documentation linbit-documentation /bin/sh -c 'cd ~/linbit-documentation && make $(patsubst %-docker,%,$@) lang=$(lang)'
+endef
+
+.PHONY: README.html-docker
+README.html: README.adoc
+	asciidoctor -n -o $@ $<
+
+README.html-docker: dockerimage
+	$(run-in-docker)
+
+#
+# po4a v0.54 is required to build the ja adoc files.
+#
+define dockerfile=
+FROM debian:buster
+MAINTAINER Roland Kammerer
+ADD /GNUmakefile /linbit-documentation/GNUmakefile
+RUN groupadd --gid $(shell id -g) makedoc
+RUN useradd -m -u $(shell id -u) -g $(shell id -g) makedoc
+RUN apt-get update && apt-get install -y make inkscape ruby po4a patch openssh-client lftp curl unzip
+RUN gem install --pre asciidoctor-pdf
+RUN gem install --pre asciidoctor-pdf-cjk
+RUN gem install asciidoctor-pdf-cjk-kai_gen_gothic && asciidoctor-pdf-cjk-kai_gen_gothic-install
+RUN curl https://packages.linbit.com/public/genshingothic-20150607.zip > /tmp/ja.zip && (mkdir /linbit-documentation/genshingothic-fonts && cd /linbit-documentation/genshingothic-fonts && unzip /tmp/ja.zip); rm /tmp/ja.zip
+USER makedoc
+RUN mkdir /home/makedoc/.ssh && chmod 700 /home/makedoc/.ssh && ssh-keygen -f /home/makedoc/.ssh/id_rsa -t rsa -N '' && cat /home/makedoc/.ssh/id_rsa.pub
+endef
+
+export dockerfile
+Dockerfile:
+	@echo "$$dockerfile" > $@
+
+.PHONY: dockerimage
+dockerimage: Dockerfile
+	if ! docker images --format={{.Repository}}:{{.Tag}} | grep -q 'linbit-documentation:latest'; then \
+		test -f GNUmakefile || echo 'include Makefile' > GNUmakefile ; \
+		docker build -t linbit-documentation . ; \
+	fi
+
+# UG 9
+.PHONY: UG9-pdf-finalize UG9-pdf-finalize-docker UG9-html-finalize UG9-html-finalize-docker UG9-pot UG9-pot-docker
+UG9-pdf-finalize:
+	make -C UG9 pdf-finalize lang=$(lang)
+
+UG9-pdf-finalize-docker: dockerimage
+	$(run-in-docker)
+
+UG9-html-finalize:
+	make -C UG9 html-finalize lang=$(lang)
+
+UG9-html-finalize-docker: dockerimage
+	$(run-in-docker)
+
+UG9-pot:
+	make -C UG9 pot lang=en
+
+UG9-pot-docker: dockerimage
+	$(run-in-docker)
+
+# UG 8.4
+.PHONY: UG8.4-pdf-finalize UG8.4-pdf-finalize-docker UG8.4-html-finalize UG8.4-html-finalize-docker UG8.4-pot UG8.4-pot-docker
+UG8.4-pdf-finalize:
+	make -C UG8.4 pdf-finalize lang=$(lang)
+
+UG8.4-pdf-finalize-docker: dockerimage
+	$(run-in-docker)
+
+UG8.4-html-finalize:
+	make -C UG8.4 html-finalize lang=$(lang)
+
+UG8.4-html-finalize-docker: dockerimage
+	$(run-in-docker)
+
+UG8.4-pot:
+	make -C UG8.4 pot lang=en
+
+UG8.4-pot-docker: dockerimage
+	$(run-in-docker)
+
+## targets you can only use if you cloned the according *private* project
+# tech guides
+.PHONY: tech-guides-pdf-finalize tech-guides-pdf-finalize-docker
+tech-guides-pdf-finalize:
+	make -C tech-guides pdf-finalize
+
+tech-guides-pdf-finalize-docker: dockerimage
+	$(run-in-docker)
+
+.PHONY: clean clean-all
+clean:
+	$(warning this target is reserved, maybe you are looking for clean-all)
+
+clean-all:
+	make -C UG8.4 clean-all
+	make -C UG9 clean-all
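The top-level Makefile above drives the whole containerized build. A typical end-to-end invocation, assuming a working Docker daemon and this checkout as the working directory, might look like this:

----
# Build the toolchain image once (skipped if linbit-documentation:latest
# already exists), then render the DRBD 9 User's Guide in Japanese
# entirely inside the container.
make dockerimage
make UG9-html-finalize-docker lang=ja
make UG9-pdf-finalize-docker lang=ja
# Per the layout described in the README.adoc added later in this diff,
# the final artifacts land in UG9/ja/output-html-finalize and
# UG9/ja/output-pdf-finalize.
----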
diff -Nru drbd-doc-8.4~20151102/man-pages/Makefile drbd-doc-8.4~20220106/man-pages/Makefile
--- drbd-doc-8.4~20151102/man-pages/Makefile	1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/man-pages/Makefile	2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,132 @@
+OUTDIR=output
+
+OUTDIRPDF=$(OUTDIR)-pdf
+OUTDIRHTML=$(OUTDIR)-html
+
+OUTDIRPDFFINAL=$(OUTDIRPDF)-finalize
+OUTDIRHTMLFINAL=$(OUTDIRHTML)-finalize
+
+ifeq ($(DRBD_UTILS),)
+$(error "point DRBD_UTILS to a compiled checkout INCLUDING GENERATED man pages of drbd-utils")
+else
+MANDIR=$(DRBD_UTILS)/documentation
+endif
+
+ifneq ($(lang),en)
+DRBD_UTILS_LANG=$(lang)
+endif
+
+ifeq ($(lang),)
+MAN_LANG=en
+else
+MAN_LANG=$(lang)
+endif
+
+MANDIR_v9=$(MANDIR)/$(DRBD_UTILS_LANG)/v9
+MANDIR_v84=$(MANDIR)/$(DRBD_UTILS_LANG)/v84
+
+MANPAGES_v9_5=$(wildcard $(MANDIR_v9)/*.5)
+MANPAGES_v9_7=$(wildcard $(MANDIR_v9)/*.7)
+MANPAGES_v9_8=$(wildcard $(MANDIR_v9)/*.8)
+MANPAGES_v9=$(MANPAGES_v9_5) $(MANPAGES_v9_7) $(MANPAGES_v9_8)
+
+MANPAGES_v84_5=$(wildcard $(MANDIR_v84)/*.5)
+MANPAGES_v84_8=$(wildcard $(MANDIR_v84)/*.8)
+MANPAGES_v84=$(MANPAGES_v84_5) $(MANPAGES_v84_8)
+
+OUTMANS_v9_5=$(patsubst $(MANDIR_v9)/%.5,$(OUTDIRHTML)/v9/%.5.html, $(MANPAGES_v9_5))
+OUTMANS_v9_7=$(patsubst $(MANDIR_v9)/%.7,$(OUTDIRHTML)/v9/%.7.html, $(MANPAGES_v9_7))
+OUTMANS_v9_8=$(patsubst $(MANDIR_v9)/%.8,$(OUTDIRHTML)/v9/%.8.html, $(MANPAGES_v9_8))
+
+OUTMANS_v84_5=$(patsubst $(MANDIR_v84)/%.5,$(OUTDIRHTML)/v84/%.5.html, $(MANPAGES_v84_5))
+OUTMANS_v84_8=$(patsubst $(MANDIR_v84)/%.8,$(OUTDIRHTML)/v84/%.8.html, $(MANPAGES_v84_8))
+
+OUTMANS=$(OUTMANS_v9_5) $(OUTMANS_v9_7) $(OUTMANS_v9_8) $(OUTMANS_v84_5) $(OUTMANS_v84_8)
+
+define run-mandoc =
+	cat $< | mandoc -Thtml -Ostyle=mandoc.css 1> $@
+	mps=""; \
+	echo $@ | grep -q 84 && mps="$(MANPAGES_v84)" || mps="$(MANPAGES_v9)"; \
+	for mp in $$mps; do \
+		last=$$(echo -n $$mp | tail -c 1); \
+		bn=$$(basename $$(basename $$(basename $$mp .5) .8) .7); \
+		[the opening HTML tags in the following sed expression were stripped during extraction]
+		sed -i "s/$$bn<\/b>/$$bn<\/a><\/b>/g" $@; \
+	done;
endef
+
+$(OUTDIRHTML)/v9/%.5.html: $(MANDIR_v9)/%.5
+	$(run-mandoc)
+
+$(OUTDIRHTML)/v9/%.7.html: $(MANDIR_v9)/%.7
+	$(run-mandoc)
+
+$(OUTDIRHTML)/v9/%.8.html: $(MANDIR_v9)/%.8
+	$(run-mandoc)
+
+$(OUTDIRHTML)/v84/%.5.html: $(MANDIR_v84)/%.5
+	$(run-mandoc)
+
+$(OUTDIRHTML)/v84/%.8.html: $(MANDIR_v84)/%.8
+	$(run-mandoc)
+
+define run-index =
+	mps=""; \
+	desc=""; \
+	echo $@ | grep -q v84; \
+	if [ "$$?" = "1" ]; then \
+		mps="$(MANPAGES_v9)"; desc="9.0"; \
+	else \
+		mps="$(MANPAGES_v84)"; desc="8.4"; \
+	fi; \
+	[the HTML markup inside the following echo arguments was stripped during extraction; the generated index page shows the heading "DRBD $${desc} Manual Pages" followed by one link per man page]
+	echo "" > $@; \
+	echo "" >> $@; \
+	echo '' >> $@; \
+	echo "" >> $@; \
+	echo "DRBD $${desc} Manual Pages" >> $@; \
+	for mp in $$mps; do \
+		bn=$$(basename $$mp); \
+		echo "$$bn" >> $@; \
+	done; \
+	echo "" >> $@
+endef
+
+.PHONY: $(OUTDIRHTML)/v84/index.html
+$(OUTDIRHTML)/v84/index.html:
+	$(run-index)
+
+.PHONY: $(OUTDIRHTML)/v9/index.html
+$(OUTDIRHTML)/v9/index.html:
+	$(run-index)
+
+.PHONY: dirs
+dirs:
+	mkdir -p $(OUTDIRHTML)/v84/ || true
+	mkdir -p $(OUTDIRHTML)/v9/ || true
+
+.PHONY: style
+style: dirs
+	# brr, nasty hack
+	cp -f mandoc.css $(OUTDIRHTML)/v84/
+	cp -f mandoc.css $(OUTDIRHTML)/v9/
+
+html: style $(OUTMANS) $(OUTDIRHTML)/v84/index.html $(OUTDIRHTML)/v9/index.html
+	@echo "Generated web page in $$(pwd)/$(OUTDIRHTML)"
+	@echo "execute 'make html-finalize' to prepare upload"
+
+html-finalize: html
+	rm -rf $(OUTDIRHTMLFINAL) && mkdir $(OUTDIRHTMLFINAL) && touch $(OUTDIRHTMLFINAL)/.empty
+	for d in v84 v9; do \
+		case "$$d" in \
+			v84) fn=8.4;; \
+			v9) fn=9.0;; \
+		esac; \
+		(cd $(OUTDIRHTML)/$$d; zip ../../$(OUTDIRHTMLFINAL)/man-pages-$(MAN_LANG)-$${fn}.zip ./*.html);\
+	done
+
+pdf:
+	$(error "pdf generation is not supported")
+
+pdf-finalize: pdf
+
+clean:
+	rm -rf $(OUTDIRHTML)/* $(OUTDIRHTMLFINAL)/*
diff -Nru drbd-doc-8.4~20151102/man-pages/mandoc.css drbd-doc-8.4~20220106/man-pages/mandoc.css
--- drbd-doc-8.4~20151102/man-pages/mandoc.css	1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/man-pages/mandoc.css	2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,203 @@
+/* $OpenBSD: mandoc.css,v 1.9 2017/07/16 18:44:15 schwarze Exp $ */
+/*
+ * Standard style sheet for mandoc(1) -Thtml and man.cgi(8).
+ */
+
+/* Global defaults. */
+
+html { max-width: 100ex; }
+body { font-family: Helvetica,Arial,sans-serif; }
+table { margin-top: 0em;
+	margin-bottom: 0em; }
+td { vertical-align: top; }
+ul, ol, dl { margin-top: 0em;
+	margin-bottom: 0em; }
+li, dt { margin-top: 1em; }
+
+a.selflink { border-bottom: thin dotted;
+	color: inherit;
+	font: inherit;
+	text-decoration: inherit; }
+* { clear: both }
+
+/* Search form and search results. */
+
+fieldset { border: thin solid silver;
+	border-radius: 1em;
+	text-align: center; }
+input[name=expr] {
+	width: 25%; }
+
+table.results { margin-top: 1em;
+	margin-left: 2em;
+	font-size: smaller; }
+
+/* Header and footer lines. */
+
+table.head { width: 100%;
+	border-bottom: 1px dotted #808080;
+	margin-bottom: 1em;
+	font-size: smaller; }
+td.head-vol { text-align: center; }
+td.head-rtitle {
+	text-align: right; }
+span.Nd { }
+
+table.foot { width: 100%;
+	border-top: 1px dotted #808080;
+	margin-top: 1em;
+	font-size: smaller; }
+td.foot-os { text-align: right; }
+
+/* Sections and paragraphs. */
+
+div.manual-text {
+	margin-left: 5ex; }
+h1.Sh { margin-top: 2ex;
+	margin-bottom: 1ex;
+	margin-left: -4ex;
+	font-size: 110%; }
+h2.Ss { margin-top: 2ex;
+	margin-bottom: 1ex;
+	margin-left: -2ex;
+	font-size: 105%; }
+div.Pp { margin: 1ex 0ex; }
+a.Sx { }
+a.Xr { }
+
+/* Displays and lists.
*/ + +div.Bd { } +div.D1 { margin-left: 5ex; } + +ul.Bl-bullet { list-style-type: disc; + padding-left: 1em; } +li.It-bullet { } +ul.Bl-dash { list-style-type: none; + padding-left: 0em; } +li.It-dash:before { + content: "\2014 "; } +ul.Bl-item { list-style-type: none; + padding-left: 0em; } +li.It-item { } +ul.Bl-compact > li { + margin-top: 0ex; } + +ol.Bl-enum { padding-left: 2em; } +li.It-enum { } +ol.Bl-compact > li { + margin-top: 0ex; } + +dl.Bl-diag { } +dt.It-diag { } +dd.It-diag { margin-left: 0ex; } +b.It-diag { font-style: normal; } +dl.Bl-hang { } +dt.It-hang { } +dd.It-hang { margin-left: 10.2ex; } +dl.Bl-inset { } +dt.It-inset { } +dd.It-inset { margin-left: 0ex; } +dl.Bl-ohang { } +dt.It-ohang { } +dd.It-ohang { margin-left: 0ex; } +dl.Bl-tag { margin-left: 10.2ex; } +dt.It-tag { float: left; + margin-top: 0ex; + margin-left: -10.2ex; + padding-right: 2ex; + vertical-align: top; } +dd.It-tag { clear: right; + width: 100%; + margin-top: 0ex; + margin-left: 0ex; + vertical-align: top; + overflow: auto; } +dl.Bl-compact > dt { + margin-top: 0ex; } + +table.Bl-column { } +tr.It-column { } +td.It-column { margin-top: 1em; } +table.Bl-compact > tbody > tr > td { + margin-top: 0ex; } + +cite.Rs { font-style: normal; + font-weight: normal; } +span.RsA { } +i.RsB { font-weight: normal; } +span.RsC { } +span.RsD { } +i.RsI { font-weight: normal; } +i.RsJ { font-weight: normal; } +span.RsN { } +span.RsO { } +span.RsP { } +span.RsQ { } +span.RsR { } +span.RsT { text-decoration: underline; } +a.RsU { } +span.RsV { } + +span.eqn { } +table.tbl { } + +/* Semantic markup for command line utilities. */ + +table.Nm { } +b.Nm { font-style: normal; } +b.Fl { font-style: normal; } +b.Cm { font-style: normal; } +var.Ar { font-style: italic; + font-weight: normal; } +span.Op { } +b.Ic { font-style: normal; } +code.Ev { font-style: normal; + font-weight: normal; + font-family: monospace; } +i.Pa { font-weight: normal; } + +/* Semantic markup for function libraries. */ + +span.Lb { } +b.In { font-style: normal; } +a.In { } +b.Fd { font-style: normal; } +var.Ft { font-style: italic; + font-weight: normal; } +b.Fn { font-style: normal; } +var.Fa { font-style: italic; + font-weight: normal; } +var.Vt { font-style: italic; + font-weight: normal; } +var.Va { font-style: italic; + font-weight: normal; } +code.Dv { font-style: normal; + font-weight: normal; + font-family: monospace; } +code.Er { font-style: normal; + font-weight: normal; + font-family: monospace; } + +/* Various semantic markup. */ + +span.An { } +a.Lk { } +a.Mt { } +b.Cd { font-style: normal; } +i.Ad { font-weight: normal; } +b.Ms { font-style: normal; } +span.St { } +a.Ux { } + +/* Physical markup. */ + +.No { font-style: normal; + font-weight: normal; } +.Em { font-style: italic; + font-weight: normal; } +.Sy { font-style: normal; + font-weight: bold; } +.Li { font-style: normal; + font-weight: normal; + font-family: monospace; } diff -Nru drbd-doc-8.4~20151102/README drbd-doc-8.4~20220106/README --- drbd-doc-8.4~20151102/README 2015-11-02 13:15:36.000000000 +0000 +++ drbd-doc-8.4~20220106/README 1970-01-01 00:00:00.000000000 +0000 @@ -1,115 +0,0 @@ -Notes for DRBD Documentation Maintainers -======================================== - -Checking out the documentation sources --------------------------------------- - -The documentation sources live in the public DRBD git repository at -git.linbit.com. 
You check them out using the following git command: - ------------------------------------ -git clone git://git.linbit.com/drbd-documentation ------------------------------------ - -This will create a local copy of the documentation sources in a -subdirectory named "drbd-documentation" which git automatically -creates in your current working directory. Be sure to frequently -update your documentation sources with the following commands: - ------------------------------------ -cd drbd-documentation -git pull ------------------------------------ - -When you have made changes, please commit them in your local -checkout. Group changes that "logically" belong together in one -commit, and be sure to include an informative commit message: - ------------------------------------ -git add -git commit ------------------------------------ - - -Building the documentation --------------------------- - -The DRBD documentation uses makedoc, a GNU Autotools based document -processing toolchain. makedoc is hosted here: - -http://github.com/fghaas/makedoc - -Once you have created a makedoc checkout, run the following commands -from the root of the DRBD documentation tree (i.e., from the directory -that contains the file you are reading now): - ------------------------------------ -make MAKEDOC=/path/to/your/makedoc/checkout -./autogen.sh -./configure -cd -make . ------------------------------------ - -makedoc is highly configurable; check ./configure --help for supported -options. - -For example, in order to build the DRBD User's Guide in PDF form, this -is what you would do: - ------------------------------------ -./configure --with-asciidoc-doctype=book -cd users-guide -make drbd-users-guide.pdf ------------------------------------ - -For the User's Guide, specifically, you can also pull in the DRBD man -pages. To do so, you will have to have a DRBD git checkout somewhere -on your system (you may clone this from +git.linbit.com+). You also -need to have done +make doc+ in that DRBD checkout to build the man -pages. Then, to build the User's Guide with man pages included, you -would do: - ------------------------------------ -./configure --with-asciidoc-doctype=book -cd users-guide -make DRBD=/path/to/your/drbd-checkout -make drbd-users-guide.pdf ------------------------------------ - -Some subdirectories also contain convenience targets building a -document format including all of its dependencies. In order to build -the User's Guide in HTML and PDF formats including all graphics, run -the following commands: - ------------------------------------ -./configure --with-asciidoc-doctype=book -cd users-guide -make html pdf ------------------------------------ - -Modifying and maintaining the documentation -------------------------------------------- - -In order to modify the documentation, all you need is the text editor -of your choice, and the git version control system. The documentation -syntax is AsciiDoc; see http://www.methods.co.nz/asciidoc/ for details -on this format. - - -Submitting documentation patches --------------------------------- - -If you modify the documentation, please subscribe to the drbd-dev -mailing list at http://lists.linbit.com/listinfo/drbd-dev and start -sending patches to drbd-dev@lists.linbit.com. - -Please submit patches in a format that makes it easy for us to apply -them to the repository: - ------------------------------------ -git pull -# Now edit and issue "git commit -a" for each. 
When done:
-git format-patch origin
-git send-email --to=drbd-dev@lists.linbit.com *.patch
------------------------------------
diff -Nru drbd-doc-8.4~20151102/README.adoc drbd-doc-8.4~20220106/README.adoc
--- drbd-doc-8.4~20151102/README.adoc	1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/README.adoc	2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,115 @@
+= Using the LINBIT Documentation Framework
+
+== Rendered version
+
+=== User's Guides
+If you are interested in the rendered versions of the User's Guides, please read them at
+https://docs.linbit.com[docs.linbit.com].
+
+=== Tech Guides
+You can download pdf versions for free from https://www.linbit.com/tech-guides-overview/[our home page].
+
+
+== Build dependencies
+If you have `docker`, execute `make dockerimage`, which will generate a "linbit-documentation" base
+image containing all the dependencies for generating html and pdf output.
+
+Otherwise you will need `GNU make` and have to install the following dependencies:
+
+=== HTML targets (UGs/tech-guides)
+- asciidoctor
+- inkscape
+
+=== HTML targets (man pages)
+- mandoc
+
+=== PDF targets
+- https://github.com/asciidoctor/asciidoctor-pdf[asciidoctor-pdf]
+
+== Fonts
+=== LINBIT Fonts
+We do not publish the official LINBIT fonts in this repository. Public projects have to be able to generate
+pdfs without LINBIT's fonts; private ones are allowed to fail if the `linbit-fonts` directory does not exist.
+Actually they *should* fail, which is the default anyway.
+
+If you build official pdfs/private projects, make sure you have cloned the internal `linbit-fonts` repository.
+
+=== Japanese Fonts
+If you intend to generate the `pdf` versions of the Japanese user's guides, make sure you have downloaded this
+https://packages.linbit.com/public/genshingothic-20150607.zip[zip archive]. Extract the archive to a
+`genshingothic-fonts` directory in the root of the documentation.
+
+== Tech Guides
+We do not publish the source of our tech guides; make sure you have cloned the internal `tech-guides` repository.
+
+== Makefile interface/API for projects
+Projects are organized in subdirectories; for example, the user's guide for DRBD 9 is in `UG9`. The top-level
+`Makefile` contains html and pdf targets for these (e.g., `make UG9-pdf-finalize`). The final output is
+generated in `$project/$lang/output-$format-finalize` (e.g., `UG9/en/output-pdf-finalize`).
+
+Every project needs a *proper* `Makefile` that has the following targets:
+
+- `pdf`
+- `pdf-finalize`
+- `html`
+- `html-finalize`
+
+If a project only generates pdf output, implement the html targets as empty.
+
+=== pdf and html targets
+These generate their output to `output-$format`. It is perfectly fine for these directories to contain temporary
+files like symlinks. As already written, we want proper ``Makefile``s, so if the source does not change,
+re-executing these targets should only process files that changed.
+
+=== pdf-finalize and html-finalize targets
+These generate their output to `output-$format-finalize`. This is the *final* output: the one that is
+published to a web page/sent to a web developer. For example, this generates tar-balls for `UG9` that can be
+sent to someone who puts it on the web page.
+
+It is usually the final target that is executed after multiple iterations of `make pdf`/`make html`, and it is
+fine if that target alters the content of `output-$format` to generate `output-$format-finalize`. If possible
+it should not, but it is not a strict requirement.
+
+=== Docker targets
+The top-level ``Makefile`` also contains targets that end in "-docker". These can be used to generate the
+output with the previously described "linbit-documentation" base image. For example, one can execute
+`make UG9-html-finalize-docker`.
+
+=== Internationalization
+The English version is the default, but if you want to build the Japanese version, you have to set the `make`
+variable "lang" accordingly (e.g., `make UG9-html-finalize-docker lang=ja`).
+The Japanese version is built from the English adoc files and the Japanese po files.
+Pot files used for localization can be created by the pot target
+(e.g., `make UG9-pot-docker`).
+Make sure the created pot files contain correct sentences.
+
+[[work-public]]
+== Working on a public project
+- `cd` to the project (e.g., `cd UG9/en`)
+- modify sources accordingly
+- `make pdf` or `make html`
+
+Output is generated in `output-$format`. These directories (in contrast to `output-$format-finalize`) can
+contain temporary files (symlinks, processed adoc files, ...). When you are satisfied, run `make $format-finalize`
+to generate the final output in `output-$format-finalize`.
+
+== Working on a private project
+- make sure you are at the top level of the framework (`linbit-documentation`)
+- `git clone` the private project
+- follow <<work-public>>
+
+== Style:
+- http://asciidoctor.org/docs/asciidoc-writers-guide/[Read it, learn it, live it!]
+- Hostnames: 'bob' => 'bob'
+- Commands: \`rm -rf` => `rm -rf`
+- DRBD states: \_Primary_ => _Primary_
+- Blocks: Add a newline before and after the block.
+```
+* Re-enable your DRBD resource:
+
+----------------------------
+# drbdadm up
+
+----------------------------
+
+* On one node, promote the DRBD resource:
+```
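The README.adoc above specifies a four-target contract for project Makefiles. The following is a minimal, hypothetical sketch of a conforming project Makefile — the project name `myguide`, the archive names, and the asciidoctor flags are illustrative assumptions, not a file from this tree:

----
# Hypothetical minimal project Makefile satisfying the contract from
# README.adoc: pdf/html render into output-$format, and the *-finalize
# targets produce the publishable output in output-$format-finalize.
lang = en

.PHONY: html html-finalize pdf pdf-finalize

html:
	mkdir -p output-html
	asciidoctor -D output-html myguide.adoc

html-finalize: html
	rm -rf output-html-finalize && mkdir output-html-finalize
	tar -C output-html -czf output-html-finalize/myguide-$(lang)-html.tar.gz .

pdf:
	mkdir -p output-pdf
	asciidoctor-pdf -D output-pdf myguide.adoc

pdf-finalize: pdf
	rm -rf output-pdf-finalize && mkdir output-pdf-finalize
	cp output-pdf/myguide.pdf output-pdf-finalize/myguide-$(lang).pdf
----

Because the finalize targets depend on their render targets, re-running only reprocesses what changed, which is the behavior the README asks of a "proper" Makefile.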
makedoc is hosted here: - -http://github.com/fghaas/makedoc - -Once you have created a makedoc checkout, run the following commands -from the root of the DRBD documentation tree (i.e., from the directory -that contains the file you are reading now): - ------------------------------------ -make MAKEDOC=/path/to/your/makedoc/checkout -./autogen.sh -./configure -cd -make . ------------------------------------ - -makedoc is highly configurable; check ./configure --help for supported -options. - -For example, in order to build the DRBD User's Guide in PDF form, this -is what you would do: - ------------------------------------ -./configure --with-asciidoc-doctype=book -cd users-guide -make drbd-users-guide.pdf ------------------------------------ - -For the User's Guide, specifically, you can also pull in the DRBD man -pages. To do so, you will have to have a DRBD git checkout somewhere -on your system (you may clone this from +git.linbit.com+). You also -need to have done +make doc+ in that DRBD checkout to build the man -pages. Then, to build the User's Guide with man pages included, you -would do: - ------------------------------------ -./configure --with-asciidoc-doctype=book -cd users-guide -make DRBD=/path/to/your/drbd-checkout -make drbd-users-guide.pdf ------------------------------------ - -Some subdirectories also contain convenience targets building a -document format including all of its dependencies. In order to build -the User's Guide in HTML and PDF formats including all graphics, run -the following commands: - ------------------------------------ -./configure --with-asciidoc-doctype=book -cd users-guide -make html pdf ------------------------------------ - -Modifying and maintaining the documentation -------------------------------------------- - -In order to modify the documentation, all you need is the text editor -of your choice, and the git version control system. The documentation -syntax is AsciiDoc; see http://www.methods.co.nz/asciidoc/ for details -on this format. - - -Submitting documentation patches --------------------------------- - -If you modify the documentation, please subscribe to the drbd-dev -mailing list at http://lists.linbit.com/listinfo/drbd-dev and start -sending patches to drbd-dev@lists.linbit.com. - -Please submit patches in a format that makes it easy for us to apply -them to the repository: - ------------------------------------ -git pull -# Now edit and issue "git commit -a" for each. 
When done: -git format-patch origin -git send-email --to=drbd-dev@lists.linbit.com *.patch ------------------------------------ diff -Nru drbd-doc-8.4~20151102/stylesheets/pdf-style-en.yml drbd-doc-8.4~20220106/stylesheets/pdf-style-en.yml --- drbd-doc-8.4~20151102/stylesheets/pdf-style-en.yml 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/stylesheets/pdf-style-en.yml 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,121 @@ +## For reference: +## https://github.com/asciidoctor/asciidoctor-pdf/blob/master/docs/theming-guide.adoc#theme-related-document-attributes +## https://github.com/asciidoctor/asciidoctor-pdf/tree/master/data/themes +# Color codes for LINBIT logos +# Orange: #f68e1f +# "Blue": #1e2939 +# +## Set the LINBIT fonts +font: + catalog: + LINBIT: + normal: Foustmdo.ttf + italic: Founmrgi.ttf + bold: Founmb__.ttf + bold_italic: Founmm__.ttf + unifont: + normal: unifont.ttf + bold: unifont.ttf + italic: unifont.ttf + bold_italic: unifont.ttf + LINBITFallback: + normal: mplus1p-regular-fallback.ttf + italic: mplus1p-regular-fallback.ttf + bold: mplus1p-regular-fallback.ttf + bold_italic: mplus1p-regular-fallback.ttf + fallbacks: + - LINBITFallback +## General Page Settings +page: + size: A4 + margin: [0.8in, 0.66in, 0.8in, 0.66in] +title-page: + logo: + image: image:../images/linbit-logo-2017.svg[] + top: -16.25% + title: + font_style: bold + subtitle: + font_style: bold_italics + authors: + font_style: normal +## Formatting for Base Sections +base: + font_size: 10 + font_family: LINBIT +## Formatting for Abstract (here as example) +abstract: + font_size: $base_font_size +## Formatting for Links +link: + font_color: #f68e1f +## Formatting for Code Blocks +code: + background_color: #f5f5f5 + font_size: 10 + font_family: unifont +## Formatting for Table of Contents +toc: + indent: 20 + font_size: $base_font_size + levels: all +admonition: + icon: + note: + name: fa-pencil-square-o + stroke_color: #000000 + size: 24 + tip: + name: fa-lightbulb-o + stroke_color: #f68e1f + size: 24 + warning: + name: fa-times-circle + stroke_color: #b71b00 + size: 24 + caution: + name: fa-exclamation-triangle + stroke_color: #ffe100 + size: 24 + important: + name: fa-exclamation-circle + stroke_color: #b71b00 + size: 24 +## Formatting for Header/Footer (recto/verso alternates info for printing) +header: + content: '{chapter-title} - {section-title}' + border_style: solid + border_color: #dddddd + font_size: 9 + font_style: bold_italic + font_color: #464646 + height: 0.6in + line_height: 1 + recto: + center: + content: '{document-title}: {section-or-chapter-title}' + verso: + center: + content: '{document-title}: {section-or-chapter-title}' +footer: + border_style: solid + border_color: #dddddd + font_size: 9 + font_color: #464646 + height: 0.6in + line_height: 1 + columns: =100% + recto: + center: + content: '{page-number}' + right: + content: + left: + content: + verso: + center: + content: '{page-number}' + right: + content: + left: + content: diff -Nru drbd-doc-8.4~20151102/stylesheets/pdf-style-ja.yml drbd-doc-8.4~20220106/stylesheets/pdf-style-ja.yml --- drbd-doc-8.4~20151102/stylesheets/pdf-style-ja.yml 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/stylesheets/pdf-style-ja.yml 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,125 @@ +## For reference: +## https://github.com/asciidoctor/asciidoctor-pdf/blob/master/docs/theming-guide.adoc#theme-related-document-attributes +## https://github.com/asciidoctor/asciidoctor-pdf/tree/master/data/themes +# Color codes for LINBIT 
logos +# Orange: #f68e1f +# "Blue": #1e2939 +# +## Set the LINBIT fonts +font: + catalog: + LINBIT: + normal: GenShinGothic-Normal.ttf + italic: GenShinGothic-Normal.ttf + bold: GenShinGothic-Bold.ttf + bold_italic: GenShinGothic-Bold.ttf + unifont: + normal: GenShinGothic-Monospace-Medium.ttf + italic: GenShinGothic-Monospace-Medium.ttf + bold: GenShinGothic-Monospace-Bold.ttf + bold_italic: GenShinGothic-Monospace-Bold.ttf + LINBITFallback: + normal: GenShinGothic-Normal.ttf + italic: GenShinGothic-Normal.ttf + bold: GenShinGothic-Bold.ttf + bold_italic: GenShinGothic-Bold.ttf + fallbacks: + - LINBITFallback +## General Page Settings +page: + size: A4 + margin: [0.8in, 0.66in, 0.8in, 0.66in] +title-page: + logo: + image: image:../images/linbit-logo-2017.svg[] + top: -16.25% + title: + font_style: bold + subtitle: + font_style: bold_italics + authors: + font_style: normal +## Formatting for Base Sections +base: + font_size: 10 + font_family: LINBIT + align: left + +## Formatting for Abstract (here as example) +abstract: + font_size: $base_font_size +## Formatting for Links +link: + font_color: #f68e1f +## Formatting for Code Blocks +code: + background_color: #f5f5f5 + font_size: 10 + font_family: unifont +## Formatting for Table of Contents +toc: + indent: 20 + font_size: $base_font_size + levels: all +admonition: + icon: + note: + name: fa-pencil-square-o + stroke_color: #000000 + size: 24 + tip: + name: fa-lightbulb-o + stroke_color: #f68e1f + size: 24 + warning: + name: fa-times-circle + stroke_color: #b71b00 + size: 24 + caution: + name: fa-exclamation-triangle + stroke_color: #ffe100 + size: 24 + important: + name: fa-exclamation-circle + stroke_color: #b71b00 + size: 24 +## Formatting for Header/Footer (recto/verso alternates info for printing) +header: + content: '{chapter-title} - {section-title}' + border_style: solid + border_color: #dddddd + font_size: 9 + font_style: bold_italic + font_color: #464646 + height: 0.6in + line_height: 1 + recto: + center: + content: '{document-title}: {section-or-chapter-title}' + verso: + center: + content: '{document-title}: {section-or-chapter-title}' +footer: + border_style: solid + border_color: #dddddd + font_size: 9 + font_color: #464646 + height: 0.6in + line_height: 1 + columns: =100% + recto: + center: + content: '{page-number}' + right: + content: + left: + content: + verso: + center: + content: '{page-number}' + right: + content: + left: + content: +literal: + font_family: unifont diff -Nru drbd-doc-8.4~20151102/todo.xml drbd-doc-8.4~20220106/todo.xml --- drbd-doc-8.4~20151102/todo.xml 2015-11-02 13:15:36.000000000 +0000 +++ drbd-doc-8.4~20220106/todo.xml 1970-01-01 00:00:00.000000000 +0000 @@ -1,4 +0,0 @@ - - - This section remains to be written. - diff -Nru drbd-doc-8.4~20151102/UG8.4/cn/Makefile drbd-doc-8.4~20220106/UG8.4/cn/Makefile --- drbd-doc-8.4~20151102/UG8.4/cn/Makefile 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/UG8.4/cn/Makefile 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,4 @@ +lang=cn + +include ../../UG-build-adoc.mk +include ../../UG-build.mk diff -Nru drbd-doc-8.4~20151102/UG8.4/en/about.adoc drbd-doc-8.4~20220106/UG8.4/en/about.adoc --- drbd-doc-8.4~20151102/UG8.4/en/about.adoc 1970-01-01 00:00:00.000000000 +0000 +++ drbd-doc-8.4~20220106/UG8.4/en/about.adoc 2022-01-31 09:40:31.000000000 +0000 @@ -0,0 +1,55 @@ +[[about]] +[preface] +== Please Read This First + +This guide is intended to serve users of the Distributed Replicated +Block Device (DRBD) as a definitive reference guide and handbook. 
+
+It is being made available to the DRBD community by
+http://www.linbit.com/[LINBIT], the project's sponsor company, free of
+charge and in the hope that it will be useful. The guide is
+constantly being updated. We try to add information
+about new DRBD features simultaneously with the corresponding DRBD
+releases. An on-line HTML version of this guide is always available at
+http://www.drbd.org/users-guide/.
+
+IMPORTANT: This guide assumes, throughout, that you are using DRBD
+version 8.4.0 or later. If you are using a pre-8.4 release of
+DRBD, please use the version of this guide which has been
+preserved at http://www.drbd.org/users-guide-8.3/.
+
+Please use <> to submit
+comments.
+
+This guide is organized into seven parts:
+
+* <> deals with DRBD's basic functionality. It gives a short
+  overview of DRBD's positioning within the Linux I/O stack, and of
+  fundamental DRBD concepts. It also examines DRBD's most important
+  features in detail.
+
+* <> covers building DRBD from
+  source and installing pre-built DRBD packages, and contains an overview
+  of getting DRBD running on a cluster system.
+
+* <> is about managing DRBD, configuring and reconfiguring
+  DRBD resources, and common troubleshooting scenarios.
+
+* <> deals with leveraging DRBD to add storage replication and
+  high availability to applications. It not only covers DRBD
+  integration in the Pacemaker cluster manager, but also advanced LVM
+  configurations, integration of DRBD with GFS, and adding high
+  availability to Xen virtualization environments.
+
+* <> contains pointers for getting the best performance
+  out of DRBD configurations.
+
+* <> dives into DRBD's internals, and also contains pointers
+  to other resources which readers of this guide may find useful.
+
+* <>:
+** <> is an overview of changes in DRBD 8.4, compared to
+earlier DRBD versions.
+
+Users interested in DRBD training or support services are invited to
+contact us at sales@linbit.com or sales_us@linbit.com.
diff -Nru drbd-doc-8.4~20151102/UG8.4/en/administration.adoc drbd-doc-8.4~20220106/UG8.4/en/administration.adoc
--- drbd-doc-8.4~20151102/UG8.4/en/administration.adoc 1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/UG8.4/en/administration.adoc 2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,2126 @@
+ifdef::env-github[]
+:tip-caption: :bulb:
+:note-caption: :information_source:
+:important-caption: :heavy_exclamation_mark:
+:caution-caption: :fire:
+:warning-caption: :warning:
+endif::[]
+
+[[ch-admin]]
+== Common administrative tasks
+
+This chapter outlines typical administrative tasks encountered during
+day-to-day operations. It does not cover troubleshooting tasks; these
+are covered in detail in <>.
+
+[[s-check-status]]
+=== Checking DRBD status
+
+[[s-drbd-overview]]
+==== Retrieving status with `drbd-overview`
+
+One convenient way to look at DRBD's status is the
+indexterm:[drbd-overview]`drbd-overview` utility.
+
+----------------------------
+# drbd-overview
+0:home     Connected Primary/Secondary
+  UpToDate/UpToDate C r--- /home        xfs  200G 158G 43G  79%
+1:data     Connected Primary/Secondary
+  UpToDate/UpToDate C r--- /mnt/ha1     ext3 9.9G 618M 8.8G 7%
+2:nfs-root Connected Primary/Secondary
+  UpToDate/UpToDate C r--- /mnt/netboot ext3 79G  57G  19G  76%
+----------------------------
+
+[[s-drbdadm-status]]
+==== Status information via `drbdadm`
+
+indexterm:[drbdadm status]In its simplest invocation, we just ask for the
+status of a single resource.
+
+----------------------------
+# drbdadm status home
+home role:Secondary
+  disk:UpToDate
+  peer role:Secondary
+    replication:Established peer-disk:UpToDate
+----------------------------
+
+This says that the resource `home` is _UpToDate_ and _Secondary_ both
+locally and on the peer; the two nodes have the same
+data on their storage devices, and nobody is currently using the device.
+
+You can get more information by passing the `--verbose` and/or
+`--statistics` arguments to `drbdsetup`:
+
+----------------------------
+# drbdsetup status home --verbose --statistics
+home role:Secondary suspended:no
+    write-ordering:flush
+  volume:0 minor:0 disk:UpToDate
+      size:5033792 read:0 written:0 al-writes:0 bm-writes:0 upper-pending:0
+      lower-pending:0 al-suspended:no blocked:no
+  peer connection:Connected role:Secondary congested:no
+    volume:0 replication:Established peer-disk:UpToDate
+        resync-suspended:no
+        received:0 sent:0 out-of-sync:0 pending:0 unacked:0
+----------------------------
+
+Every few lines in this example form a block that is repeated
+for every node used in this resource, with small format exceptions
+for the local node (see below for details).
+
+The first line in each block shows the `role` (see <>).
+
+The next important line begins with the `volume` specification; normally
+these are numbered starting at zero, but the configuration may specify
+other IDs as well. This line shows the indexterm:[connection state]
+connection state in the
+`replication` item (see <> for details) and the
+remote indexterm:[disk state] disk state in `disk` (see <>).
+Then there's a line for this volume giving a bit of statistics:
+data `received`, `sent`, `out-of-sync`, and so on; please see
+<> for more information.
+
+For the local node, the first line shows the resource name, `home`, in our
+example. As the first block always describes the local node, there is no
+connection or address information.
+
+Please see the `drbd.conf` manual page for more information.
+
+[[s-drbdsetup-events2]]
+==== One-shot or realtime monitoring via `drbdsetup events2`
+
+NOTE: This is available only with userspace version 8.9.3 and kernel module
+version 8.4.6, or later.
+
+This is a low-level mechanism to get information out of DRBD, suitable for use
+in automated tools, such as monitoring systems.
+
+In its simplest invocation, showing only the current status, the output looks
+like this (but, when running on a terminal, will include colors):
+
+.'drbdsetup' example output (lines broken for readability)
+-----------------
+# drbdsetup events2 --now r0
+exists resource name:r0 role:Secondary suspended:no
+exists connection name:r0 peer-node-id:1 conn-name:remote-host connection:Connected role:Secondary
+exists device name:r0 volume:0 minor:7 disk:UpToDate
+exists device name:r0 volume:1 minor:8 disk:UpToDate
+exists peer-device name:r0 peer-node-id:1 conn-name:remote-host volume:0
+    replication:Established peer-disk:UpToDate resync-suspended:no
+exists peer-device name:r0 peer-node-id:1 conn-name:remote-host volume:1
+    replication:Established peer-disk:UpToDate resync-suspended:no
+exists -
+-----------------
+
+Without the ''--now'', the process will keep running and send continuous updates like this:
+
+-----------------
+# drbdsetup events2 r0
+...
+change connection name:r0 peer-node-id:1 conn-name:remote-host connection:StandAlone
+change connection name:r0 peer-node-id:1 conn-name:remote-host connection:Unconnected
+change connection name:r0 peer-node-id:1 conn-name:remote-host connection:Connecting
+-----------------
+
+Then, for monitoring purposes, there's another argument, ''--statistics'', which
+will produce some performance counters and other facts:
+
+.'drbdsetup' verbose output (lines broken for readability)
+-----------------
+# drbdsetup events2 --statistics --now r0
+exists resource name:r0 role:Secondary suspended:no write-ordering:drain
+exists connection name:r0 peer-node-id:1 conn-name:remote-host connection:Connected role:Secondary congested:no
+exists device name:r0 volume:0 minor:7 disk:UpToDate size:6291228 read:6397188 written:131844
+    al-writes:34 bm-writes:0 upper-pending:0 lower-pending:0 al-suspended:no blocked:no
+exists device name:r0 volume:1 minor:8 disk:UpToDate size:104854364 read:5910680 written:6634548
+    al-writes:417 bm-writes:0 upper-pending:0 lower-pending:0 al-suspended:no blocked:no
+exists peer-device name:r0 peer-node-id:1 conn-name:remote-host volume:0 replication:Established
+    peer-disk:UpToDate resync-suspended:no received:0 sent:131844 out-of-sync:0 pending:0 unacked:0
+exists peer-device name:r0 peer-node-id:1 conn-name:remote-host volume:1 replication:Established
+    peer-disk:UpToDate resync-suspended:no received:0 sent:6634548 out-of-sync:0 pending:0 unacked:0
+exists -
+-----------------
+
+You might also like the `--timestamp` parameter.
+
+[[s-proc-drbd]]
+==== Status information in `/proc/drbd`
+
+NOTE: ''/proc/drbd'' is deprecated. While it won't be removed in the 8.4
+series, we recommend switching to other means, like <>; or,
+even more convenient for monitoring, <>.
+
+indexterm:[/proc/drbd]`/proc/drbd` is a virtual file displaying
+real-time status information about all DRBD resources currently
+configured.
You may interrogate this file's contents using this +command: + +---------------------------- +$ cat /proc/drbd +version: 8.4.0 (api:1/proto:86-100) +GIT-hash: 09b6d528b3b3de50462cd7831c0a3791abc665c3 build by linbit@buildsystem.linbit, 2011-10-12 09:07:35 + 0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r----- + ns:0 nr:0 dw:0 dr:656 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0 + 1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r--- + ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0 + 2: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r--- + ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0 +---------------------------- + +The first line, prefixed with +version:+, shows the DRBD version used +on your system. The second line contains information about this +specific build. + +The other four lines in this example form a block that is repeated for +every DRBD device configured, prefixed by the device minor number. In +this case, this is `0`, corresponding to the device `/dev/drbd0`. + +The resource-specific output from `/proc/drbd` contains various pieces +of information about the resource: + +.`cs` (connection state) +indexterm:[connection state]Status of the network connection. See +<>for details about the various connection +states. + +.`ro` (roles) +indexterm:[resource]Roles of the nodes. The role of the local node is +displayed first, followed by the role of the partner node shown after +the slash. See <>for details about the possible resource +roles. + +.`ds` (disk states) +indexterm:[disk state]State of the hard disks. Prior to the slash the +state of the local node is displayed, after the slash the state of the +hard disk of the partner node is shown. See <>for +details about the various disk states. + +.Replication protocol +Replication protocol used by the resource. Either `A`, `B` or `C`. See +<> for details. + +.I/O Flags +Six state flags reflecting the I/O status of this resource. See +<> for a detailed explanation of these flags. + +.Performance indicators +A number of counters and gauges reflecting the resource's utilization +and performance. See <> for details. + + + +[[s-connection-states]] +==== Connection states + +indexterm:[connection state]A resource's connection state can be +observed either by monitoring `/proc/drbd`, or by issuing the `drbdadm +cstate` command: + +---------------------------- +# drbdadm cstate +Connected +---------------------------- + +A resource may have one of the following connection states: + +._StandAlone_ +indexterm:[connection state]No network configuration available. The +resource has not yet been connected, or has been administratively +disconnected (using `drbdadm disconnect`), or has dropped its +connection due to failed authentication or split brain. + +._Disconnecting_ +indexterm:[connection state]Temporary state during disconnection. The +next state is _StandAlone_. + +._Unconnected_ +indexterm:[connection state]Temporary state, prior to a connection +attempt. Possible next states: _WFConnection_ and _WFReportParams_. + +._Timeout_ +indexterm:[connection state]Temporary state following a timeout in the +communication with the peer. Next state: _Unconnected_. + +._BrokenPipe_ +indexterm:[connection state]Temporary state after the connection to +the peer was lost. Next state: _Unconnected_. + +._NetworkFailure_ +indexterm:[connection state]Temporary state after the connection to +the partner was lost. Next state: _Unconnected_. 
+ +._ProtocolError_ +indexterm:[connection state]Temporary state after the connection to +the partner was lost. Next state: _Unconnected_. + +._TearDown_ +indexterm:[connection state]Temporary state. The peer is closing the +connection. Next state: _Unconnected_. + +._WFConnection_ +indexterm:[connection state]This node is waiting until the peer node +becomes visible on the network. + +._WFReportParams_ +indexterm:[connection state]TCP connection has been established, this +node waits for the first network packet from the peer. + +._Connected_ +indexterm:[connection state]A DRBD connection has been established, +data mirroring is now active. This is the normal state. + +._StartingSyncS_ +indexterm:[connection state]Full synchronization, initiated by the +administrator, is just starting. The local node will be the source of +synchronization. The next possible states are: _SyncSource_ or +_PausedSyncS_. + +._StartingSyncT_ +indexterm:[connection state]Full synchronization, initiated by the +administrator, is just starting. The local node will be the target of +synchronization. Next state: _WFSyncUUID_. + +._WFBitMapS_ +indexterm:[connection state]Partial synchronization is just +starting. The local node will be the source of synchronization. Next +possible states: _SyncSource_ or _PausedSyncS_. + +._WFBitMapT_ +indexterm:[connection state]Partial synchronization is just +starting. The local node will be the target of synchronization. Next +possible state: _WFSyncUUID_. + +._WFSyncUUID_ +indexterm:[connection state]Synchronization is about to begin. Next +possible states: _SyncTarget_ or _PausedSyncT_. + +._SyncSource_ +indexterm:[connection state]Synchronization is currently running, with +the local node being the source of synchronization. + +._SyncTarget_ +indexterm:[connection state]Synchronization is currently running, with +the local node being the target of synchronization. + +._PausedSyncS_ +indexterm:[connection state]The local node is the source of an ongoing +synchronization, but synchronization is currently paused. This may be +due to a dependency on the completion of another synchronization +process, or due to synchronization having been manually interrupted by +`drbdadm pause-sync`. + +._PausedSyncT_ +indexterm:[connection state]The local node is the target of an ongoing +synchronization, but synchronization is currently paused. This may be +due to a dependency on the completion of another synchronization +process, or due to synchronization having been manually interrupted by +`drbdadm pause-sync`. + +._VerifyS_ +indexterm:[connection state]On-line device verification is currently +running, with the local node being the source of verification. + +._VerifyT_ +indexterm:[connection state]On-line device verification is currently +running, with the local node being the target of verification. + + +[[s-roles]] +==== Resource roles + +indexterm:[resource]A resource's role can be observed either by +monitoring `/proc/drbd`, or by issuing the indexterm:[drbdadm] +`drbdadm role` command: + +---------------------------- +# drbdadm role +Primary/Secondary +---------------------------- + +The local resource role is always displayed first, the remote resource +role last. + +You may see one of the following resource roles: + +._Primary_ +The resource is currently in the primary role, and may be read from +and written to. This role only occurs on one of the two nodes, unless +<> is enabled. + +._Secondary_ +The resource is currently in the secondary role. 
It normally receives +updates from its peer (unless running in disconnected mode), but may +neither be read from nor written to. This role may occur on one +or both nodes. + +._Unknown_ +The resource's role is currently unknown. The local resource role +never has this status. It is only displayed for the peer's resource +role, and only in disconnected mode. + + +[[s-disk-states]] +==== Disk states + +A resource's disk state can be observed either by monitoring +`/proc/drbd`, or by issuing the `drbdadm dstate` command: + +---------------------------- +# drbdadm dstate +UpToDate/UpToDate +---------------------------- + +The local disk state is always displayed first, the remote disk state +last. + +Both the local and the remote disk state may be one of the following: + +._Diskless_ +indexterm:[disk state]No local block device has been assigned to the +DRBD driver. This may mean that the resource has never attached to its +backing device, that it has been manually detached using `drbdadm +detach`, or that it automatically detached after a lower-level I/O +error. + +._Attaching_ +indexterm:[disk state]Transient state while reading meta data. + +._Failed_ +indexterm:[disk state]Transient state following an I/O failure report +by the local block device. Next state: _Diskless_. + +._Negotiating_ +indexterm:[disk state]Transient state when an _Attach_ is carried out on +an already-_Connected_ DRBD device. + +._Inconsistent_ +indexterm:[disk state]The data is inconsistent. This status occurs +immediately upon creation of a new resource, on both nodes (before the +initial full sync). Also, this status is found in one node (the +synchronization target) during synchronization. + +._Outdated_ +indexterm:[disk state]Resource data is consistent, but +<>. + +._DUnknown_ +indexterm:[disk state]This state is used for the peer disk if no +network connection is available. + +._Consistent_ +indexterm:[disk state]Consistent data of a node without +connection. When the connection is established, it is decided whether +the data is _UpToDate_ or _Outdated_. + +._UpToDate_ +indexterm:[disk state]Consistent, up-to-date state of the data. This +is the normal state. + +[[s-io-flags]] +==== I/O state flags + +The I/O state flag field in `/proc/drbd` contains information about +the current state of I/O operations associated with the +resource. There are six such flags in total, with the following +possible values: + +. I/O suspension. Either `r` for _running_ or `s` for _suspended_ + I/O. Normally `r`. + +. Serial resynchronization. When a resource is awaiting + resynchronization, but has deferred this because of a `resync-after` + dependency, this flag becomes `a`. Normally `-`. + +. Peer-initiated sync suspension. When resource is awaiting + resynchronization, but the peer node has suspended it for any + reason, this flag becomes `p`. Normally `-`. + +. Locally initiated sync suspension. When resource is awaiting + resynchronization, but a user on the local node has suspended it, + this flag becomes `u`. Normally `-`. + +. Locally blocked I/O. Normally `-`. May be one of the following + flags: + +** `d`: I/O blocked for a reason internal to DRBD, such as a + transient disk state. +** `b`: Backing device I/O is blocking. +** `n`: Congestion on the network socket. +** `a`: Simultaneous combination of blocking device I/O and network congestion. + +. Activity Log update suspension. When updates to the Activity Log are + suspended, this flag becomes `s`. Normally `-`. 
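+
+As a worked example, the flag field `r-----` in the `/proc/drbd` output shown
+earlier decodes, position by position, as follows (positions numbered 1-6 from
+the left):
+
+----------------------------
+r - - - - -
+| | | | | `- 6: Activity Log updates not suspended
+| | | | `--- 5: no locally blocked I/O
+| | | `----- 4: sync not suspended by a local user
+| | `------- 3: sync not suspended by the peer
+| `--------- 2: no resync-after dependency deferral
+`----------- 1: I/O is running, not suspended
+----------------------------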
+ +[[s-performance-indicators]] +==== Performance indicators + +The second line of `/proc/drbd` information for each resource contains +the following counters and gauges: + +.`ns` (network send) +Volume of net data sent to the partner via the network connection; in +Kibyte. + +.`nr` (network receive) +Volume of net data received by the partner via the network connection; +in Kibyte. + +.`dw` (disk write) +Net data written on local hard disk; in Kibyte. + +.`dr` (disk read) +Net data read from local hard disk; in Kibyte. + +.`al` (activity log) +Number of updates of the activity log area of the meta data. + +.`bm` (bit map) +Number of updates of the bitmap area of the meta data. + +.`lo` (local count) +Number of open requests to the local I/O sub-system issued by DRBD. + +.`pe` (pending) +Number of requests sent to the partner, but that have not yet been +answered by the latter. + +.`ua` (unacknowledged) +Number of requests received by the partner via the network connection, +but that have not yet been answered. + +.`ap` (application pending) +Number of block I/O requests forwarded to DRBD, but not yet answered +by DRBD. + +.`ep` (epochs) +Number of epoch objects. Usually 1. Might increase under I/O load when +using either the `barrier` or the `none` write ordering method. + +.`wo` (write order) +Currently used write ordering method: `b`(barrier), `f`(flush), +`d`(drain) or `n`(none). + +.`oos` (out of sync) +Amount of storage currently out of sync; in Kibibytes. + + +[[s-enable-disable]] +=== Enabling and disabling resources + +[[s-enable-resource]] +==== Enabling resources + +indexterm:[resource]Normally, all configured DRBD resources are +automatically enabled + +* by a cluster resource management application at its discretion, + based on your cluster configuration, or + +* by the `/etc/init.d/drbd` init script on system startup. + +If, however, you need to enable resources manually for any reason, you +may do so by issuing the command + +---------------------------- +# drbdadm up +---------------------------- + +As always, you may use the keyword `all` instead of a specific +resource name if you want to enable all resources configured in +`/etc/drbd.conf` at once. + +[[s-disable-resource]] +==== Disabling resources + +indexterm:[resource]You may temporarily disable specific resources by +issuing the command + +---------------------------- +# drbdadm down +---------------------------- + +Here, too, you may use the keyword `all` in place of a resource name if +you wish to temporarily disable all resources listed in +`/etc/drbd.conf` at once. + +[[s-reconfigure]] +=== Reconfiguring resources + +indexterm:[resource]DRBD allows you to reconfigure resources while +they are operational. To that end, + +* make any necessary changes to the resource configuration in + `/etc/drbd.conf`, + +* synchronize your `/etc/drbd.conf` file between both nodes, + +* issue the indexterm:[drbdadm]`drbdadm adjust ` command on + both nodes. + +`drbdadm adjust` then hands off to `drbdsetup` to make the necessary +adjustments to the configuration. As always, you are able to review +the pending `drbdsetup` invocations by running `drbdadm` with the +`-d` (dry-run) option. + +NOTE: When making changes to the `common` section in `/etc/drbd.conf`, +you can adjust the configuration for all resources in one run, by +issuing `drbdadm adjust all`. 
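+
+For example, a complete reconfiguration round might look like this (host names
+`alice` and `bob` as used elsewhere in this guide; substitute whatever file
+distribution mechanism you prefer for `scp`):
+
+----------------------------
+alice# vi /etc/drbd.conf
+alice# scp /etc/drbd.conf bob:/etc/drbd.conf
+alice# drbdadm -d adjust all     # dry run: review the pending drbdsetup calls
+alice# drbdadm adjust all
+bob# drbdadm adjust all
+----------------------------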
+
+[[s-switch-resource-roles]]
+=== Promoting and demoting resources
+
+indexterm:[resource]Manually switching a <> from secondary to primary (promotion) or vice versa (demotion)
+is done using the following commands:
+
+----------------------------
+# drbdadm primary
+# drbdadm secondary
+----------------------------
+
+In <> (DRBD's default), any
+resource can be in the primary role on only one node at any given time
+while the <> is
+_Connected_. Thus, issuing `drbdadm primary ` on one node
+while __ is still in the primary role on the peer will
+result in an error.
+
+A resource configured to allow <> can be switched to the primary role on both nodes.
+
+[[s-manual-fail-over]]
+=== Basic Manual Fail-over
+
+If you are not using Pacemaker and want to handle fail-overs manually in a
+passive/active configuration, the process is as follows.
+
+On the current primary node, stop any applications or services using the DRBD device,
+unmount the DRBD device, and demote the resource to secondary.
+
+----------------------------
+# umount /dev/drbd/by-res/
+# drbdadm secondary
+----------------------------
+
+Now, on the node you wish to make primary, promote the resource and mount the device.
+
+----------------------------
+# drbdadm primary
+# mount /dev/drbd/by-res/
+----------------------------
+
+[[s-upgrading-drbd]]
+=== Upgrading DRBD
+
+Upgrading DRBD is a fairly simple process. This section will cover
+the process of upgrading from 8.3.x to 8.4.x; however, this process
+should work for all upgrades.
+
+[[s-updating-your-repo]]
+==== Updating your repository
+
+Due to the number of changes between the 8.3 and 8.4 branches, we
+have created separate repositories for each. Perform this repository
+update on both servers.
+
+[[s-RHEL-systems]]
+===== RHEL/CentOS systems
+
+Edit your /etc/yum.repos.d/linbit.repo file to reflect the following
+changes.
+
+----------------------------
+[drbd-8.4]
+name=DRBD 8.4
+baseurl=http://packages.linbit.com//8.4/rhel6/
+gpgcheck=0
+----------------------------
+
+NOTE: You will have to populate the and variables. The
+ is provided by LINBIT support services.
+
+[[s-Debian-Systems]]
+===== Debian/Ubuntu systems
+
+Edit /etc/apt/sources.list to reflect the following changes.
+
+----------------------------
+deb http://packages.linbit.com//8.4/debian squeeze main
+----------------------------
+
+NOTE: You will have to populate the variable. The
+ is provided by LINBIT support services.
+
+Next you will want to add the DRBD signing key to your trusted keys.
+
+----------------------------
+# gpg --keyserver subkeys.pgp.net --recv-keys 0x282B6E23
+# gpg --export -a 282B6E23 | apt-key add -
+----------------------------
+
+Lastly, perform an `apt-get update` so Debian recognizes the updated repository.
+
+----------------------------
+apt-get update
+----------------------------
+
+[[s-Upgrading-the-packages]]
+==== Upgrading the packages
+
+Before you begin, make sure your resources are in sync. The output of
+'cat /proc/drbd' should show UpToDate/UpToDate.
+
+----------------------------
+bob# cat /proc/drbd
+
+version: 8.3.12 (api:88/proto:86-96)
+GIT-hash: e2a8ef4656be026bbae540305fcb998a5991090f build by buildsystem@linbit, 2011-10-28 10:20:38
+ 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
+    ns:0 nr:33300 dw:33300 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
+----------------------------
+
+Now that you know the resources are in sync, start by upgrading the
+secondary node.
+This can be done manually, or, if you are using Pacemaker,
+by putting the node into standby mode. Both processes are covered
+below. If you are running Pacemaker, do not use the manual method.
+
+* Manual Method
+----------------------------
+bob# /etc/init.d/drbd stop
+----------------------------
+
+* Pacemaker
+
+Put the secondary node into standby mode. In this example bob is secondary.
+
+----------------------------
+bob# crm node standby bob
+----------------------------
+
+NOTE: You can watch the status of your cluster using 'crm_mon -rf', or watch
+'cat /proc/drbd' until it shows "Unconfigured" for your resources.
+
+Now update your packages with either yum or apt.
+
+----------------------------
+bob# yum upgrade
+----------------------------
+
+----------------------------
+bob# apt-get upgrade
+----------------------------
+
+Once the upgrade is finished, you will have the latest DRBD 8.4 kernel
+module and drbd-utils on your secondary node, bob. Start DRBD.
+
+* Manually
+----------------------------
+bob# /etc/init.d/drbd start
+----------------------------
+
+* Pacemaker
+----------------------------
+# crm node online bob
+----------------------------
+
+The output of 'cat /proc/drbd' on bob should show 8.4.x and look similar
+to this.
+
+----------------------------
+version: 8.4.1 (api:1/proto:86-100)
+GIT-hash: 91b4c048c1a0e06777b5f65d312b38d47abaea80 build by buildsystem@linbit, 2011-12-20 12:58:48
+ 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
+    ns:0 nr:12 dw:12 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
+----------------------------
+
+NOTE: On the primary node, alice, 'cat /proc/drbd' will still show the
+prior version until you upgrade it.
+
+At this point the cluster is running two different versions of DRBD. Stop
+any services using DRBD, then stop DRBD itself on the primary node, alice,
+and promote bob. Again, this can be done either manually or via the
+Pacemaker shell.
+
+* Manually
+----------------------------
+alice # umount /dev/drbd/by-res/r0
+alice # /etc/init.d/drbd stop
+bob # drbdadm primary r0
+bob # mount /dev/drbd/by-res/r0/0 /mnt/drbd
+----------------------------
+Please note that the mount command now references '/0', which denotes
+the volume number of a resource. See <> for
+more information on the new volumes feature.
+
+* Pacemaker
+----------------------------
+# crm node standby alice
+----------------------------
+
+WARNING: This will interrupt running services by stopping them and
+migrating them to the secondary server, bob.
+
+At this point, you can safely upgrade DRBD by using yum or apt.
+
+----------------------------
+alice# yum upgrade
+----------------------------
+
+----------------------------
+alice# apt-get upgrade
+----------------------------
+
+Once the upgrade is complete, you will have the latest version
+of DRBD on alice and can start DRBD.
+
+* Manually
+----------------------------
+alice# /etc/init.d/drbd start
+----------------------------
+
+* Pacemaker
+----------------------------
+alice# crm node online alice
+----------------------------
+
+NOTE: Services will still be located on bob and will remain there
+until you migrate them back.
+
+Both servers should now show the latest version of DRBD in a connected
+state.
+
+----------------------------
+version: 8.4.1 (api:1/proto:86-100)
+GIT-hash: 91b4c048c1a0e06777b5f65d312b38d47abaea80 build by buildsystem@linbit, 2011-12-20 12:58:48
+ 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
+    ns:0 nr:12 dw:12 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
+----------------------------
+
+[[s-migrating_your_configs]]
+==== Migrating your configs
+
+DRBD 8.4 is backward compatible with the 8.3 configs; however, some
+syntax has changed. See <> for
+a full list of changes. In the meantime, you can port your old
+configs fairly easily by using the 'drbdadm dump all' command. This
+will output a new global config followed by the
+new resource config files. Take this output and make changes
+accordingly.
+
+[[s-downgrading-drbd84]]
+=== Downgrading DRBD 8.4 to 8.3
+
+If you are currently running DRBD 8.4 and would like to revert to 8.3,
+there are several steps you will have to follow. This section assumes
+you still have the 8.4 kernel module and 8.4 utilities installed.
+
+Stop any services accessing the DRBD resources, unmount the devices, and
+demote them to Secondary. Then perform the following commands.
+
+NOTE: These steps will have to be completed on both servers.
+
+----------------------------
+drbdadm down all
+drbdadm apply-al all
+rmmod drbd
+----------------------------
+
+If you are using the LINBIT repositories, you can remove the packages using
+`apt-get remove drbd8-utils drbd8-module-`uname -r`` or
+`yum remove drbd kmod-drbd`.
+
+Now that 8.4 is removed, reinstall 8.3. You can do this either by changing
+your repositories back to the 8.3 repos, or by following the steps located
+http://www.drbd.org/users-guide-8.3/p-build-install-configure.html[in the
+8.3 User's Guide].
+
+WARNING: If you migrated your configs to the 8.4 format, be sure to revert
+them back to the 8.3 format. See <> for the options
+you need to revert.
+
+Once 8.3 is re-installed, you can start your DRBD resources either manually
+using `drbdadm`, or via `/etc/init.d/drbd start`.
+
+[[s-enable-dual-primary]]
+=== Enabling dual-primary mode
+
+Dual-primary mode allows a resource to assume the primary role
+simultaneously on both nodes. Doing so is possible on either a
+permanent or a temporary basis.
+
+[NOTE]
+===============================
+Dual-primary mode requires that the resource is configured to
+replicate synchronously (protocol C). Because of this, it is latency-sensitive
+and ill-suited for WAN environments.
+
+Additionally, because the resource is primary on both nodes, any interruption
+in the network between the nodes will result in a split brain.
+===============================
+
+[[s-enable-dual-primary-permanent]]
+==== Permanent dual-primary mode
+
+indexterm:[dual-primary mode]To enable dual-primary mode, set the
+`allow-two-primaries` option to `yes` in the `net` section of your
+resource configuration:
+
+[source,drbd]
+----------------------------
+resource
+  net {
+    protocol C;
+    allow-two-primaries yes;
+  }
+  disk {
+    fencing resource-and-stonith;
+  }
+  handlers {
+    fence-peer "...";
+    unfence-peer "...";
+  }
+  ...
+}
+----------------------------
+
+After that, do not forget to synchronize the configuration between nodes. Run
+`drbdadm adjust ` on both nodes.
+
+You can now switch both nodes to the primary role at the same time with `drbdadm
+primary `.
+
+CAUTION: You should always implement suitable fencing policies.
+Using 'allow-two-primaries' without fencing is a very bad idea,
+even worse than using single-primary without fencing.
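+
+One way to fill in the fencing handler stubs shown above, assuming a Pacemaker
+cluster and the handler scripts shipped with the DRBD distribution, is the
+following sketch; in 8.4 the unfence script is commonly hooked to the
+`after-resync-target` event, and you should verify the script paths on your
+installation:
+
+[source,drbd]
+----------------------------
+resource r0 {
+  disk {
+    fencing resource-and-stonith;
+  }
+  handlers {
+    fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
+    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
+  }
+  ...
+}
+----------------------------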
+
+[[s-enable-dual-primary-temporary]]
+==== Temporary dual-primary mode
+
+To temporarily enable dual-primary mode for a resource normally
+running in a single-primary configuration, issue the following
+command:
+
+----------------------------
+# drbdadm net-options --protocol=C --allow-two-primaries
+----------------------------
+
+To end temporary dual-primary mode, run the same command as above but with
+`--allow-two-primaries=no` (and your desired replication protocol, if
+applicable).
+
+
+[[s-automating_promotion_on_system_startup]]
+==== Automating promotion on system startup
+
+When a resource is configured to support dual-primary mode, it may
+also be desirable to automatically switch the resource into the
+primary role upon system (or DRBD) startup.
+
+[source,drbd]
+----------------------------
+resource
+  startup {
+    become-primary-on both;
+  }
+  ...
+}
+----------------------------
+
+The `/etc/init.d/drbd` system init script parses this option on
+startup and promotes resources accordingly.
+
+NOTE: The `become-primary-on` approach *should be avoided*; we
+recommend using a cluster manager if at all possible. See, for
+example, <> for cluster-managed DRBD
+configurations. In Pacemaker (or other cluster manager)
+configurations, resource promotion and demotion should
+always be handled by the cluster manager.
+
+
+[[s-use-online-verify]]
+=== Using on-line device verification
+
+[[s-online-verify-enable]]
+==== Enabling on-line verification
+
+indexterm:[on-line device verification]<> is not enabled for resources by default. To
+enable it, add the following lines to your resource configuration in
+`/etc/drbd.conf`:
+
+[source,drbd]
+----------------------------
+resource
+  net {
+    verify-alg ;
+  }
+  ...
+}
+----------------------------
+
+__ may be any message digest algorithm supported by the
+kernel crypto API in your system's kernel configuration. Normally, you
+should be able to choose at least from `sha1`, `md5`, and `crc32c`.
+
+If you make this change to an existing resource, as always,
+synchronize your `drbd.conf` to the peer, and run `drbdadm adjust
+` on both nodes.
+
+[[s-online-verify-invoke]]
+==== Invoking on-line verification
+
+indexterm:[on-line device verification]After you have enabled on-line
+verification, you will be able to initiate a verification run using
+the following command:
+
+----------------------------
+# drbdadm verify
+----------------------------
+
+When you do so, DRBD starts an online verification run for
+__, and if it detects any blocks not in sync, it will mark
+those blocks as such and write a message to the kernel log. Any
+applications using the device at that time can continue to do so
+unimpeded, and you may also <> at will.
+
+If out-of-sync blocks were detected during the verification run, you
+may resynchronize them using the following commands after verification
+has completed:
+
+----------------------------
+# drbdadm disconnect
+# drbdadm connect
+----------------------------
+
+
+[[s-online-verify-automate]]
+==== Automating on-line verification
+
+indexterm:[on-line device verification]Most users will want to
+automate on-line device verification. This can be easily
+accomplished. Create a file with the following contents, named
+`/etc/cron.d/drbd-verify` on _one_ of your nodes:
+
+----------------------------
+42 0 * * 0 root /sbin/drbdadm verify
+----------------------------
+
+This will have `cron` invoke a device verification every Sunday at 42
+minutes past midnight.
+ +If you have enabled on-line verification for all your resources (for +example, by adding `verify-alg ` to the `common` section +in `/etc/drbd.conf`), you may also use: + +[source,drbd] +---------------------------- +42 0 * * 0 root /sbin/drbdadm verify all +---------------------------- + + +[[s-configure-sync-rate]] +=== Configuring the rate of synchronization + +indexterm:[synchronization]Normally, one tries to ensure that +background synchronization (which makes the data on the +synchronization target temporarily inconsistent) completes as quickly +as possible. However, it is also necessary to keep background +synchronization from hogging all bandwidth otherwise available for +foreground replication, which would be detrimental to application +performance. Thus, you must configure the synchronization bandwidth to +match your hardware -- which you may do in a permanent fashion or +on-the-fly. + +IMPORTANT: It does not make sense to set a synchronization rate that +is higher than the maximum write throughput on your secondary +node. You must not expect your secondary node to miraculously be able +to write faster than its I/O subsystem allows, just because it happens +to be the target of an ongoing device synchronization. + +Likewise, and for the same reasons, it does not make sense to set a +synchronization rate that is higher than the bandwidth available on +the replication network. + + +[[s-configure-sync-rate-variable]] +==== Variable sync rate configuration + +Since DRBD 8.4, the default has switched to +variable-rate synchronization. In this mode, DRBD uses an automated +control loop algorithm to determine, and permanently adjust, the +synchronization rate. This algorithm ensures that there is always +sufficient bandwidth available for foreground replication, greatly +mitigating the impact that background synchronization has on +foreground I/O. + +The optimal configuration for variable-rate synchronization may vary +greatly depending on the available network bandwidth, application I/O +pattern and link congestion. Ideal configuration settings also depend +on whether <> is in use or not. It may be +wise to engage professional consultancy in order to optimally +configure this DRBD feature. An _example_ configuration (which assumes +a deployment in conjunction with DRBD Proxy) is provided below: + +[source,drbd] +---------------------------- +resource { + disk { + c-plan-ahead 200; + c-max-rate 10M; + c-fill-target 15M; + } +} +---------------------------- + +TIP: A good starting value for `c-fill-target` is _BDP✕3_, where +BDP is your bandwidth delay product on the replication link. + + +[[s-configure-sync-rate-permanent]] +==== Permanent fixed sync rate configuration + +For testing purposes it might be useful to deactivate the dynamic resync +controller, and to configure DRBD to some fixed resynchronization speed. +That is only an upper limit, of course - if there is some bottleneck (or +just application IO), the desired speed won't be achieved. + +The maximum bandwidth a resource uses for background +re-synchronization is determined by the `rate` option +for a resource. This must be included in the resource configuration's +`disk` section in `/etc/drbd.conf`: + +[source,drbd] +---------------------------- +resource + disk { + resync-rate 40M; + ... + } + ... +} +---------------------------- + +Note that the rate setting is given in _bytes_, not _bits_ per second; the +default unit is _Kibibyte_, so a value of `4096` would be interpreted as `4MiB`. 
+ +TIP: A good rule of thumb for this value is to use about 30% of the +available replication bandwidth. Thus, if you had an I/O subsystem +capable of sustaining write throughput of 180MB/s, and a Gigabit +Ethernet network capable of sustaining 110 MB/s network throughput +(the network being the bottleneck), you would calculate: + +[[eq-sync-rate-example1]] +.Syncer rate example, 110MB/s effective available bandwidth +image::images/sync-rate-example1.svg[] + +Thus, the recommended value for the `rate` option would be `33M`. + +By contrast, if you had an I/O subsystem with a maximum throughput of +80MB/s and a Gigabit Ethernet connection (the I/O subsystem being the +bottleneck), you would calculate: + +[[eq-sync-rate-example2]] +.Syncer rate example, 80MB/s effective available bandwidth +image::images/sync-rate-example2.svg[] + +In this case, the recommended value for the `rate` option would be +`24M`. + +[[s-configure-sync-rate-temporary]] +==== Temporary fixed sync rate configuration + +It is sometimes desirable to temporarily adjust the sync rate. For +example, you might want to speed up background re-synchronization +after having performed scheduled maintenance on one of your cluster +nodes. Or, you might want to throttle background re-synchronization if +it happens to occur at a time when your application is extremely busy +with write operations, and you want to make sure that a large portion +of the existing bandwidth is available to replication. + +For example, in order to make most bandwidth of a Gigabit Ethernet +link available to re-synchronization, issue the following command: + +---------------------------- +# drbdadm disk-options --c-plan-ahead=0 --resync-rate=110M +---------------------------- + +You need to issue this command on the _SyncTarget_ node. + +To revert this temporary setting and re-enable the synchronization +rate set in `/etc/drbd.conf`, issue this command: + +---------------------------- +# drbdadm adjust +---------------------------- + + +[[s-configure-checksum-sync]] +=== Configuring checksum-based synchronization + +indexterm:[checksum-based +synchronization]<> is +not enabled for resources by default. To enable it, add the following +lines to your resource configuration in `/etc/drbd.conf`: + +[source,drbd] +---------------------------- +resource + net { + csums-alg ; + } + ... +} +---------------------------- + +__ may be any message digest algorithm supported by the +kernel crypto API in your system's kernel configuration. Normally, you +should be able to choose at least from `sha1`, `md5`, and `crc32c`. + +If you make this change to an existing resource, as always, +synchronize your `drbd.conf` to the peer, and run `drbdadm adjust +` on both nodes. + +[[s-configure-congestion-policy]] +=== Configuring congestion policies and suspended replication + +In an environment where the replication bandwidth is highly variable +(as would be typical in WAN replication setups), the replication link +may occasionally become congested. In a default configuration, this +would cause I/O on the primary node to block, which is sometimes +undesirable. + +Instead, you may configure DRBD to _suspend_ the ongoing replication +in this case, causing the Primary's data set to _pull ahead_ of the +Secondary. In this mode, DRBD keeps the replication channel open -- it +never switches to disconnected mode -- but does not actually replicate +until sufficient bandwidth becomes available again. 
+ +The following example is for a DRBD Proxy configuration: + +[source,drbd] +---------------------------- +resource { + net { + on-congestion pull-ahead; + congestion-fill 2G; + congestion-extents 2000; + ... + } + ... +} +---------------------------- + +It is usually wise to set both `congestion-fill` and +`congestion-extents` together with the `pull-ahead` option. + +A good value for `congestion-fill` is 90% + +* of the allocated DRBD proxy buffer memory, when replicating over + DRBD Proxy, or +* of the TCP network send buffer, in non-DRBD Proxy setups. + +A good value for `congestion-extents` is 90% of your configured +`al-extents` for the affected resources. + + +[[s-configure-io-error-behavior]] +=== Configuring I/O error handling strategies + +indexterm:[I/O errors]indexterm:[drbd.conf]DRBD's +<> is determined by the `on-io-error` option, included in the +resource `disk` configuration in `/etc/drbd.conf`: + +[source,drbd] +---------------------------- +resource { + disk { + on-io-error ; + ... + } + ... +} +---------------------------- + +You may, of course, set this in the `common` section too, if you want +to define a global I/O error handling policy for all resources. + +__ may be one of the following options: + +. `detach` +This is the default and recommended option. On the occurrence of a +lower-level I/O error, the node drops its backing device, and +continues in diskless mode. + +. `pass_on` +This causes DRBD to report the I/O error to the upper layers. On the +primary node, it is reported to the mounted file system. On the +secondary node, it is ignored (because the secondary has no upper +layer to report to). + +. `call-local-io-error` +Invokes the command defined as the local I/O error handler. This +requires that a corresponding `local-io-error` command invocation is +defined in the resource's `handlers` section. It is entirely left to +the administrator's discretion to implement I/O error handling using +the command (or script) invoked by `local-io-error`. + +NOTE: Early DRBD versions (prior to 8.0) included another option, +`panic`, which would forcibly remove the node from the cluster by way +of a kernel panic, whenever a local I/O error occurred. While that +option is no longer available, the same behavior may be mimicked via +the `local-io-error`/`call-local-io-error` interface. You should do so +only if you fully understand the implications of such behavior. + + +You may reconfigure a running resource's I/O error handling strategy +by following this process: + +* Edit the resource configuration in `/etc/drbd.d/.res`. + +* Copy the configuration to the peer node. + +* Issue `drbdadm adjust ` on both nodes. + + +[[s-configure-integrity-check]] +=== Configuring replication traffic integrity checking + +indexterm:[replication traffic integrity +checking]<> +is not enabled for resources by default. To enable it, add the +following lines to your resource configuration in `/etc/drbd.conf`: + +[source,drbd] +---------------------------- +resource + net { + data-integrity-alg ; + } + ... +} +---------------------------- + +__ may be any message digest algorithm supported by the +kernel crypto API in your system's kernel configuration. Normally, you +should be able to choose at least from `sha1`, `md5`, and `crc32c`. + +If you make this change to an existing resource, as always, +synchronize your `drbd.conf` to the peer, and run `drbdadm adjust +` on both nodes. 
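+
+After the adjust, you can check which algorithms are actually in effect by
+inspecting the runtime configuration; an illustrative probe, assuming a
+resource named `r0`:
+
+----------------------------
+# drbdsetup show r0 | grep -E 'verify-alg|csums-alg|data-integrity-alg'
+----------------------------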
+ +[[s-resizing]] +=== Resizing resources + +[[s-growing-online]] +==== Growing on-line + +indexterm:[resource]If the backing block devices can be grown while in +operation (online), it is also possible to increase the size of a DRBD +device based on these devices during operation. To do so, two criteria +must be fulfilled: + +. The affected resource's backing device must be one managed by a + logical volume management subsystem, such as LVM. + +. The resource must currently be in the _Connected_ connection state. + +Having grown the backing block devices on both nodes, ensure that only +one node is in primary state. Then enter on one node: + +---------------------------- +# drbdadm resize +---------------------------- + +This triggers a synchronization of the new section. The +synchronization is done from the primary node to the secondary node. + +If the space you're adding is clean, you can skip syncing the additional +space by using the --assume-clean option. + +---------------------------- +# drbdadm -- --assume-clean resize +---------------------------- + +[[s-growing-offline]] +==== Growing off-line + +indexterm:[resource]When the backing block devices on both nodes are +grown while DRBD is inactive, and the DRBD resource is using +<>, then the new size is +recognized automatically. No administrative intervention is +necessary. The DRBD device will have the new size after the next +activation of DRBD on both nodes and a successful establishment of a +network connection. + +If however the DRBD resource is configured to use +<>, then this meta data must +be moved to the end of the grown device before the new size becomes +available. To do so, complete the following steps: + +WARNING: This is an advanced procedure. Use at your own discretion. + +* Unconfigure your DRBD resource: + +[source,drbd] + +---------------------------- +# drbdadm down +---------------------------- + +* Save the meta data in a text file prior to growing the backing block device: + +---------------------------- +# drbdadm dump-md > /tmp/metadata +---------------------------- + +You must do this on both nodes, using a separate dump file for every +node. _Do not_ dump the meta data on one node, and simply copy the +dump file to the peer. This will not work. + +* Grow the backing block device on both nodes. + +* Adjust the size information (`la-size-sect`) in the file + `/tmp/metadata` accordingly, on both nodes. Remember that + `la-size-sect` must be specified in sectors. + +* Re-initialize the metadata area: + +---------------------------- +# drbdadm create-md +---------------------------- + +* Re-import the corrected meta data, on both nodes: + +---------------------------- +# drbdmeta_cmd=$(drbdadm -d dump-md ) +# ${drbdmeta_cmd/dump-md/restore-md} /tmp/metadata +Valid meta-data in place, overwrite? [need to type 'yes' to confirm] +yes +Successfully restored meta data +---------------------------- + +NOTE: This example uses `bash` parameter substitution. It may or may +not work in other shells. Check your `SHELL` environment variable if +you are unsure which shell you are currently using. + +* Re-enable your DRBD resource: + +---------------------------- +# drbdadm up +---------------------------- + +* On one node, promote the DRBD resource: + +---------------------------- +# drbdadm primary +---------------------------- + +* Finally, grow the file system so it fills the extended size of the + DRBD device. 
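+
+As an example of that last step, with ext3/ext4 on volume `0` of a resource
+named `r0` (XFS users would run `xfs_growfs` on the mount point instead):
+
+----------------------------
+# resize2fs /dev/drbd/by-res/r0/0
+----------------------------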
+
+
+[[s-shrinking-online]]
+==== Shrinking on-line
+
+
+WARNING: Online shrinking is only supported with external metadata.
+
+indexterm:[resource]Before shrinking a DRBD device, you _must_ shrink
+the layers above DRBD, that is, usually the file system. Since DRBD
+cannot ask the file system how much space it actually uses, you have
+to be careful in order not to cause data loss.
+
+NOTE: Whether or not the _filesystem_ can be shrunk on-line depends on
+the filesystem being used. Most filesystems do not support on-line
+shrinking. XFS does not support shrinking at all.
+
+To shrink DRBD on-line, issue the following command _after_ you have
+shrunk the file system residing on top of it:
+
+----------------------------
+# drbdadm -- --size=<new-size> resize <resource>
+----------------------------
+
+You may use the usual multiplier suffixes for _<new-size>_ (K, M, G
+etc.). After you have shrunk DRBD, you may also shrink the containing
+block device (if it supports shrinking).
+
+[[s-shrinking-offline]]
+==== Shrinking off-line
+
+indexterm:[resource]If you were to shrink a backing block device while
+DRBD is inactive, DRBD would refuse to attach to this block device
+during the next attach attempt, since it is now too small (in case
+external meta data is used), or it would be unable to find its meta
+data (in case internal meta data is used). To work around these
+issues, use this procedure (if you cannot use
+<<s-shrinking-online,on-line shrinking>>):
+
+
+WARNING: This is an advanced procedure. Use at your own discretion.
+
+* Shrink the file system from one node, while DRBD is still
+  configured.
+
+* Unconfigure your DRBD resource:
+
+----------------------------
+# drbdadm down <resource>
+----------------------------
+
+* Save the meta data in a text file prior to shrinking:
+
+----------------------------
+# drbdadm dump-md <resource> > /tmp/metadata
+----------------------------
+
+You must do this on both nodes, using a separate dump file for every
+node. _Do not_ dump the meta data on one node, and simply copy the dump
+file to the peer. This will not work.
+
+* Shrink the backing block device on both nodes.
+
+* Adjust the size information (`la-size-sect`) in the file
+  `/tmp/metadata` accordingly, on both nodes. Remember that
+  `la-size-sect` must be specified in sectors.
+
+* _Only if you are using internal metadata_ (which at this time has
+  probably been lost due to the shrinking process), re-initialize the
+  metadata area:
+
+----------------------------
+# drbdadm create-md <resource>
+----------------------------
+
+* Re-import the corrected meta data, on both nodes:
+
+----------------------------
+# drbdmeta_cmd=$(drbdadm -d dump-md <resource>)
+# ${drbdmeta_cmd/dump-md/restore-md} /tmp/metadata
+Valid meta-data in place, overwrite? [need to type 'yes' to confirm]
+yes
+Successfully restored meta data
+----------------------------
+
+NOTE: This example uses `bash` parameter substitution. It may or may not
+work in other shells. Check your `SHELL` environment variable if you
+are unsure which shell you are currently using.
+
+* Re-enable your DRBD resource:
+
+----------------------------
+# drbdadm up <resource>
+----------------------------
+
+
+[[s-disable-flushes]]
+=== Disabling backing device flushes
+
+CAUTION: You should only disable device flushes when running DRBD on
+devices with a battery-backed write cache (BBWC). Most storage
+controllers allow you to automatically disable the write cache when
+the battery is depleted, switching to write-through mode when the
+battery dies. It is strongly recommended to enable such a feature.
+
+Disabling DRBD's flushes when running without BBWC, or on BBWC with a
+depleted battery, is _likely to cause data loss_ and should not be
+attempted.
+
+DRBD allows you to enable and disable disk flushes separately for the
+replicated data set and DRBD's own
+meta data. Both of these options are enabled by default. If you wish
+to disable either (or both), you would set this in the `disk` section
+for the DRBD configuration file, `/etc/drbd.conf`.
+
+To disable disk flushes for the replicated data set, include the
+following line in your configuration:
+
+[source,drbd]
+----------------------------
+resource <resource> {
+  disk {
+    disk-flushes no;
+    ...
+  }
+  ...
+}
+----------------------------
+
+
+To disable disk flushes on DRBD's meta data, include the following
+line:
+
+[source,drbd]
+----------------------------
+resource <resource> {
+  disk {
+    md-flushes no;
+    ...
+  }
+  ...
+}
+----------------------------
+
+After you have modified your resource configuration (and synchronized
+your `/etc/drbd.conf` between nodes, of course), you may enable these
+settings by issuing this command on both nodes:
+
+----------------------------
+# drbdadm adjust <resource>
+----------------------------
+
+
+[[s-configure-split-brain-behavior]]
+=== Configuring split brain behavior
+
+[[s-split-brain-notification]]
+==== Split brain notification
+
+DRBD invokes the `split-brain` handler, if configured, at any time
+split brain is _detected_. To configure this handler, add the
+following item to your resource configuration:
+
+----------------------------
+resource <resource> {
+  handlers {
+    split-brain <handler>;
+    ...
+  }
+  ...
+}
+----------------------------
+
+_<handler>_ may be any executable present on the system.
+
+The DRBD distribution contains a split brain handler script that
+installs as `/usr/lib/drbd/notify-split-brain.sh`. It simply sends a
+notification e-mail message to a specified address. To configure the
+handler to send a message to `root@localhost` (which is expected to be
+an email address that forwards the notification to a real system
+administrator), configure the `split-brain` handler as follows:
+
+----------------------------
+resource <resource> {
+  handlers {
+    split-brain "/usr/lib/drbd/notify-split-brain.sh root";
+    ...
+  }
+  ...
+}
+----------------------------
+
+After you have made this modification on a running resource (and
+synchronized the configuration file between nodes), no additional
+intervention is needed to enable the handler. DRBD will simply invoke
+the newly-configured handler on the next occurrence of split brain.
+
+[[s-automatic-split-brain-recovery-configuration]]
+==== Automatic split brain recovery policies
+
+CAUTION: Configuring DRBD to automatically resolve data divergence
+situations resulting from split-brain (or other) scenarios
+is configuring for potential *automatic data loss*.
+Understand the implications, and don't do it if you don't mean to.
+
+TIP: Rather, you want to look into fencing policies, cluster manager
+integration, and redundant cluster manager communication links
+to *avoid* data divergence in the first place.
+
+To enable and configure DRBD's automatic split brain recovery
+policies, you must understand that DRBD offers several configuration
+options for this purpose. DRBD applies its split brain recovery
+procedures based on the number of nodes in the Primary role at the
+time the split brain is detected.
+To that end, DRBD examines
+the following keywords, all found in the resource's `net` configuration
+section:
+
+.`after-sb-0pri`
+Split brain has just been detected, but at this time the resource is
+not in the Primary role on any host. For this option, DRBD understands
+the following keywords:
+
+* `disconnect`: Do not recover automatically, simply invoke the
+  `split-brain` handler script (if configured), drop the connection and
+  continue in disconnected mode.
+
+
+* `discard-younger-primary`: Discard and roll back the modifications
+  made on the host which assumed the Primary role last.
+
+* `discard-least-changes`: Discard and roll back the modifications on
+  the host where fewer changes occurred.
+
+* `discard-zero-changes`: If there is any host on which no changes
+  occurred at all, simply apply all modifications made on the other
+  and continue.
+
+.`after-sb-1pri`
+Split brain has just been detected, and at this time the resource is
+in the Primary role on one host. For this option, DRBD understands the
+following keywords:
+
+* `disconnect`: As with `after-sb-0pri`, simply invoke the
+  `split-brain` handler script (if configured), drop the connection
+  and continue in disconnected mode.
+
+* `consensus`: Apply the same recovery policies as specified in
+  `after-sb-0pri`. If a split brain victim can be selected after
+  applying these policies, automatically resolve. Otherwise, behave
+  exactly as if `disconnect` were specified.
+
+* `call-pri-lost-after-sb`: Apply the recovery policies as specified
+  in `after-sb-0pri`. If a split brain victim can be selected after
+  applying these policies, invoke the `pri-lost-after-sb` handler on
+  the victim node. This handler must be configured in the
+  `handlers` section and is expected to forcibly remove the node from
+  the cluster.
+
+* `discard-secondary`: Whichever host is currently in the Secondary
+  role, make that host the split brain victim.
+
+.`after-sb-2pri`
+Split brain has just been detected, and at this time the resource is
+in the Primary role on both hosts. This option accepts the same
+keywords as `after-sb-1pri` except `discard-secondary` and `consensus`.
+
+NOTE: DRBD understands additional keywords for these three options,
+which have been omitted here because they are very rarely used. Refer
+to the `drbd.conf` man page for details on split brain recovery
+keywords not discussed here.
+
+For example, a resource which serves as the block device for a GFS or
+OCFS2 file system in dual-Primary mode may have its recovery policy
+defined as follows:
+
+----------------------------
+resource <resource> {
+  handlers {
+    split-brain "/usr/lib/drbd/notify-split-brain.sh root";
+    ...
+  }
+  net {
+    after-sb-0pri discard-zero-changes;
+    after-sb-1pri discard-secondary;
+    after-sb-2pri disconnect;
+    ...
+  }
+  ...
+}
+----------------------------
+
+
+[[s-three-nodes]]
+=== Creating a three-node setup
+
+A three-node setup involves one DRBD device _stacked_ atop another.
+
+[[s-stacking-considerations]]
+==== Device stacking considerations
+
+The following considerations apply to this type of setup:
+
+* The stacked device is the active one. Assume you have configured one
+  DRBD device `/dev/drbd0`, and the stacked device atop it is
+  `/dev/drbd10`, then `/dev/drbd10` will be the device that you mount
+  and use.
+
+* Device meta data will be stored twice, on the underlying DRBD device
+  _and_ the stacked DRBD device. On the stacked device, you must always
+  use internal meta data.
+  This means that the
+  effectively available storage area on a stacked device is slightly
+  smaller, compared to an unstacked device.
+
+* To get the stacked upper level device running, the underlying device
+  must be in the primary role.
+
+* To be able to synchronize the backup node, the stacked device on the
+  active node must be up and in the primary role.
+
+
+[[s-three-node-config]]
+==== Configuring a stacked resource
+
+In the following example, nodes are named 'alice', 'bob', and
+'charlie', with 'alice' and 'bob' forming a two-node cluster, and
+'charlie' being the backup node.
+
+[source,drbd]
+----------------------------
+resource r0 {
+  net {
+    protocol C;
+  }
+
+  on alice {
+    device /dev/drbd0;
+    disk /dev/sda6;
+    address 10.0.0.1:7788;
+    meta-disk internal;
+  }
+
+  on bob {
+    device /dev/drbd0;
+    disk /dev/sda6;
+    address 10.0.0.2:7788;
+    meta-disk internal;
+  }
+}
+
+resource r0-U {
+  net {
+    protocol A;
+  }
+
+  stacked-on-top-of r0 {
+    device /dev/drbd10;
+    address 192.168.42.1:7788;
+  }
+
+  on charlie {
+    device /dev/drbd10;
+    disk /dev/hda6;
+    address 192.168.42.2:7788; # Public IP of the backup node
+    meta-disk internal;
+  }
+}
+----------------------------
+
+As with any `drbd.conf` configuration file, this must be distributed
+across all nodes in the cluster -- in this case, three nodes. Notice
+the following extra keyword not found in an unstacked resource
+configuration:
+
+.`stacked-on-top-of`
+This option informs DRBD that the resource which contains it is a
+stacked resource. It replaces one of the `on` sections normally found
+in any resource configuration. Do not use `stacked-on-top-of` in a
+lower-level resource.
+
+NOTE: It is not a requirement to use Protocol A for
+stacked resources. You may select any of DRBD's replication protocols
+depending on your application.
+
+[[s-three-node-enable]]
+==== Enabling stacked resources
+
+To enable a stacked resource, you first enable its lower-level
+resource and promote it:
+----------------------------
+# drbdadm up r0
+# drbdadm primary r0
+----------------------------
+
+As with unstacked resources, you must create DRBD meta data on the
+stacked resources. This is done using the following command:
+
+----------------------------
+# drbdadm create-md --stacked r0-U
+----------------------------
+
+Then, you may enable the stacked resource:
+
+----------------------------
+# drbdadm up --stacked r0-U
+# drbdadm primary --stacked r0-U
+----------------------------
+
+After this, you may bring up the resource on the backup node, enabling
+three-node replication:
+
+----------------------------
+# drbdadm create-md r0-U
+# drbdadm up r0-U
+----------------------------
+
+In order to automate stacked resource management, you may integrate
+stacked resources in your cluster manager configuration. See the
+section on using stacked DRBD resources in Pacemaker clusters for
+information on doing this in a cluster managed by the Pacemaker
+cluster management framework.
+
+[[s-using-drbd-proxy]]
+=== Using DRBD Proxy
+
+[[s-drbd-proxy-deployment-considerations]]
+==== DRBD Proxy deployment considerations
+
+The DRBD Proxy processes can either be located
+directly on the machines where DRBD is set up, or they can be placed
+on distinct dedicated servers. A DRBD Proxy instance can serve as a
+proxy for multiple DRBD devices distributed across multiple nodes.
+
+DRBD Proxy is completely transparent to DRBD. Typically you will
+expect a high number of data packets in flight; therefore, the
+activity log should be reasonably large.
+Since this may cause longer re-sync
+runs after the crash of a primary node, it is recommended to enable
+DRBD's `csums-alg` setting.
+
+[[s-drbd-proxy-installation]]
+==== Installation
+
+To obtain DRBD Proxy, please contact your Linbit sales
+representative. Unless instructed otherwise, please always use the
+most recent DRBD Proxy release.
+
+To install DRBD Proxy on Debian and Debian-based systems, use the dpkg
+tool as follows (replace _version_ with your DRBD Proxy version, and
+_architecture_ with your target architecture):
+
+----------------------------
+# dpkg -i drbd-proxy_3.0.0_amd64.deb
+----------------------------
+
+To install DRBD Proxy on RPM based systems (like SLES or RHEL) use
+the rpm tool as follows (replace _version_ with your DRBD Proxy
+version, and _architecture_ with your target architecture):
+
+----------------------------
+# rpm -i drbd-proxy-3.0-3.0.0-1.x86_64.rpm
+----------------------------
+
+Also install the DRBD administration program `drbdadm`, since it is
+required to configure DRBD Proxy.
+
+This will install the DRBD Proxy binaries as well as an init script
+which usually goes into `/etc/init.d`. Please always use the init
+script to start/stop DRBD Proxy, since it also configures DRBD Proxy
+using the `drbdadm` tool.
+
+[[s-drbd-proxy-license]]
+==== License file
+
+When obtaining a license from Linbit, you will be sent a DRBD Proxy
+license file which is required to run DRBD Proxy. The file is called
+`drbd-proxy.license`; it must be copied into the `/etc` directory of
+the target machines, and be owned by the user/group `drbdpxy`.
+
+----------------------------
+# cp drbd-proxy.license /etc/
+----------------------------
+
+
+[[s-drbd-proxy-configuration]]
+==== Configuration
+
+DRBD Proxy is configured in DRBD's main configuration file. It is
+configured by an additional options section called `proxy` and
+additional `proxy on` sections within the host sections.
+
+Below is a DRBD configuration example for proxies running directly on
+the DRBD nodes:
+
+[source,drbd]
+----------------------------
+resource r0 {
+  net {
+    protocol A;
+  }
+  device minor 0;
+  disk /dev/sdb1;
+  meta-disk /dev/sdb2;
+
+  proxy {
+    memlimit 100M;
+    plugin {
+      zlib level 9;
+    }
+  }
+
+  on alice {
+    address 127.0.0.1:7789;
+    proxy on alice {
+      inside 127.0.0.1:7788;
+      outside 192.168.23.1:7788;
+    }
+  }
+
+  on bob {
+    address 127.0.0.1:7789;
+    proxy on bob {
+      inside 127.0.0.1:7788;
+      outside 192.168.23.2:7788;
+    }
+  }
+}
+----------------------------
+
+The `inside` IP address is used for communication between DRBD and the
+DRBD Proxy, whereas the `outside` IP address is used for communication
+between the proxies.
+
+[[s-drbd-proxy-controlling]]
+==== Controlling DRBD Proxy
+
+`drbdadm` offers the `proxy-up` and `proxy-down` subcommands to
+configure or delete the connection to the local DRBD Proxy process of
+the named DRBD resource(s). These commands are used by the `start` and
+`stop` actions which `/etc/init.d/drbdproxy` implements.
+
+DRBD Proxy has a low-level configuration tool, called
+`drbd-proxy-ctl`. When called without any options, it operates in
+interactive mode.
+
+To pass a command directly, avoiding interactive mode, use
+the `-c` parameter followed by the command.
+
+To display the available commands use:
+----------------------------
+# drbd-proxy-ctl -c "help"
+----------------------------
+
+Note the double quotes around the command being passed.
+
+
+----------------------------
+add connection <name> <listen-ip>:<port> <drbd-local-ip>:<port>
+   <drbd-remote-ip>:<port> <proxy-remote-ip>:<port>
+   Creates a communication path between two DRBD instances.
+
+set memlimit <name> <memlimit-in-bytes>
+   Sets memlimit for connection <name>
+
+del connection <name>
+   Deletes communication path named <name>.
+
+show
+   Shows currently configured communication paths.
+
+show memusage
+   Shows memory usage of each connection.
+
+show [h]subconnections
+   Shows currently established individual connections
+   together with some stats. With h outputs bytes in human
+   readable format.
+
+show [h]connections
+   Shows currently configured connections and their states
+   With h outputs bytes in human readable format.
+
+shutdown
+   Shuts down the drbd-proxy program. Attention: this
+   unconditionally terminates any DRBD connections running.
+
+Examples:
+   drbd-proxy-ctl -c "show hconnections"
+      prints configured connections and their status to stdout
+      Note that the quotes are required.
+
+   drbd-proxy-ctl -c "show subconnections" | cut -f 2,9,13
+      prints some more detailed info about the individual connections
+
+   watch -n 1 'drbd-proxy-ctl -c "show memusage"'
+      monitors memory usage.
+      Note that the quotes are required as listed above.
+
+----------------------------
+
+While the commands above are only accepted from UID 0 (i.e., the `root` user),
+there's one (information gathering) command that can be used by any user
+(provided that unix permissions allow access on the proxy socket at
+`/var/run/drbd-proxy/drbd-proxy-ctl.socket`); see the init script at
+`/etc/init.d/drbdproxy` about setting the rights.
+
+----------------------------
+print details
+   This prints detailed statistics for the currently active connections.
+   Can be used for monitoring, as this is the only command that may be
+   sent by a user with a UID other than 0.
+
+quit
+   Exits the client program (closes control connection).
+----------------------------
+
+
+[[s-drbd-proxy-plugins]]
+==== About DRBD Proxy plugins
+
+Since DRBD Proxy 3.0, the proxy allows you to enable a few specific
+plugins for the WAN connection.
+
+The currently available plugins are `zlib` and `lzma`.
+
+The `zlib` plugin uses the GZIP algorithm for compression.
+The advantage is fairly low CPU usage.
+
+The `lzma` plugin uses the liblzma2 library. It can
+use dictionaries of several hundred MiB; these allow for very
+efficient delta-compression of repeated data, even for small changes.
+`lzma` needs much more CPU and memory, but results in much better
+compression than `zlib`. The `lzma` plugin has to be enabled in your
+license.
+
+Please contact Linbit to find the best settings for your environment;
+they depend on the CPU (speed, threading count), memory, input and
+the available output bandwidth.
+
+Please note that the older `compression on` option in the
+`proxy` section is deprecated, and will be removed in
+a future release. Currently it is treated as `zlib level 9`.
+
+
+[[s-drbd-proxy-bwlimit]]
+==== Using a WAN Side Bandwidth Limit
+
+The experimental `bwlimit` option of DRBD Proxy is broken. Do not use
+it, as it may cause applications on DRBD to block on I/O. It will
+be removed.
+
+Instead, use the Linux kernel's traffic control framework to
+limit the bandwidth consumed by the proxy on the WAN side.
+
+In the following example, you would need to replace the interface
+name, the source port and the IP address of the peer.
+
+----------------------------
+# tc qdisc add dev eth0 root handle 1: htb default 1
+# tc class add dev eth0 parent 1: classid 1:1 htb rate 1gbit
+# tc class add dev eth0 parent 1:1 classid 1:10 htb rate 500kbit
+# tc filter add dev eth0 parent 1: protocol ip prio 16 u32 \
+        match ip sport 7000 0xffff \
+        match ip dst 192.168.47.11 flowid 1:10
+# tc filter add dev eth0 parent 1: protocol ip prio 16 u32 \
+        match ip dport 7000 0xffff \
+        match ip dst 192.168.47.11 flowid 1:10
+----------------------------
+
+You can remove this bandwidth limitation with
+
+----------------------------
+# tc qdisc del dev eth0 root handle 1
+----------------------------
+
+[[s-drbd-proxy-troubleshoot]]
+==== Troubleshooting
+
+DRBD Proxy logs via syslog using the `LOG_DAEMON` facility. Usually
+you will find DRBD Proxy messages in `/var/log/daemon.log`.
+
+Enabling debug mode in DRBD Proxy can be done with the following
+command:
+
+--------------------------
+# drbd-proxy-ctl -c 'set loglevel debug'
+--------------------------
+
+For example, if the proxy fails to connect it will log something like
+"Rejecting connection because I can't connect on the other side". In
+that case, please check if DRBD is running (not in StandAlone mode) on
+both nodes and if both proxies are running. Also double-check your
+configuration.
diff -Nru drbd-doc-8.4~20151102/UG8.4/en/benchmark.adoc drbd-doc-8.4~20220106/UG8.4/en/benchmark.adoc
--- drbd-doc-8.4~20151102/UG8.4/en/benchmark.adoc 1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/UG8.4/en/benchmark.adoc 2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,89 @@
+[[ch-benchmark]]
+== Measuring block device performance
+
+[[s-measure-throughput]]
+=== Measuring throughput
+
+When measuring the impact of using DRBD on a system's I/O throughput,
+the _absolute_ throughput the system is capable of is of little
+relevance. What is much more interesting is the _relative_ impact DRBD
+has on I/O performance. Thus it is always necessary to measure I/O
+throughput both with and without DRBD.
+
+CAUTION: The tests described in this section are intrusive; they
+overwrite data and bring DRBD devices out of sync. It is thus vital
+that you perform them only on scratch volumes which can be discarded
+after testing has completed.
+
+I/O throughput estimation works by writing reasonably large chunks
+of data to a block device, and measuring the amount of time the system
+took to complete the write operation. This can be easily done using a
+fairly ubiquitous utility, `dd`, whose reasonably recent versions
+include a built-in throughput estimation.
+
+A simple ``dd``-based throughput benchmark, assuming you have a scratch
+resource named `test` which is currently connected and in the
+secondary role on both nodes, looks like this:
+
+----------------------------
+# TEST_RESOURCE=test
+# TEST_DEVICE=$(drbdadm sh-dev $TEST_RESOURCE)
+# TEST_LL_DEVICE=$(drbdadm sh-ll-dev $TEST_RESOURCE)
+# drbdadm primary $TEST_RESOURCE
+# for i in $(seq 5); do
+    dd if=/dev/zero of=$TEST_DEVICE bs=512M count=1 oflag=direct
+  done
+# drbdadm down $TEST_RESOURCE
+# for i in $(seq 5); do
+    dd if=/dev/zero of=$TEST_LL_DEVICE bs=512M count=1 oflag=direct
+  done
+----------------------------
+
+This test simply writes a 512M chunk of data to your DRBD device, and
+then to its backing device for comparison. Both tests are repeated 5
+times each to allow for some statistical averaging. The relevant
+result is the throughput measurements generated by `dd`.
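+
+The throughput figure is the summary `dd` prints to standard error for
+each run. If you want a quick average over the five runs, a helper
+along these lines will do; this is only a sketch, and assumes a GNU
+coreutils `dd` that reports a `MB/s`-style summary line:
+
+----------------------------
+# for i in $(seq 5); do
+    dd if=/dev/zero of=$TEST_DEVICE bs=512M count=1 oflag=direct 2>&1
+  done | awk '/copied/ { sum += $(NF-1); unit = $NF; n++ }
+              END { print sum/n, unit }'
+----------------------------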
+
+NOTE: For freshly enabled DRBD devices, it is normal to see
+significantly reduced performance on the first `dd` run. This is due
+to the Activity Log being "cold", and is no cause for concern.
+
+[[s-measure-latency]]
+=== Measuring latency
+
+Latency measurements have objectives completely different from
+throughput benchmarks: in I/O latency tests, one writes a very small
+chunk of data (ideally the smallest chunk of data that the system can
+deal with), and observes the time it takes to complete that write. The
+process is usually repeated several times to account for normal
+statistical fluctuations.
+
+Just as with throughput measurements, I/O latency measurements may be
+performed using the ubiquitous `dd` utility, albeit with different
+settings and an entirely different focus of observation.
+
+Provided below is a simple ``dd``-based latency micro-benchmark,
+assuming you have a scratch resource named `test` which is currently
+connected and in the secondary role on both nodes:
+
+----------------------------
+# TEST_RESOURCE=test
+# TEST_DEVICE=$(drbdadm sh-dev $TEST_RESOURCE)
+# TEST_LL_DEVICE=$(drbdadm sh-ll-dev $TEST_RESOURCE)
+# drbdadm primary $TEST_RESOURCE
+# dd if=/dev/zero of=$TEST_DEVICE bs=512 count=1000 oflag=direct
+# drbdadm down $TEST_RESOURCE
+# dd if=/dev/zero of=$TEST_LL_DEVICE bs=512 count=1000 oflag=direct
+----------------------------
+
+This test writes 1,000 512-byte chunks of data to your DRBD device,
+and then to its backing device for comparison. 512 bytes is the
+smallest block size a Linux system (on all architectures except s390)
+is expected to handle.
+
+It is important to understand that throughput measurements generated
+by `dd` are completely irrelevant for this test; what is important is
+the _time_ elapsed during the completion of said 1,000 writes. Dividing
+this time by 1,000 gives the average latency of a single sector write.
diff -Nru drbd-doc-8.4~20151102/UG8.4/en/build-install-from-source.adoc drbd-doc-8.4~20220106/UG8.4/en/build-install-from-source.adoc
--- drbd-doc-8.4~20151102/UG8.4/en/build-install-from-source.adoc 1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/UG8.4/en/build-install-from-source.adoc 2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,499 @@
+[[ch-build-install-from-source]]
+== Building and installing DRBD from source
+
+[[s-downloading-drbd-sources]]
+=== Downloading the DRBD sources
+
+The source tarballs for both current and historic DRBD releases are
+available for download from http://oss.linbit.com/drbd/. Source
+tarballs, by convention, are named `drbd-x.y.z.tar.gz`, where x, y and
+z refer to the major, minor and bugfix release numbers.
+
+DRBD's compressed source archive is less than half a megabyte in
+size. To download it and uncompress it into your current working
+directory, issue the following commands:
+
+-------------------------------------
+$ wget http://oss.linbit.com/drbd/8.4/drbd-latest.tar.gz
+$ tar -xzf drbd-latest.tar.gz
+-------------------------------------
+
+NOTE: The use of `wget` for downloading the source tarball is purely
+an example. Of course, you may use any downloader you prefer.
+
+It is recommended to uncompress DRBD into a directory normally used
+for keeping source code, such as `/usr/src` or `/usr/local/src`. The
+examples in this guide assume `/usr/src`.
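+
+For instance, to follow that convention you might download and unpack
+in one go (the tarball name is an example; use whichever release you
+are after):
+
+-------------------------------------
+$ cd /usr/src
+$ wget http://oss.linbit.com/drbd/8.4/drbd-latest.tar.gz
+$ tar -xzf drbd-latest.tar.gz
+-------------------------------------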
+
+[[s-checking-out-git]]
+=== Checking out sources from the public DRBD source repository
+
+DRBD's source code is kept in a public http://git.or.cz[Git]
+repository, which may be browsed on-line at http://git.drbd.org/. To
+check out a specific DRBD release from the repository, you must first
+_clone_ your preferred DRBD branch. In this example, you would clone
+from the DRBD 8.4 branch:
+
+-------------------------------------
+$ git clone git://git.drbd.org/drbd-8.4.git
+-------------------------------------
+
+If your firewall does not permit TCP connections to port 9418, you may
+also check out via HTTP (please note that using Git via HTTP is much
+slower than its native protocol, so native Git is usually preferred
+whenever possible):
+
+-------------------------------------
+$ git clone http://git.drbd.org/drbd-8.4.git
+-------------------------------------
+
+Either command will create a Git checkout subdirectory, named
+`drbd-8.4`. To now move to a source code state equivalent to a
+specific DRBD release, issue the following commands:
+
+-------------------------------------
+$ cd drbd-8.4
+$ git checkout drbd-8.4.<x>
+-------------------------------------
+
+... where _<x>_ refers to the DRBD point release you wish to build.
+
+The checkout directory will now contain the equivalent of an unpacked
+DRBD source tarball of that specific version, enabling you to build
+DRBD from source.
+
+There are actually two minor differences between an unpacked source
+tarball and a Git checkout of the same release:
+
+* The Git checkout contains a `debian/` subdirectory, while the source
+  tarball does not. This is due to a request from Debian maintainers,
+  who prefer to add their own Debian build configuration to a pristine
+  upstream tarball.
+
+
+* The source tarball contains preprocessed man pages, the Git checkout
+  does not. Thus, building DRBD from a Git checkout requires a
+  complete Docbook toolchain for building the man pages, while this is
+  not a requirement for building from a source tarball.
+
+[[s-build-from-source]]
+=== Building DRBD from source
+
+[[s-build-prereq]]
+==== Checking build prerequisites
+
+Before being able to build DRBD from source, your build host must
+fulfill the following prerequisites:
+
+* `make`, `gcc`, the glibc development libraries, and the `flex` scanner
+  generator must be installed.
+
+NOTE: You should make sure that the `gcc` you use to compile the
+module is the same which was used to build the kernel you are
+running. If you have multiple `gcc` versions available on your system,
+DRBD's build system includes a facility to select a specific `gcc`
+version.
+
+* For building directly from a git checkout, GNU Autoconf is also
+  required. This requirement does not apply when building from a
+  tarball.
+
+* If you are running a stock kernel supplied by your distribution, you
+  should install a matching precompiled kernel headers package. These
+  are typically named `kernel-dev`, `kernel-headers`, `linux-headers` or
+  similar. In this case, you can skip <<s-build-prepare-kernel-tree>>
+  and continue with <<s-build-prepare-checkout>>.
+
+* If you are not running a distribution stock kernel (i.e. your system
+  runs on a kernel built from source with a custom configuration),
+  your kernel source files must be installed. Your distribution may
+  provide for this via its package installation mechanism;
+  distribution packages for kernel sources are typically named
+  `kernel-source` or similar.
+
+NOTE: On RPM-based systems, these packages will be named similar to
+`kernel-source-version.rpm`, which is easily confused with
+`kernel-version.src.rpm`. The former is the correct package to
+install for building DRBD.
+
+"Vanilla" kernel tarballs from the kernel.org archive are simply named
+`linux-version.tar.bz2` and should be unpacked in
+`/usr/src/linux-version`, with the symlink `/usr/src/linux` pointing
+to that directory.
+
+In the case of building DRBD against kernel sources (not headers),
+you must continue with <<s-build-prepare-kernel-tree>>.
+
+[[s-build-prepare-kernel-tree]]
+==== Preparing the kernel source tree
+
+To prepare your source tree for building DRBD, you must first enter
+the directory where your unpacked kernel sources are
+located. Typically this is `/usr/src/linux-version`, or simply a
+symbolic link named `/usr/src/linux`:
+
+-------------------------------------
+$ cd /usr/src/linux
+-------------------------------------
+
+The next step is recommended, though not strictly necessary. Be sure
+to copy your existing `.config` file to a safe location before
+performing it. This step essentially reverts your kernel source tree
+to its original state, removing any leftovers from an earlier build or
+configure run:
+
+-------------------------------------
+$ make mrproper
+-------------------------------------
+
+Now it is time to _clone_ your currently running kernel configuration
+into the kernel source tree. There are a few possible options for
+doing this:
+
+* Many reasonably recent kernel builds export the currently-running
+  configuration, in compressed form, via the `/proc` filesystem,
+  enabling you to copy from there:
+
+-------------------------------------
+$ zcat /proc/config.gz > .config
+-------------------------------------
+
+* SUSE kernel Makefiles include a `cloneconfig` target, so on those
+  systems, you can issue:
+
+-------------------------------------
+$ make cloneconfig
+-------------------------------------
+
+* Some installs put a copy of the kernel config into `/boot`, which
+  allows you to do this:
+
+-------------------------------------
+$ cp /boot/config-`uname -r` .config
+-------------------------------------
+
+* Finally, you may simply use a backup copy of a `.config` file which
+  you know to have been used for building the currently-running
+  kernel.
+
+[[s-build-prepare-checkout]]
+==== Preparing the DRBD build tree
+
+Any DRBD compilation requires that you first configure your DRBD
+source tree with the included `configure` script.
+
+NOTE: When building from a git checkout, the `configure` script does
+not yet exist. You must create it by simply typing `autoconf` at the
+top of the checkout.
+
+Invoking the configure script with the `--help` option returns a full
+list of supported options. The table below summarizes the most
+important ones:
+
+[[t-configure-options]]
+.Options supported by DRBD's `configure` script
+[format="csv",separator=";",options="header"]
+|===================================
+Option;Description;Default;Remarks
++--prefix+;Installation directory prefix;`/usr/local`;This is the default to maintain Filesystem Hierarchy Standard compatibility for locally installed, unpackaged software. In packaging, this is typically overridden with `/usr`.
++--localstatedir+;Local state directory;`/usr/local/var`;Even with a default `prefix`, most users will want to override this with `/var`.
++--sysconfdir+;System configuration directory;`/usr/local/etc`;Even with a default `prefix`, most users will want to override this with `/etc`.
++--with-km+;Build the DRBD kernel module;no;Enable this option when you are building a DRBD kernel module.
++--with-utils+;Build the DRBD userland utilities;yes;Disable this option when you are building a DRBD kernel module against a new kernel version, and not upgrading DRBD at the same time.
++--with-heartbeat+;Build DRBD Heartbeat integration;yes;You may disable this option unless you are planning to use DRBD's Heartbeat v1 resource agent or `dopd`.
++--with-pacemaker+;Build DRBD Pacemaker integration;yes;You may disable this option if you are not planning to use the Pacemaker cluster resource manager.
++--with-rgmanager+;Build DRBD Red Hat Cluster Suite integration;no;You should enable this option if you are planning to use DRBD with rgmanager, the Red Hat Cluster Suite cluster resource manager. Please note that you will need to pass `--with rgmanager` to `rpmbuild` to actually get the rgmanager package built.
++--with-xen+;Build DRBD Xen integration;yes (on x86 architectures);You may disable this option if you are not planning to use the `block-drbd` helper script for Xen integration.
++--with-bashcompletion+;Build programmable bash completion for `drbdadm`;yes;You may disable this option if you are using a shell other than bash, or if you do not want to utilize programmable completion for the `drbdadm` command.
++--enable-spec+;Create a distribution specific RPM spec file;no;For package builders only: you may use this option if you want to create an RPM spec file adapted to your distribution. See also <<s-build-rpm>>.
+|===================================
+
+Most users will want the following configuration options:
+
+-------------------------------------
+$ ./configure --prefix=/usr --localstatedir=/var --sysconfdir=/etc --with-km
+-------------------------------------
+
+
+The configure script will adapt your DRBD build to distribution
+specific needs. It does so by auto-detecting which distribution it is
+being invoked on, and setting defaults accordingly. When overriding
+defaults, do so with caution.
+
+The configure script creates a log file, `config.log`, in the
+directory where it was invoked. When reporting build issues on the
+mailing list, it is usually wise to either attach a copy of that file
+to your email, or point others to a location from where it may be
+viewed or downloaded.
+
+[[s-build-userland]]
+==== Building DRBD userspace utilities
+
+Building userspace utilities requires that you configured your DRBD
+source tree with the `--with-utils` option, which is enabled by
+default.
+
+To build DRBD's userspace utilities, invoke the following commands
+from the top of your DRBD checkout or expanded tarball:
+
+-------------------------------------
+$ make
+$ sudo make install
+-------------------------------------
+
+This will build the management utilities (`drbdadm`, `drbdsetup`, and
+`drbdmeta`), and install them in the appropriate locations. Based on
+the other `--with` options selected during the
+<<s-build-prepare-checkout,configure step>>, it will also install
+scripts to integrate DRBD with other applications.
+
+[[s-build-compile-kernel-module]]
+==== Compiling DRBD as a kernel module
+
+Building the DRBD kernel module requires that you configured your DRBD
+source tree with the `--with-km` option, which is disabled by default.
+
+[[s-build-against-running-kernel]]
+===== Building DRBD for the currently-running kernel
+
+After changing into your unpacked DRBD sources directory, you should
+now change into the kernel module subdirectory, simply named `drbd`,
+and build the module there:
+
+-------------------------------------
+$ cd drbd
+$ make clean all
+-------------------------------------
+
+This will build the DRBD kernel module to match your currently-running
+kernel, whose kernel source is expected to be accessible via the
+`/lib/modules/\`uname -r`/build` symlink.
+
+[[s-build-against-kernel-headers]]
+===== Building against precompiled kernel headers
+
+If the `/lib/modules/\`uname -r`/build` symlink does not exist, and you
+are building against a running stock kernel (one that was shipped
+pre-compiled with your distribution), you may also set the KDIR
+variable to point to the _matching_ kernel headers (as opposed to
+kernel sources) directory. Note that besides the actual kernel headers,
+commonly found in `/usr/src/linux-version/include`, the
+DRBD build process also looks for the kernel Makefile and
+configuration file (`.config`), which pre-built kernel headers
+packages commonly include.
+
+To build against precompiled kernel headers, issue, for example:
+
+-------------------------------------
+$ cd drbd
+$ make clean
+$ make KDIR=/lib/modules/2.6.38/build
+-------------------------------------
+
+
+[[s-build-against-source-tree]]
+===== Building against a kernel source tree
+
+If you are building DRBD against a kernel _other_ than your currently
+running one, and you do not have precompiled kernel sources for your
+target kernel available, you need to build DRBD against a complete
+target kernel source tree. To do so, set the KDIR variable to point to
+the kernel sources directory:
+
+-------------------------------------
+$ cd drbd
+$ make clean
+$ make KDIR=/path/to/kernel/source
+-------------------------------------
+
+[[s-build-customcc]]
+===== Using a non-default C compiler
+
+You also have the option of setting the compiler explicitly via the CC
+variable. This is known to be necessary on some Fedora versions, for
+example:
+
+-------------------------------------
+$ cd drbd
+$ make clean
+$ make CC=gcc32
+-------------------------------------
+
+[[s-build-modinfo]]
+===== Checking for successful build completion
+
+If the module build completes successfully, you should see a kernel
+module file named `drbd.ko` in the `drbd` directory. You may
+interrogate the newly-built module with `/sbin/modinfo drbd.ko` if you
+are so inclined.
+
+
+/////////////////////////////////////
+[[s-build-install]]
+=== Installing DRBD
+
+Provided your DRBD build completed successfully, you will be able to
+install DRBD by issuing these commands:
+
+-------------------------------------
+$ cd /usr/src/drbd-x.y.z
+$ sudo make install
+-------------------------------------
+
+The DRBD userspace management tools (`drbdadm`, `drbdsetup`, and
+`drbdmeta`) will now be installed in +/sbin+.
+
+Note that any kernel upgrade will require you to rebuild and reinstall
+the DRBD kernel module to match the new kernel. See
+<<t-configure-options>> for configure
+options that may speed up the process.
+
+The DRBD userspace tools, in contrast, need only be rebuilt
+and reinstalled when upgrading to a new DRBD version. If at any
+time you upgrade to a new kernel _and_ new DRBD
+version, you will need to upgrade both components.
+
+/////////////////////////////////////
+
+[[s-build-rpm]]
+=== Building a DRBD RPM package
+
+The DRBD build system contains a facility to build RPM packages
+directly out of the DRBD source tree. For building RPMs,
+<<s-build-prereq>> applies essentially in the same way as for building
+and installing with `make`, except that you also need the RPM build
+tools, of course.
+
+Also, see <<s-build-against-kernel-headers>> and
+<<s-build-against-source-tree>> if you are not building against a
+running kernel with precompiled headers available.
+
+The build system offers two approaches for building RPMs. The simpler
+approach is to simply invoke the `rpm` target in the top-level
+Makefile:
+
+-------------------------------------
+$ ./configure
+$ make rpm
+$ make km-rpm
+-------------------------------------
+
+This approach will auto-generate spec files from pre-defined
+templates, and then use those spec files to build binary RPM packages.
+
+The `make rpm` approach generates a number of RPM packages:
+
+[[t-rpm-packages]]
+.DRBD userland RPM packages
+[format="csv",separator=";",options="header"]
+|===================================
+Package name;Description;Dependencies;Remarks
++drbd+;DRBD meta-package;All other `drbd-*` packages;Top-level virtual package. When installed, this pulls in all other userland packages as dependencies.
++drbd-utils+;Binary administration utilities;;Required for any DRBD enabled host
++drbd-udev+;udev integration facility;`drbd-utils`, `udev`;Enables udev to manage user-friendly symlinks to DRBD devices
++drbd-xen+;Xen DRBD helper scripts;`drbd-utils`, `xen`;Enables xend to auto-manage DRBD resources
++drbd-heartbeat+;DRBD Heartbeat integration scripts;`drbd-utils`, `heartbeat`;Enables DRBD management by legacy v1-style Heartbeat clusters
++drbd-pacemaker+;DRBD Pacemaker integration scripts;`drbd-utils`, `pacemaker`;Enables DRBD management by Pacemaker clusters
++drbd-rgmanager+;DRBD Red Hat Cluster Suite integration scripts;`drbd-utils`, `rgmanager`;Enables DRBD management by rgmanager, the Red Hat Cluster Suite resource manager
++drbd-bashcompletion+;Programmable bash completion;`drbd-utils`, `bash-completion`;Enables programmable bash completion for the `drbdadm` utility
+|===================================
+
+The other, more flexible approach is to have `configure` generate the
+spec file, make any changes you deem necessary, and then use the
+`rpmbuild` command:
+
+-------------------------------------
+$ ./configure --enable-spec
+$ make tgz
+$ cp drbd*.tar.gz `rpm -E %sourcedir`
+$ rpmbuild -bb drbd.spec
+-------------------------------------
+
+If you are about to build RPMs for both the DRBD userspace utilities
+and the kernel module, use:
+
+-------------------------------------
+$ ./configure --enable-spec --with-km
+$ make tgz
+$ cp drbd*.tar.gz `rpm -E %sourcedir`
+$ rpmbuild -bb drbd.spec
+$ rpmbuild -bb drbd-kernel.spec
+-------------------------------------
+
+The RPMs will be created wherever your system RPM configuration (or
+your personal `~/.rpmmacros` configuration) dictates.
+
+After you have created these packages, you can install, upgrade, and
+uninstall them as you would any other RPM package in your system.
+
+Note that any kernel upgrade will require you to generate a new
+`drbd-km` package to match the new kernel.
+
+The DRBD userland packages, in contrast, need only be recreated when
+upgrading to a new DRBD version. If at any time you upgrade to a new
+kernel _and_ new DRBD version, you will need to upgrade both packages.
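+
+To see at a glance what a build produced and where, you can ask `rpm`
+to expand the relevant macro; a quick sketch (the exact paths depend
+on your RPM macro configuration):
+
+-------------------------------------
+$ ls "$(rpm -E %_rpmdir)"/*/drbd-*.rpm
+$ rpm -qpi "$(rpm -E %_rpmdir)"/*/drbd-utils-*.rpm
+-------------------------------------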
+
+[[s-build-deb]]
+=== Building a DRBD Debian package
+
+The DRBD build system contains a facility to build Debian packages
+directly out of the DRBD source tree. For building Debian packages,
+<<s-build-prereq>> applies essentially in the same way as for building
+and installing with `make`, except that you of course also need the
+`dpkg-dev` package containing the Debian packaging tools, and
+`fakeroot` if you want to build DRBD as a non-root user (highly
+recommended).
+
+Also, see <<s-build-against-kernel-headers>> if you are not building
+against a running kernel with precompiled headers available.
+
+The DRBD source tree includes a `debian` subdirectory containing the
+required files for Debian packaging. That subdirectory, however, is
+not included in the DRBD source tarballs -- instead, you will
+need to <<s-checking-out-git,create a Git checkout>> of a _tag_
+associated with a specific DRBD release.
+
+Once you have created your checkout in this fashion, you can issue the
+following commands to build DRBD Debian packages:
+
+-------------------------------------
+$ dpkg-buildpackage -rfakeroot -b -uc
+-------------------------------------
+
+NOTE: This (example) `dpkg-buildpackage` invocation enables a
+binary-only build (`-b`) by a non-root user (`-rfakeroot`),
+disabling cryptographic signature for the changes file (`-uc`). Of
+course, you may prefer other build options, see the
+`dpkg-buildpackage` man page for details.
+
+This build process will create two Debian packages:
+
+* A package containing the DRBD userspace tools, named
+  `drbd8-utils_x.y.z-BUILD_ARCH.deb`;
+
+* A module source package suitable for `module-assistant` named
+  `drbd8-module-source_x.y.z-BUILD_all.deb`.
+
+After you have created these packages, you can install, upgrade, and
+uninstall them as you would any other Debian package in your system.
+
+Building and installing the actual kernel module from the installed
+module source package is easily accomplished via Debian's
+`module-assistant` facility:
+
+-------------------------------------
+# module-assistant auto-install drbd8
+-------------------------------------
+
+You may also use the shorthand form of
+the above command:
+
+-------------------------------------
+# m-a a-i drbd8
+-------------------------------------
+
+Note that any kernel upgrade will require you to rebuild the kernel
+module (with `module-assistant`, as just described) to match the new
+kernel. The `drbd8-utils` and `drbd8-module-source` packages, in
+contrast, only need to be recreated when upgrading to a new DRBD
+version. If at any time you upgrade to a new kernel _and_ new DRBD
+version, you will need to upgrade both packages.
diff -Nru drbd-doc-8.4~20151102/UG8.4/en/configure.adoc drbd-doc-8.4~20220106/UG8.4/en/configure.adoc
--- drbd-doc-8.4~20151102/UG8.4/en/configure.adoc 1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/UG8.4/en/configure.adoc 2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,438 @@
+[[ch-configure]]
+== Configuring DRBD
+
+[[s-prepare-storage]]
+=== Preparing your lower-level storage
+
+After you have installed DRBD, you must set aside a roughly
+identically sized storage area on both cluster nodes. This will
+become the _lower-level device_ for your DRBD
+resource. You may use any type of block device found on your
+system for this purpose. Typical examples include:
+
+* A hard drive partition (or a full physical hard drive),
+
+* a software RAID device,
+
+* an LVM Logical Volume or any other block device configured by the
+  Linux device-mapper infrastructure (an example follows below),
+
+* any other block device type found on your system.
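+
+For example, to carve out a Logical Volume of suitable size as the
+lower-level device, you might issue the following on both nodes. The
+volume group name `vg0` and the size are assumptions; adjust them to
+your setup:
+
+-------------------------------------
+# lvcreate --name r0-data --size 10G vg0
+-------------------------------------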
+
+You may also use _resource stacking_, meaning you can use one DRBD
+device as a lower-level device for another. Some specific
+considerations apply to stacked resources; their configuration is
+covered in detail in <<s-three-nodes>>.
+
+NOTE: While it is possible to use loop devices as lower-level devices
+for DRBD, doing so is not recommended due to deadlock issues.
+
+It is _not_ necessary for this storage area to be empty before you
+create a DRBD resource from it. In fact it is a common use case to
+create a two-node cluster from a previously non-redundant
+single-server system using DRBD (some caveats apply -- please refer to
+the notes on DRBD meta data if you are planning to do this).
+
+For the purposes of this guide, we assume a very simple setup:
+
+* Both hosts have a free (currently unused) partition named
+  `/dev/sda7`.
+
+* We are using internal meta data.
+
+[[s-prepare-network]]
+=== Preparing your network configuration
+
+It is recommended, though not strictly required, that you run your
+DRBD replication over a dedicated connection. At the time of this
+writing, the most reasonable choice for this is a direct,
+back-to-back, Gigabit Ethernet connection. When DRBD is run
+over switches, use of redundant components and the `bonding` driver
+(in `active-backup` mode) is recommended.
+
+It is generally not recommended to run DRBD replication via routers,
+for reasons of fairly obvious performance drawbacks (adversely
+affecting both throughput and latency).
+
+In terms of local firewall considerations, it is important to
+understand that DRBD (by convention) uses TCP ports from 7788 upwards,
+with every resource listening on a separate port. DRBD uses _two_
+TCP connections for every resource configured. For proper DRBD
+functionality, it is required that these connections are allowed by
+your firewall configuration.
+
+Security considerations other than firewalling may also apply if a
+Mandatory Access Control (MAC) scheme such as SELinux or AppArmor is
+enabled. You may have to adjust your local security policy so it does
+not keep DRBD from functioning properly.
+
+You must, of course, also ensure that the TCP ports
+for DRBD are not already used by another application.
+
+It is not possible to configure a DRBD resource to support more than
+one TCP connection. If you want to provide for DRBD connection
+load-balancing or redundancy, you can easily do so at the Ethernet
+level (again, using the `bonding` driver).
+
+For the purposes of this guide, we assume a
+very simple setup:
+
+* Our two DRBD hosts each have a currently unused network interface,
+  `eth1`, with IP addresses `10.1.1.31` and `10.1.1.32` assigned to it,
+  respectively.
+
+* No other services are using TCP ports 7788 through 7799 on either
+  host.
+
+* The local firewall configuration allows both inbound and outbound
+  TCP connections between the hosts over these ports (a sample rule
+  set follows below).
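+
+On a host firewalled with `iptables`, rules along the following lines
+would satisfy that last assumption. This is only a sketch, written
+from 'alice's' perspective using the example addresses above:
+
+-------------------------------------
+# iptables -A INPUT  -p tcp -s 10.1.1.32 --dport 7788:7799 -j ACCEPT
+# iptables -A OUTPUT -p tcp -d 10.1.1.32 --dport 7788:7799 -j ACCEPT
+-------------------------------------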
Such a configuration, +however, quickly becomes cluttered and hard to manage, which is why +the multiple-file approach is the preferred one. + +Regardless of which approach you employ, you should always make sure +that `drbd.conf`, and any other files it includes, are _exactly +identical_ on all participating cluster nodes. + +The DRBD source tarball contains an example configuration file in the +`scripts` subdirectory. Binary installation packages will either +install this example configuration directly in `/etc`, or in a +package-specific documentation directory such as +`/usr/share/doc/packages/drbd`. + +This section describes only those few aspects of the configuration +file which are absolutely necessary to understand in order to get DRBD +up and running. The configuration file's syntax and contents are +documented in great detail in the man page of `drbd.conf`. + + +[[s-drbdconf-example]] +==== Example configuration + +For the purposes of this guide, we assume a +minimal setup in line with the examples given in the +previous sections: + +.Simple DRBD configuration (`/etc/drbd.d/global_common.conf`) +------------------------------------- +global { + usage-count yes; +} +common { + net { + protocol C; + } +} +------------------------------------- + +.Simple DRBD resource configuration (`/etc/drbd.d/r0.res`) +------------------------------------- +resource r0 { + on alice { + device /dev/drbd1; + disk /dev/sda7; + address 10.1.1.31:7789; + meta-disk internal; + } + on bob { + device /dev/drbd1; + disk /dev/sda7; + address 10.1.1.32:7789; + meta-disk internal; + } +} +------------------------------------- + +This example configures DRBD in the following fashion: + +* You "opt in" to be included in DRBD's usage statistics (see + <>). + +* Resources are configured to use fully synchronous replication + (<>) unless explicitly specified + otherwise. + +* Our cluster consists of two nodes, 'alice' and 'bob'. + +* We have a resource arbitrarily named `r0` which uses `/dev/sda7` as + the lower-level device, and is configured with + <>. + +* The resource uses TCP port 7789 for its network connections, and + binds to the IP addresses 10.1.1.31 and 10.1.1.32, respectively. + +The configuration above implicitly creates one volume in the +resource, numbered zero (`0`). For multiple volumes in one resource, +modify the syntax as follows: + +.Multi-volume DRBD resource configuration (`/etc/drbd.d/r0.res`) +------------------------------------- +resource r0 { + volume 0 { + device /dev/drbd1; + disk /dev/sda7; + meta-disk internal; + } + volume 1 { + device /dev/drbd2; + disk /dev/sda8; + meta-disk internal; + } + on alice { + address 10.1.1.31:7789; + } + on bob { + address 10.1.1.32:7789; + } +} +------------------------------------- + +NOTE: Volumes may also be added to existing resources on the fly. For +an example see <>. + +[[s-drbdconf-global]] +==== The `global` section + +This section is allowed only once in the configuration. It is normally +in the `/etc/drbd.d/global_common.conf` file. In a single-file +configuration, it should go to the very top of the configuration +file. Of the few options available in this section, only one is of +relevance to most users: + +[[fp-usage-count]] +.`usage-count` +The DRBD project keeps statistics about the usage of various DRBD +versions. This is done by contacting an HTTP server every time a new +DRBD version is installed on a system. This can be disabled by setting +`usage-count no;`. 
+The default is `usage-count ask;`, which will
+prompt you every time you upgrade DRBD.
+
+DRBD's usage statistics are, of course, publicly available: see
+http://usage.drbd.org.
+
+
+[[s-drbdconf-common]]
+==== The `common` section
+
+This section provides a shorthand method to define configuration
+settings inherited by every resource. It is normally found in
+`/etc/drbd.d/global_common.conf`. You may define any option you can
+also define on a per-resource basis.
+
+Including a `common` section is not strictly required, but strongly
+recommended if you are using more than one resource. Otherwise, the
+configuration quickly becomes convoluted by repeatedly-used options.
+
+In the example above, we included `net { protocol C; }` in the
+`common` section, so every resource configured (including `r0`)
+inherits this option unless it has another `protocol` option
+configured explicitly. For the other replication protocols available,
+see the section on replication modes.
+
+[[s-drbdconf-resource]]
+==== The `resource` sections
+
+A per-resource configuration file is usually named
+`/etc/drbd.d/<resource>.res`. Any DRBD resource you define must be
+named by specifying a resource name in the configuration. You may use
+any arbitrary identifier, however the name must not contain characters
+other than those found in the US-ASCII character set, and must also
+not include whitespace.
+
+Every resource configuration must also have two `on <host>` sub-sections
+(one for every cluster node). All other configuration settings are
+either inherited from the `common` section (if it exists), or derived
+from DRBD's default settings.
+
+In addition, options with equal values on both hosts
+can be specified directly in the `resource` section. Thus, we can
+further condense our example configuration as follows:
+
+-------------------------------------
+resource r0 {
+  device /dev/drbd1;
+  disk /dev/sda7;
+  meta-disk internal;
+  on alice {
+    address 10.1.1.31:7789;
+  }
+  on bob {
+    address 10.1.1.32:7789;
+  }
+}
+-------------------------------------
+
+
+[[s-first-time-up]]
+=== Enabling your resource for the first time
+
+After you have completed initial resource configuration as outlined in
+the previous sections, you can bring up your resource.
+
+Each of the following steps must be completed on both nodes.
+
+Please note that with our example config snippets (`resource r0 { ... }`),
+`<resource>` would be `r0`.
+
+.Create device metadata
+This step must be completed only on initial device
+creation. It initializes DRBD's metadata:
+-------------------------------------
+# drbdadm create-md <resource>
+v08 Magic number not found
+Writing meta data...
+initialising activity log
+NOT initializing bitmap
+New drbd meta data block successfully created.
+-------------------------------------
+
+.Enable the resource
+This step associates the resource with its backing device (or devices,
+in case of a multi-volume resource), sets replication parameters, and
+connects the resource to its peer:
+-------------------------------------
+# drbdadm up <resource>
+-------------------------------------
+
+.Observe `/proc/drbd`
+DRBD's virtual status file in the `/proc` filesystem, `/proc/drbd`,
+should now contain information similar to the following:
+
+-------------------------------------
+# cat /proc/drbd
+version: 8.4.1 (api:1/proto:86-100)
+GIT-hash: 91b4c048c1a0e06777b5f65d312b38d47abaea80 build by buildsystem@linbit, 2011-12-20 12:58:48
+ 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
+    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:524236
+-------------------------------------
+
+NOTE: The __Inconsistent__/__Inconsistent__ disk state is expected at this
+point.
+
+By now, DRBD has successfully allocated both disk and network
+resources and is ready for operation. What it does not know yet is
+which of your nodes should be used as the source of the initial device
+synchronization.
+
+[[s-initial-full-sync]]
+=== The initial device synchronization
+
+There are two more steps required for DRBD to become fully
+operational:
+
+.Select an initial sync source
+If you are dealing with newly-initialized, empty disks, this choice is
+entirely arbitrary. If one of your nodes already has valuable data
+that you need to preserve, however, _it is of crucial importance_ that
+you select that node as your synchronization source. If you do the
+initial device synchronization in the wrong direction, you will lose
+that data. Exercise caution.
+
+
+.Start the initial full synchronization
+This step must be performed on only one node, only on initial resource
+configuration, and only on the node you selected as the
+synchronization source. To perform this step, issue this command:
+
+-------------------------------------
+# drbdadm primary --force <resource>
+-------------------------------------
+
+After issuing this command, the initial full synchronization will
+commence. You will be able to monitor its progress via
+`/proc/drbd`. It may take some time depending on the size of the
+device.
+
+By now, your DRBD device is fully operational, even before the initial
+synchronization has completed (albeit with slightly reduced
+performance). You may now create a filesystem on the device, use it as
+a raw block device, mount it, and perform any other operation you
+would with an accessible block device.
+
+You will now probably want to continue with <<ch-admin>>, which
+describes common administrative tasks to perform on your resource.
+
+[[s-using-truck-based-replication]]
+=== Using truck based replication
+
+In order to preseed a remote node with data which is then to be kept
+synchronized, and to skip the initial device synchronization, follow
+these steps.
+
+This assumes that your local node has a configured, but disconnected
+DRBD resource in the Primary role. That is to say, device
+configuration is completed, identical `drbd.conf` copies exist on both
+nodes, and you have issued the commands for
+<<s-initial-full-sync,initial resource promotion>> on your local node
+-- but the remote node is not connected yet.
+
+
+* On the local node, issue the following command:
+-------------------------------------
+# drbdadm new-current-uuid --clear-bitmap <resource>
+-------------------------------------
+
+* Create a consistent, verbatim copy of the resource's data _and its
+  metadata_.
+  You may do so, for example, by removing a hot-swappable
+  drive from a RAID-1 mirror. You would, of course, replace it with a
+  fresh drive, and rebuild the RAID set, to ensure continued
+  redundancy. But the removed drive is a verbatim copy that can now be
+  shipped off site. If your local block device supports snapshot
+  copies (such as when using DRBD on top of LVM), you may also create
+  a bitwise copy of that snapshot using `dd`.
+
+
+* On the local node, issue:
+-------------------------------------
+# drbdadm new-current-uuid <resource>
+-------------------------------------
+
+Note the absence of the `--clear-bitmap` option in this second
+invocation.
+
+* Physically transport the copies to the remote peer location.
+
+* Add the copies to the remote node. This may again be a matter of
+  plugging in a physical disk, or grafting a bitwise copy of your shipped
+  data onto existing storage on the remote node. Be sure to restore
+  or copy not only your replicated data, but also the associated DRBD
+  metadata. If you fail to do so, the disk shipping process is moot.
+
+* Bring up the resource on the remote node:
+-------------------------------------
+# drbdadm up <resource>
+-------------------------------------
+
+After the two peers connect, they will not initiate a full device
+synchronization. Instead, the automatic synchronization that now
+commences only covers those blocks that changed since the invocation
+of `drbdadm{nbsp}new-current-uuid{nbsp}--clear-bitmap{nbsp}<resource>`.
+
+Even if there were _no_ changes whatsoever since then, there may still
+be a brief synchronization period due to areas covered by the
+<> being rolled back on the new
+Secondary. This may be mitigated by the use of
+<<s-checksum-sync,checksum-based synchronization>>.
+
+You may use this same procedure regardless of whether the resource is
+a regular DRBD resource, or a stacked resource. For stacked resources,
+simply add the `-S` or `--stacked` option to `drbdadm`, as shown in the
+sketch below.
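+
+For illustration, a minimal sketch of the command sequence on the local
+node for a stacked resource follows; the resource name `r0-U` is only an
+example, and the middle step stands for the copy-and-ship procedure
+described above:
+
+-------------------------------------
+# drbdadm -S new-current-uuid --clear-bitmap r0-U
+# (create and ship the verbatim copy of data and metadata here)
+# drbdadm -S new-current-uuid r0-U
+-------------------------------------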
diff -Nru drbd-doc-8.4~20151102/UG8.4/en/drbd-users-guide.adoc drbd-doc-8.4~20220106/UG8.4/en/drbd-users-guide.adoc
--- drbd-doc-8.4~20151102/UG8.4/en/drbd-users-guide.adoc 1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/UG8.4/en/drbd-users-guide.adoc 2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,60 @@
+// vim: set ft=asciidoc :
+:doctype: article
+:source-highlighter: bash
+:listing-caption: Listing
+:icons: font
+:icon-set: fa
+:toc:
+:sectnums:
+:title-logo-image: image:images/linbit-logo-2017.svg[top=-15,width='650',align='center']
+
+= The DRBD User's Guide
+
+include::about.adoc[]
+
+[[p-intro]]
+= Introduction to DRBD
+
+include::fundamentals.adoc[]
+include::features.adoc[]
+
+
+[[p-build-install-configure]]
+= Building, installing and configuring DRBD
+
+include::install-packages.adoc[]
+// include::build-install-from-source.adoc[]
+include::configure.adoc[]
+
+[[p-work]]
+= Working with DRBD
+include::administration.adoc[]
+include::troubleshooting.adoc[]
+
+[[p-apps]]
+= DRBD-enabled applications
+
+include::pacemaker.adoc[]
+include::rhcs.adoc[]
+include::lvm.adoc[]
+include::gfs.adoc[]
+include::ocfs2.adoc[]
+include::xen.adoc[]
+
+[[p-performance]]
+= Optimizing DRBD performance
+
+include::benchmark.adoc[]
+include::throughput.adoc[]
+include::latency.adoc[]
+
+[[p-learn]]
+= Learning more about DRBD
+
+include::internals.adoc[]
+include::more-info.adoc[]
+
+[[p-appendices]]
+= Appendices
+
+include::recent-changes.adoc[]
diff -Nru drbd-doc-8.4~20151102/UG8.4/en/features.adoc drbd-doc-8.4~20220106/UG8.4/en/features.adoc
--- drbd-doc-8.4~20151102/UG8.4/en/features.adoc 1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/UG8.4/en/features.adoc 2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,579 @@
+[[ch-features]]
+== DRBD Features
+
+This chapter discusses various useful DRBD features, and gives some
+background information about them. Some of these features will be
+important to most users, while others will only be relevant in very
+specific deployment scenarios. <<ch-admin>> and <<ch-troubleshooting>>
+contain instructions on how to enable and use these features in
+day-to-day operation.
+
+[[s-single-primary-mode]]
+=== Single-primary mode
+
+In single-primary mode, a <<s-resources,resource>> is, at any given
+time, in the primary role on only one cluster member. Since it is
+guaranteed that only one cluster node manipulates the data at any
+moment, this mode can be used with any conventional file system (ext3,
+ext4, XFS etc.).
+
+Deploying DRBD in single-primary mode is the canonical approach for
+high availability (fail-over capable) clusters.
+
+[[s-dual-primary-mode]]
+=== Dual-primary mode
+
+In dual-primary mode, a resource is, at any given time, in the
+primary role on both cluster nodes. Since concurrent access to the
+data is thus possible, this mode requires the use of a shared cluster
+file system that utilizes a distributed lock manager. Examples include
+<<ch-gfs,GFS>> and <<ch-ocfs2,OCFS2>>.
+
+Deploying DRBD in dual-primary mode is the preferred approach for
+load-balancing clusters which require concurrent data access from two
+nodes. This mode is disabled by default, and must be enabled
+explicitly in DRBD's configuration file.
+
+See <> for information on enabling dual-primary
+mode for specific resources.
+
+[[s-replication-protocols]]
+=== Replication modes
+
+DRBD supports three distinct replication modes, allowing three degrees
+of replication synchronicity.
+
+[[fp-protocol-a]]
+.Protocol A
+Asynchronous replication protocol.
+Local write operations on the
+primary node are considered completed as soon as the local disk write
+has finished, and the replication packet has been placed in the local
+TCP send buffer. In the event of forced fail-over, data loss may
+occur. The data on the standby node is consistent after fail-over;
+however, the most recent updates performed prior to the crash could be
+lost. Protocol A is most often used in long distance replication scenarios.
+When used in combination with DRBD Proxy it makes an effective
+disaster recovery solution. See <<s-drbd-proxy>> for more information.
+
+
+[[fp-protocol-b]]
+.Protocol B
+Memory synchronous (semi-synchronous) replication protocol. Local
+write operations on the primary node are considered completed as soon
+as the local disk write has occurred, and the replication packet has
+reached the peer node. Normally, no writes are lost in case of forced
+fail-over. However, in the event of simultaneous power failure on both
+nodes and concurrent, irreversible destruction of the primary's data
+store, the most recent writes completed on the primary may be lost.
+
+[[fp-protocol-c]]
+.Protocol C
+Synchronous replication protocol. Local write operations on the
+primary node are considered completed only after both the local and
+the remote disk write have been confirmed. As a result, loss of a
+single node is guaranteed not to lead to any data loss. Data loss is,
+of course, inevitable even with this replication protocol if both
+nodes (or their storage subsystems) are irreversibly destroyed at the
+same time.
+
+By far, the most commonly used replication protocol in DRBD setups is
+protocol C.
+
+The choice of replication protocol influences two factors of your
+deployment: _protection_ and _latency_. _Throughput_, by contrast, is
+largely independent of the replication protocol selected.
+
+See <<s-drbdconf-example>> for an example resource configuration
+which demonstrates replication protocol configuration.
+
+[[s-replication-transports]]
+=== Multiple replication transports
+
+DRBD's replication and synchronization framework socket layer supports
+multiple low-level transports:
+
+.TCP over IPv4
+This is the canonical implementation, and DRBD's default. It may be
+used on any system that has IPv4 enabled.
+
+.TCP over IPv6
+When configured to use standard TCP sockets for replication and
+synchronization, DRBD can also use IPv6 as its network protocol. This
+is equivalent in semantics and performance to IPv4, albeit using a
+different addressing scheme.
+
+.SDP
+SDP is an implementation of BSD-style sockets for RDMA capable
+transports such as InfiniBand. SDP is available as part of the OFED
+stack for most current distributions. SDP uses an IPv4-style
+addressing scheme. Employed over an InfiniBand interconnect, SDP
+provides a high-throughput, low-latency replication network to DRBD.
+
+.SuperSockets
+SuperSockets replace the TCP/IP portions of the stack with a single,
+monolithic, highly efficient and RDMA capable socket
+implementation. DRBD can use this socket type for very low latency
+replication. SuperSockets must run on specific hardware which is
+currently available from a single vendor, Dolphin Interconnect
+Solutions.
+
+[[s-resync]]
+=== Efficient synchronization
+
+(Re-)synchronization is distinct from device replication. While
+replication occurs on any write event to a resource in the primary
+role, synchronization is decoupled from incoming writes. Rather, it
+affects the device as a whole.
+
+Synchronization is necessary if the replication link has been
+interrupted for any reason, be it due to failure of the primary node,
+failure of the secondary node, or interruption of the replication
+link. Synchronization is efficient in the sense that DRBD does not
+synchronize modified blocks in the order they were originally written,
+but in linear order, which has the following consequences:
+
+* Synchronization is fast, since blocks in which several successive
+  write operations occurred are only synchronized once.
+
+* Synchronization is also associated with few disk seeks, as blocks
+  are synchronized according to the natural on-disk block layout.
+
+* During synchronization, the data set on the standby node is partly
+  obsolete and partly already updated. This state of data is called
+  _inconsistent_.
+
+The service continues to run uninterrupted on the active node, while
+background synchronization is in progress.
+
+IMPORTANT: A node with inconsistent data generally cannot be put into
+operation; thus it is desirable to keep the time period during which a
+node is inconsistent as short as possible. DRBD does, however, ship
+with an LVM integration facility that automates the creation of LVM
+snapshots immediately before synchronization. This ensures that a
+_consistent_ copy of the data is always available on the peer, even
+while synchronization is running. See <> for details
+on using this facility.
+
+[[s-variable-rate-sync]]
+==== Variable-rate synchronization
+
+In variable-rate synchronization (the default), DRBD detects the
+available bandwidth on the synchronization network, compares it to
+incoming foreground application I/O, and selects an appropriate
+synchronization rate based on a fully automatic control loop.
+
+See <> for configuration suggestions with
+regard to variable-rate synchronization.
+
+[[s-fixed_rate_synchronization]]
+==== Fixed-rate synchronization
+
+In fixed-rate synchronization, the amount of data shipped to the
+synchronizing peer per second (the _synchronization rate_) has a
+configurable, static upper limit. Based on this limit, you may
+estimate the expected sync time using the following simple formula:
+
+[[eq-resync-time]]
+[equation]
+.Synchronization time
+image::images/resync-time.svg[]
+
+_t~sync~_ is the expected sync time. _D_ is the amount of data to be
+synchronized, which you are unlikely to have any influence over (this
+is the amount of data that was modified by your application while the
+replication link was broken). _R_ is the rate of synchronization,
+which is configurable -- bounded by the throughput limitations of the
+replication network and I/O subsystem. For example, with D = 180 GiB
+of out-of-sync data and a configured rate of R = 25 MiB/s, the
+expected sync time would be roughly two hours.
+
+See <> for configuration suggestions with
+regard to fixed-rate synchronization.
+
+[[s-checksum-sync]]
+==== Checksum-based synchronization
+
+[[p-checksum-sync]]
+The efficiency of DRBD's synchronization algorithm may be further
+enhanced by using data digests, also known as checksums. When using
+checksum-based synchronization, rather than performing a
+brute-force overwrite of blocks marked out of sync, DRBD _reads_
+blocks before synchronizing them and computes a hash of the contents
+currently found on disk. It then compares this hash with one computed
+from the same sector on the peer, and omits re-writing this block if
+the hashes match. This can dramatically cut down synchronization times
+in situations where a filesystem re-writes a sector with identical
+contents while DRBD is in disconnected mode.
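+
+As an illustration of the synchronization options discussed above, the
+following sketch shows how checksum-based synchronization and a fixed
+synchronization rate might be enabled for a resource. The algorithm and
+rate shown are example values only, not recommendations:
+
+-------------------------------------
+resource <resource> {
+  net {
+    # compare block digests before re-writing blocks during resync
+    csums-alg sha1;
+  }
+  disk {
+    # static upper limit for the synchronization rate (fixed-rate mode)
+    resync-rate 40M;
+  }
+  ...
+}
+-------------------------------------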
+ +See <> for configuration suggestions with +regard to synchronization. + + +[[s-suspended-replication]] +=== Suspended replication + +If properly configured, DRBD can detect if the +replication network is congested, and _suspend_ replication in this +case. In this mode, the primary node "pulls ahead" of the secondary -- +temporarily going out of sync, but still leaving a consistent copy on +the secondary. When more bandwidth becomes available, replication +automatically resumes and a background synchronization takes place. + +Suspended replication is typically enabled over links with variable +bandwidth, such as wide area replication over shared connections +between data centers or cloud instances. + +See <> for details on congestion +policies and suspended replication. + +[[s-online-verify]] +=== On-line device verification + +On-line device verification enables users to do a block-by-block data +integrity check between nodes in a very efficient manner. + +Note that _efficient_ refers to efficient use of network bandwidth +here, and to the fact that verification does not break redundancy in +any way. On-line verification is still a resource-intensive operation, +with a noticeable impact on CPU utilization and load average. + +It works by one node (the _verification source_) sequentially +calculating a cryptographic digest of every block stored on the +lower-level storage device of a particular resource. DRBD then +transmits that digest to the peer node (the _verification target_), +where it is checked against a digest of the local copy of the affected +block. If the digests do not match, the block is marked out-of-sync +and may later be synchronized. Because DRBD transmits just the +digests, not the full blocks, on-line verification uses network +bandwidth very efficiently. + +The process is termed _on-line_ verification because it does not +require that the DRBD resource being verified is unused at the time of +verification. Thus, though it does carry a slight performance penalty +while it is running, on-line verification does not cause service +interruption or system down time -- neither during the +verification run nor during subsequent synchronization. + +It is a common use case to have on-line verification managed by the +local cron daemon, running it, for example, once a week or once a +month. See <> for information on how to enable, +invoke, and automate on-line verification. + +[[s-integrity-check]] +=== Replication traffic integrity checking + +DRBD optionally performs end-to-end message integrity checking using +cryptographic message digest algorithms such as MD5, SHA-1 or CRC-32C. + +These message digest algorithms are not _provided_ by DRBD. The Linux +kernel crypto API provides these; DRBD merely uses them. Thus, DRBD is +capable of utilizing any message digest algorithm available in a +particular system's kernel configuration. + +With this feature enabled, DRBD generates a message digest of every +data block it replicates to the peer, which the peer then uses to +verify the integrity of the replication packet. If the replicated +block can not be verified against the digest, the peer requests +retransmission. 
+Thus, DRBD replication is protected against several
+error sources, all of which, if unchecked, would potentially lead to
+data corruption during the replication process:
+
+* Bitwise errors ("bit flips") occurring on data in transit between
+  main memory and the network interface on the sending node (which
+  go undetected by TCP checksumming if it is offloaded to the
+  network card, as is common in recent implementations);
+
+* bit flips occurring on data in transit from the network interface to
+  main memory on the receiving node (the same considerations apply for
+  TCP checksum offloading);
+
+* any form of corruption due to race conditions or bugs in network
+  interface firmware or drivers;
+
+* bit flips or random corruption injected by some reassembling network
+  component between nodes (if not using direct, back-to-back
+  connections).
+
+See <> for information on how to enable
+replication traffic integrity checking.
+
+[[s-split-brain-notification-and-recovery]]
+=== Split brain notification and automatic recovery
+
+Split brain is a situation where, due to temporary failure of all
+network links between cluster nodes, and possibly due to intervention
+by cluster management software or human error, both nodes switched
+to the primary role while disconnected. This is a potentially harmful
+state, as it implies that modifications to the data might have been
+made on either node, without having been replicated to the peer. Thus,
+it is likely in this situation that two diverging sets of data have
+been created, which cannot be trivially merged.
+
+DRBD split brain is distinct from cluster split brain, which is the
+loss of all connectivity between hosts managed by a distributed
+cluster management application such as Heartbeat. To avoid confusion,
+this guide uses the following convention:
+
+* _Split brain_ refers to DRBD split brain as described in the
+  paragraph above.
+
+* Loss of all cluster connectivity is referred to as a _cluster
+  partition_, an alternative term for cluster split brain.
+
+DRBD allows for automatic operator notification (by email or other
+means) when it detects split brain. See <>
+for details on how to configure this feature.
+
+While the recommended course of action in this scenario is to
+<> the split brain and then
+eliminate its root cause, it may be desirable, in some cases, to
+automate the process. DRBD has several resolution algorithms available
+for doing so:
+
+* *Discarding modifications made on the younger primary.* In this
+  mode, when the network connection is re-established and split brain
+  is discovered, DRBD will discard modifications made, in the
+  meantime, on the node which switched to the primary role _last_.
+
+* *Discarding modifications made on the older primary.* In this mode,
+  DRBD will discard modifications made, in the meantime, on the node
+  which switched to the primary role _first_.
+
+* *Discarding modifications on the primary with fewer changes.* In
+  this mode, DRBD will check which of the two nodes has recorded fewer
+  modifications, and will then discard _all_ modifications made on
+  that host.
+
+* *Graceful recovery from split brain if one host has had no
+  intermediate changes.* In this mode, if one of the hosts has made no
+  modifications at all during split brain, DRBD will simply recover
+  gracefully and declare the split brain resolved. Note that this is a
+  fairly unlikely scenario. Even if both hosts only mounted the file
+  system on the DRBD block device (even read-only), the device
+  contents would be modified, ruling out the possibility of automatic
+  recovery.
+
+Whether or not automatic split brain recovery is acceptable depends
+largely on the individual application. Consider the example of DRBD
+hosting a database. The "discard modifications from host with fewer
+changes" approach may be fine for a web application click-through
+database. By contrast, it may be totally unacceptable to automatically
+discard _any_ modifications made to a financial database, requiring
+manual recovery in any split brain event. Consider your application's
+requirements carefully before enabling automatic split brain recovery.
+
+Refer to <> for
+details on configuring DRBD's automatic split brain recovery policies.
+
+[[s-disk-flush-support]]
+=== Support for disk flushes
+
+When local block devices such as hard drives or RAID logical disks
+have write caching enabled, writes to these devices are considered
+completed as soon as they have reached the volatile cache. Controller
+manufacturers typically refer to this as write-back mode, the opposite
+being write-through. If a power outage occurs on a controller in
+write-back mode, the last writes are never
+committed to the disk, potentially causing data loss.
+
+To counteract this, DRBD makes use of disk flushes. A disk flush is a
+write operation that completes only when the associated data has been
+committed to stable (non-volatile) storage -- that is to say, it has
+effectively been written to disk, rather than to the cache. DRBD uses
+disk flushes for write operations both to its replicated data set and
+to its meta data. In effect, DRBD circumvents the write cache in
+situations it deems necessary, as in <>
+updates or enforcement of implicit write-after-write
+dependencies. This means additional reliability even in the face of
+power failure.
+
+It is important to understand that DRBD can use disk flushes only when
+layered on top of backing devices that support them. Most reasonably
+recent kernels support disk flushes for most SCSI and SATA
+devices. Linux software RAID (md) supports disk flushes for RAID-1,
+provided that all component devices support them too. The same is true for
+device-mapper devices (LVM2, dm-raid, multipath).
+
+Controllers with battery-backed write cache (BBWC) use a battery to
+back up their volatile storage. On such devices, when power is
+restored after an outage, the controller flushes all pending writes out
+to disk from the battery-backed cache, ensuring that all
+writes committed to the volatile cache are actually transferred to
+stable storage. When running DRBD on top of such devices, it may be
+acceptable to disable disk flushes, thereby improving DRBD's write
+performance. See <> for details.
+
+[[s-handling-disk-errors]]
+=== Disk error handling strategies
+
+If a hard drive that is used as a backing block device for DRBD fails on one
+of the nodes, DRBD may either pass on the I/O error to the upper
+layer (usually the file system) or it can mask I/O errors from upper
+layers.
+
+[[fp-io-error-pass-on]]
+.Passing on I/O errors
+If DRBD is configured to pass on I/O errors, any such errors occurring
+on the lower-level device are transparently passed to upper I/O
+layers. Thus, it is left to upper layers to deal with such errors
+(this may result in a file system being remounted read-only, for
+example).
+This strategy does not ensure service continuity, and is
+hence not recommended for most users.
+
+[[fp-io-error-detach]]
+.Masking I/O errors
+If DRBD is configured to _detach_ on lower-level I/O error, DRBD will
+do so, automatically, upon occurrence of the first lower-level I/O
+error. The I/O error is masked from upper layers while DRBD
+transparently fetches the affected block from the peer node, over the
+network. From then onwards, DRBD is said to operate in diskless mode,
+and carries out all subsequent I/O operations, read and write, on the
+peer node. Performance in this mode will be reduced,
+but the service continues without interruption, and can be moved to
+the peer node in a deliberate fashion at a convenient time.
+
+See <> for information on configuring
+I/O error handling strategies for DRBD.
+
+[[s-outdate]]
+=== Strategies for dealing with outdated data
+
+DRBD distinguishes between _inconsistent_ and _outdated_
+data. Inconsistent data is data that cannot be expected to be
+accessible and useful in any manner. The prime example for this is
+data on a node that is currently the target of an on-going
+synchronization. Data on such a node is part obsolete, part up to
+date, and impossible to identify as either. Thus, for example, if the
+device holds a filesystem (as is commonly the case), that filesystem
+could not be expected to mount, or even to pass an automatic filesystem
+check.
+
+Outdated data, by contrast, is data on a secondary node that is
+consistent, but no longer in sync with the primary node. This would
+occur in any interruption of the replication link, whether temporary
+or permanent. Data on an outdated, disconnected secondary node is
+expected to be clean, but it reflects a state of the peer node some
+time past. In order to avoid services using outdated data, DRBD
+disallows <> that
+is in the outdated state.
+
+DRBD has interfaces that allow an external application to outdate a
+secondary node as soon as a network interruption occurs. DRBD will
+then refuse to switch the node to the primary role, preventing
+applications from using the outdated data. A complete implementation
+of this functionality exists for the <<ch-pacemaker,Pacemaker cluster
+management framework>> (where it uses a communication channel separate
+from the DRBD replication link). However, the interfaces are generic
+and may be easily used by any other cluster management application.
+
+Whenever an outdated resource has its replication link re-established,
+its outdated flag is automatically cleared. A
+<<s-resync,background synchronization>> then follows.
+
+See the section about <> for an example DRBD/Heartbeat/Pacemaker configuration
+enabling protection against inadvertent use of outdated data.
+
+[[s-three-way-repl]]
+=== Three-way replication
+
+NOTE: Available in DRBD version 8.3.0 and above
+
+When using three-way replication, DRBD adds a third node to an
+existing 2-node cluster and replicates data to that node, where it can
+be used for backup and disaster recovery purposes. This type of
+configuration generally involves <<s-drbd-proxy,DRBD Proxy>>.
+
+Three-way replication works by adding another, _stacked_ DRBD resource
+on top of the existing resource holding your production data, as seen
+in this illustration:
+
+.DRBD resource stacking
+image::images/drbd-resource-stacking.svg[]
+
+The stacked resource is replicated using asynchronous replication
+(DRBD protocol A), whereas the production data would usually make use
+of synchronous replication (DRBD protocol C).
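+
+To make this more concrete, a stacked resource definition might look
+like the following sketch. It assumes an existing lower-level resource
+`r0` replicated between 'alice' and 'bob'; the resource name `r0-U`,
+the backup node 'charlie', and all devices and addresses are
+illustrative only:
+
+-------------------------------------
+resource r0-U {
+  net {
+    # the long-distance leg is usually asynchronous
+    protocol A;
+  }
+  stacked-on-top-of r0 {
+    device    /dev/drbd10;
+    address   192.168.42.1:7789;
+  }
+  on charlie {
+    device    /dev/drbd10;
+    disk      /dev/sda6;
+    address   192.168.42.2:7789;
+    meta-disk internal;
+  }
+}
+-------------------------------------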
+
+Three-way replication can be used permanently, where the third node is
+continuously updated with data from the production
+cluster. Alternatively, it may also be employed on demand, where the
+production cluster is normally disconnected from the backup site, and
+site-to-site synchronization is performed on a regular basis, for
+example by running a nightly cron job.
+
+[[s-drbd-proxy]]
+=== Long-distance replication with DRBD Proxy
+
+NOTE: DRBD Proxy requires DRBD version 8.2.7 or above.
+
+DRBD's <<fp-protocol-a,protocol A>> is asynchronous, but the
+writing application will block as soon as the socket output buffer is
+full (see the `sndbuf-size` option in the man page of `drbd.conf`). In that
+event, the writing application has to wait until some of the data written
+runs off through a possibly small bandwidth network link.
+
+The average write bandwidth is limited by the available bandwidth of the
+network link. Write bursts can only be handled gracefully if they fit
+into the limited socket output buffer.
+
+You can mitigate this by using DRBD Proxy's buffering mechanism. DRBD Proxy
+will place changed data from the DRBD device on the primary node into
+its buffers. DRBD Proxy's buffer size is freely configurable, limited
+only by the address space size and available physical RAM.
+
+Optionally, DRBD Proxy can be configured to compress and decompress the
+data it forwards. Compression and decompression of DRBD's data packets
+might slightly increase latency. However, when the bandwidth of the network
+link is the limiting factor, the gain in shortening transmit time
+outweighs the compression and decompression overhead.
+
+Compression and decompression were implemented with multi core SMP
+systems in mind, and can utilize multiple CPU cores.
+
+Most block I/O data compresses very well, and the resulting increase in
+effective bandwidth justifies the use of DRBD Proxy even with DRBD
+protocols B and C.
+
+See <> for information on configuring DRBD Proxy.
+
+NOTE: DRBD Proxy is the only part of the DRBD product family that is
+not published under an open source license. Please contact
+sales@linbit.com or sales_us@linbit.com for an evaluation license.
+
+[[s-truck-based-replication]]
+=== Truck based replication
+
+Truck based replication, also known as disk shipping, is a means of
+preseeding a remote site with data to be replicated, by physically
+shipping storage media to the remote site. This is particularly suited
+for situations where
+
+* the total amount of data to be replicated is fairly
+  large (more than a few hundreds of gigabytes);
+
+* the expected rate of change of the data to be replicated is less
+  than enormous;
+
+* the available network bandwidth between sites is
+  limited.
+
+In such situations, without truck based replication, DRBD would
+require a very long initial device synchronization (on the order of
+days or weeks). Truck based replication allows us to ship a data seed
+to the remote site, and drastically reduce the initial synchronization
+time. See <<s-using-truck-based-replication>> for details on this use
+case.
+
+[[s-floating-peers]]
+=== Floating peers
+
+NOTE: This feature is available in DRBD versions 8.3.2 and above.
+
+A somewhat special use case for DRBD is the _floating peers_
+configuration. In floating peer setups, DRBD peers are not tied to
+specific named hosts (as in conventional configurations), but instead
+have the ability to float between several hosts. In such a
+configuration, DRBD identifies peers by IP address, rather than by
+host name.
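+
+A floating peers setup might be sketched as follows. Instead of
+`on <host>` sections, the resource uses `floating` sections keyed by the
+replication address; the addresses and devices shown are illustrative
+only:
+
+-------------------------------------
+resource r0 {
+  floating 10.1.1.31:7789 {
+    device    /dev/drbd0;
+    disk      /dev/sda1;
+    meta-disk internal;
+  }
+  floating 10.1.1.32:7789 {
+    device    /dev/drbd0;
+    disk      /dev/sda1;
+    meta-disk internal;
+  }
+}
+-------------------------------------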
+
+For more information about managing floating peer configurations, see
+<>.
diff -Nru drbd-doc-8.4~20151102/UG8.4/en/fundamentals.adoc drbd-doc-8.4~20220106/UG8.4/en/fundamentals.adoc
--- drbd-doc-8.4~20151102/UG8.4/en/fundamentals.adoc 1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/UG8.4/en/fundamentals.adoc 2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,153 @@
+[[ch-fundamentals]]
+== DRBD Fundamentals
+
+The Distributed Replicated Block Device (DRBD) is a software-based,
+shared-nothing, replicated storage solution mirroring the content of
+block devices (hard disks, partitions, logical volumes etc.) between
+hosts.
+
+DRBD mirrors data
+
+* *in real time*. Replication occurs continuously while applications
+  modify the data on the device.
+
+* *transparently*. Applications need not be aware that the data is stored on
+  multiple hosts.
+
+* *synchronously* or *asynchronously*. With synchronous mirroring, applications
+  are notified of write completions after the writes have been carried out on
+  all hosts. With asynchronous mirroring, applications are notified of write
+  completions when the writes have completed locally, which usually is before
+  they have propagated to the other hosts.
+
+
+[[s-kernel-module]]
+=== Kernel module
+
+DRBD's core functionality is implemented by way of a Linux kernel
+module. Specifically, DRBD constitutes a driver for a virtual block
+device, so DRBD is situated right near the bottom of a system's I/O
+stack. Because of this, DRBD is extremely flexible and versatile,
+which makes it a replication solution suitable for adding high
+availability to just about any application.
+
+DRBD is, by definition and as mandated by the Linux kernel
+architecture, agnostic of the layers above it. Thus, it is impossible
+for DRBD to miraculously add features to upper layers that these do
+not possess. For example, DRBD cannot auto-detect file system
+corruption or add active-active clustering capability to file systems
+like ext3 or XFS.
+
+[[f-drbd-linux-io-stack]]
+.DRBD's position within the Linux I/O stack
+image::images/drbd-in-kernel.svg[]
+
+[[s-userland]]
+=== User space administration tools ===
+
+DRBD comes with a set of administration tools which communicate with the
+kernel module in order to configure and administer DRBD resources.
+
+.`drbdadm`
+The high-level administration tool of the DRBD program suite. Obtains all DRBD
+configuration parameters from the configuration file `/etc/drbd.conf` and acts
+as a front-end for `drbdsetup` and `drbdmeta`. `drbdadm` has a _dry-run_ mode,
+invoked with the `-d` option, that shows which `drbdsetup` and `drbdmeta` calls
+`drbdadm` would issue without actually calling those commands.
+
+.`drbdsetup`
+Configures the DRBD module loaded into the kernel. All parameters to
+`drbdsetup` must be passed on the command line. The separation between
+`drbdadm` and `drbdsetup` allows for maximum flexibility. Most users will
+rarely need to use `drbdsetup` directly, if at all.
+
+.`drbdmeta`
+Allows you to create, dump, restore, and modify DRBD meta data structures. As
+with `drbdsetup`, most users will only rarely need to use `drbdmeta` directly.
+
+[[s-resources]]
+=== Resources ===
+
+In DRBD, _resource_ is the collective term that refers to all aspects of
+a particular replicated data set. These include:
+
+.Resource name
+This can be any arbitrary, US-ASCII name not containing whitespace by
+which the resource is referred to.
+
+.Volumes
+Any resource is a replication group consisting of one or more
+_volumes_ that share a common replication stream. DRBD ensures write
+fidelity across all volumes in the resource. Volumes are numbered
+starting with `0`, and there may be up to 65,535 volumes in one
+resource. A volume contains the replicated data set, and a set of
+metadata for DRBD internal use.
+
+At the `drbdadm` level, a volume within a resource can be addressed by the
+resource name and volume number as `<resource>/<volume>`.
+
+// At the `drbdsetup` level, a volume is addressed by its device minor number.
+// At the `drbdmeta` level, a volume is addressed by the name of the underlying
+// device.
+
+// FIXME: Users don't care which major device number is assigned to DRBD.
+// Likewise, they don't care about minor device numbers if they don't have to.
+// We refer to device as /dev/drbdX almost everywhere, so do we have to mention
+// minors here at all?
+
+.DRBD device
+This is a virtual block device managed by DRBD. It has a device major
+number of 147, and its minor numbers are numbered from 0 onwards, as
+is customary. Each DRBD device corresponds to a volume in a
+resource. The associated block device is usually named
+`/dev/drbdX`, where `X` is the device minor number. DRBD also allows
+for user-defined block device names which must, however, start with
+`drbd_`.
+
+NOTE: Very early DRBD versions hijacked NBD's device major number 43.
+This is long obsolete; 147 is the
+http://www.lanana.org/docs/device-list/[LANANA-registered] DRBD device
+major.
+
+.Connection
+A _connection_ is a communication link between two hosts that share a
+replicated data set. As of the time of this writing, each resource involves
+only two hosts and exactly one connection between these hosts, so for the most
+part, the terms `resource` and `connection` can be used interchangeably.
+
+At the `drbdadm` level, a connection is addressed by the resource name.
+
+// At the `drbdsetup` level, a connection is addressed by its two replication
+// endpoints identified by address family (optional), address (required), and
+// port (optional).
+
+[[s-resource-roles]]
+=== Resource roles ===
+
+In DRBD, every <<s-resources,resource>> has a role, which may be
+_Primary_ or _Secondary_.
+
+NOTE: The choice of terms here is not arbitrary. These roles were
+deliberately not named "Active" and "Passive" by DRBD's
+creators. Primary vs. secondary refers to a concept related to
+availability of _storage_, whereas active vs. passive refers to the
+availability of an _application_. It is usually the case in a
+high-availability environment that the primary node is also the active
+one, but this is by no means necessary.
+
+* A DRBD device in the primary role can be used unrestrictedly for
+  read and write operations. It may be used for creating and mounting
+  file systems, raw or direct I/O to the block device, etc.
+
+* A DRBD device in the secondary role receives all updates from the
+  peer node's device, but otherwise disallows access completely. It
+  cannot be used by applications, either for read or for write
+  access. The reason for disallowing even read-only access to the
+  device is the necessity to maintain cache coherency, which would be
+  impossible if a secondary resource were made accessible in any way.
+
+The resource's role can, of course, be changed, either by
+<> or by way of some
+automated algorithm by a cluster management application.
+Changing the resource role from secondary to primary is referred to as
+_promotion_, whereas the reverse operation is termed _demotion_.
diff -Nru drbd-doc-8.4~20151102/UG8.4/en/gfs.adoc drbd-doc-8.4~20220106/UG8.4/en/gfs.adoc
--- drbd-doc-8.4~20151102/UG8.4/en/gfs.adoc 1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/UG8.4/en/gfs.adoc 2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,270 @@
+[[ch-gfs]]
+== Using GFS2 with DRBD
+
+indexterm:[GFS]indexterm:[Global File System]This chapter outlines, in
+a nutshell, the steps necessary to set up a DRBD resource as a block
+device holding a shared Global File System (GFS) version 2.
+
+For a more detailed how-to, please consult our tech-guide on
+http://www.linbit.com/en/downloads/tech-guides[GFS in dual-primary setups].
+
+[WARNING]
+===============================
+This guide describes a dual-primary setup with DRBD. Dual-primary setups *can easily destroy data* if not configured properly!
+
+
+Please always read our tech-guide
+http://www.linbit.com/en/downloads/tech-guides?download=15:dual-primary-think-twice["Dual primary: think twice"],
+in advance, if you are planning to configure a DRBD dual-primary resource.
+
+
+If anything within this document is unclear to you, you may want to consult
+with the friendly experts at LINBIT beforehand.
+===============================
+
+[[s-gfs-primer]]
+=== GFS primer
+
+The Red Hat Global File System (GFS) is Red Hat's implementation of a
+concurrent-access shared storage file system. Like any such filesystem,
+GFS allows multiple nodes to access the same storage device, in
+read/write fashion, simultaneously without risking data corruption. It
+does so by using a Distributed Lock Manager (DLM) which manages
+concurrent access from cluster members.
+
+GFS was designed, from the outset, for use with conventional shared
+storage devices. Regardless, it is perfectly possible to use DRBD, in
+dual-primary mode, as a replicated storage device for
+GFS. Applications may benefit from reduced read/write latency due to
+the fact that DRBD normally reads from and writes to local storage, as
+opposed to the SAN devices GFS is normally configured to run
+from. Also, of course, DRBD adds an additional physical copy to every
+GFS filesystem, thus adding redundancy to the concept.
+
+GFS file systems are usually tightly integrated with Red Hat's own
+cluster management framework, the indexterm:[Red Hat Cluster
+Suite]<>. This chapter explains
+the use of DRBD in conjunction with GFS in the Red Hat Cluster context.
+Additionally, the connection to the Pacemaker cluster manager is
+explained; Pacemaker will take care of resource management and STONITH.
+
+GFS, Pacemaker and Red Hat Cluster are available in Red Hat
+Enterprise Linux (RHEL) and distributions derived from it, such as
+indexterm:[CentOS]CentOS. Packages built from the same sources are
+also available in indexterm:[Debian GNU/Linux]Debian GNU/Linux. This
+chapter assumes running GFS on a Red Hat Enterprise Linux system.
+
+[[s-gfs-create-resource]]
+=== Creating a DRBD resource suitable for GFS2
+
+Since GFS is a shared cluster file system expecting concurrent
+read/write storage access from all cluster nodes, any DRBD resource to
+be used for storing a GFS filesystem must be configured in
+<<s-dual-primary-mode,dual-primary mode>>. Also, it is recommended to
+use some of DRBD's
+<<s-split-brain-notification-and-recovery,features for automatic split
+brain recovery>>. Promoting the resource on both nodes and starting the
+GFS filesystem will be handled by Pacemaker.
+
+To prepare your DRBD resource, include the following lines in the resource
+configuration: indexterm:[drbd.conf]
+
+[source,drbd]
+----------------------------
+resource <resource> {
+  net {
+    allow-two-primaries;
+    after-sb-0pri discard-zero-changes;
+    after-sb-1pri discard-secondary;
+    after-sb-2pri disconnect;
+    ...
+  }
+  ...
+}
+----------------------------
+
+[WARNING]
+===============================
+By configuring auto-recovery policies, you are effectively configuring automatic data loss! Be sure you understand the implications.
+===============================
+
+
+Once you have added these options to
+<<s-drbdconf-resource,your freshly-configured resource>>, you may
+<<s-first-time-up,enable your resource>>. Since the
+indexterm:[drbd.conf]`allow-two-primaries` option is enabled for
+this resource, you will be able to <> to the primary role on both nodes.
+
+[IMPORTANT]
+===============================
+*Again*: Be sure to configure fencing/STONITH and to test the setup extensively, covering all possible use cases, especially in dual-primary setups, *before* going into production.
+===============================
+
+[[s-enable_resource_fencing_for_dual_primary_resource]]
+==== Enable resource fencing for dual-primary resource
+
+In order to enable resource fencing in DRBD, you will need to add the
+following sections indexterm:[drbd.conf]
+
+[source,drbd]
+----------------------------
+  disk {
+    fencing resource-and-stonith;
+  }
+
+  handlers {
+    fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
+    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
+  }
+----------------------------
+
+to your DRBD configuration. These scripts should come with your DRBD installation.
+
+[WARNING]
+===============================
+Don't be misled by the brevity of the section <> in the DRBD
+User's Guide - with all dual-primary setups you must have fencing in your
+cluster. See chapters
+http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Cluster_Administration/s1-config-fence-devices-ccs-CA.html[5.5. Configuring Fence Devices]
+and http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Cluster_Administration/s1-config-member-ccs-CA.html[5.6. Configuring Fencing for Cluster Members]
+in the Red Hat Cluster documentation for more details.
+===============================
+
+[[s-gfs-configure-cman]]
+=== Configuring CMAN
+
+GFS needs `cman`, the Red Hat cluster manager, to work. Since `cman` is not
+particularly flexible or easy to configure, we will put `pacemaker` on top of
+it in the next steps.
+
+[NOTE]
+===============================
+If you don't want to use `pacemaker`, please consult the corresponding
+https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Cluster_Administration/ch-config-cli-CA.html[manuals]
+for `cman`.
+===============================
+
+Before we start making a GFS filesystem, we will configure `cman`.
+
+[NOTE]
+===============================
+If you are configuring a two-node cluster, you cannot expect it to have a
+quorum. You will need to tell `cman` to ignore this, which is done by setting:
+
+ # sed -i.orig "s/.*CMAN_QUORUM_TIMEOUT=.*/CMAN_QUORUM_TIMEOUT=0/g" /etc/sysconfig/cman
+
+===============================
+
+Next create a `cman` cluster configuration in `/etc/cluster/cluster.conf`:
+
+[source,xml]
+----------------------------
+<?xml version="1.0"?>
+<cluster name="my-cluster" config_version="1">
+  <logging debug="off"/>
+  <clusternodes>
+    <clusternode name="gfs-machine1" nodeid="1">
+      <fence>
+        <method name="pcmk-redirect">
+          <device name="pcmk" port="gfs-machine1"/>
+        </method>
+      </fence>
+    </clusternode>
+    <clusternode name="gfs-machine2" nodeid="2">
+      <fence>
+        <method name="pcmk-redirect">
+          <device name="pcmk" port="gfs-machine2"/>
+        </method>
+      </fence>
+    </clusternode>
+  </clusternodes>
+  <fencedevices>
+    <fencedevice name="pcmk" agent="fence_pcmk"/>
+  </fencedevices>
+</cluster>
+----------------------------
+
+This tells `cman` that the cluster name is `my-cluster`, the cluster node
+names are `gfs-machine1` and `gfs-machine2`, and that fencing will be done by
+`pacemaker`.
+
+After you have created the configuration, start `cman`.
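+
+For example, on a RHEL 6 style system this would be done with the `cman`
+init script; the validation step beforehand is optional but cheap (a
+sketch, assuming the stock Red Hat cluster tooling is installed):
+
+----------------------------
+# ccs_config_validate
+Configuration validates
+# service cman start
+----------------------------
+
+Remember to repeat this on both cluster nodes.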
+
+[[s-gfs-create]]
+=== Creating a GFS2 filesystem
+
+In order to create a GFS filesystem on your dual-primary DRBD
+resource, issue this command on (only) *one* (!) node (which must be _Primary_):
+
+indexterm:[GFS]
+----------------------------
+mkfs -t gfs2 -p lock_dlm -j 2 -t <cluster>:<name> /dev/<drbd-device>
+----------------------------
+
+The `-j` option in this command refers to the number of journals to
+keep for GFS. This must be identical to the number of nodes in the GFS
+cluster; since DRBD does not support more than two nodes, the value to
+set here is always 2.
+
+[TIP]
+===============================
+With DRBD 9 it is possible to share the same disk among more than two nodes;
+if you want to do that, you'll either have to specify a higher number of
+journals or create the journals in the live file system.
+===============================
+
+The `-t` option defines the lock
+table name. This follows the format _<cluster>_:_<name>_, where _<cluster>_
+must match your cluster name as defined in
+`/etc/cluster/cluster.conf`. Thus, only members of that cluster will
+be permitted to use the filesystem. By contrast, _<name>_ is an
+arbitrary file system name unique in the cluster.
+
+// this is dangerous -> NO FENCING ENABLED//
+//[[s-gfs-use]]
+//=== Using your GFS2 filesystem without cluster manager
+//
+//After you have created your filesystem, you may add it to
+//+/etc/fstab+:
+//
+//[source,fstab]
+//----------------------------
+// /dev/<drbd-device> <mountpoint> gfs2 defaults 0 0
+//----------------------------
+//
+//Do not forget to make this change on both cluster nodes.
+//
+//After this, you may mount your new filesystem by starting the
+//+gfs+ service (on both nodes): indexterm:[GFS]
+//
+//----------------------------
+//service gfs start
+//----------------------------
+//
+//From then onwards, as long as you have DRBD configured to start
+//automatically on system startup, before the RHCS services and the
+//+gfs+ service, you will be able to use this GFS file system as you
+//would use one that is configured on traditional shared storage.
+
+[[s-gfs-with-pacemaker]]
+=== Using your GFS2 filesystem with Pacemaker
+
+If you want to use Pacemaker as the cluster resource manager, you will have
+to set up your current configuration in Pacemaker and tell it to manage your
+resources.
+
+[IMPORTANT]
+===============================
+Make sure to configure Pacemaker also to take care of all the fencing/STONITH actions
+(see our tech-guide on https://www.linbit.com/en/resources/technical-publications/[GFS in dual-primary setups]
+for further details).
+===============================
+
+For the Pacemaker configuration, make a setup as described in
+<<ch-pacemaker>>.
+
+Since it is a dual-primary setup, consider the following changes to the
+Master-Slave set:
+
+----------------------------
+crm(live)configure# ms ms_drbd_xyz drbd_xyz \
+                    meta master-max="2" master-node-max="1" \
+                         clone-max="2" clone-node-max="1" \
+                         notify="true"
+----------------------------
+
+Notice that `master-max` is set to *2*, which will cause the DRBD resource
+to be promoted on both cluster nodes.
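+
+The clone added in the next step assumes that a `Filesystem` primitive for
+the GFS2 mount already exists. If it does not, a sketch of such a primitive
+might look like this (the resource names, device and mount point are
+illustrative only):
+
+----------------------------
+crm(live)configure# primitive p_fs_xyz ocf:heartbeat:Filesystem \
+                    params device="/dev/drbd/by-res/xyz" \
+                           directory="/mnt/xyz" fstype="gfs2" \
+                    op monitor interval="10s"
+----------------------------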
+
+Furthermore, we want the GFS filesystem also to be started on both nodes, so
+we simply add a clone of the filesystem primitive:
+
+----------------------------
+crm(live)configure# clone cl_fs_xyz p_fs_xyz meta interleave="true"
+----------------------------
diff -Nru drbd-doc-8.4~20151102/UG8.4/en/heartbeat.adoc drbd-doc-8.4~20220106/UG8.4/en/heartbeat.adoc
--- drbd-doc-8.4~20151102/UG8.4/en/heartbeat.adoc 1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/UG8.4/en/heartbeat.adoc 2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,1013 @@
+[[ch-heartbeat]]
+== Integrating DRBD with Heartbeat clusters
+
+indexterm:[Heartbeat]
+
+IMPORTANT: This chapter talks about DRBD in combination with the
+legacy Linux-HA cluster manager found in Heartbeat 2.0 and 2.1. That
+cluster manager has been superseded by Pacemaker and the latter should
+be used whenever possible — please see <<ch-pacemaker>> for more
+information. This chapter outlines legacy Heartbeat configurations and
+is intended for users who must maintain existing legacy Heartbeat
+systems for policy reasons.
+
+The Heartbeat _cluster messaging layer_, a distinct part of the
+Linux-HA project that continues to be supported as of Heartbeat
+version 3, is fine to use in conjunction with the Pacemaker cluster
+manager. More information about configuring Heartbeat can be found as
+part of the Linux-HA User's Guide at http://www.linux-ha.org/doc/[].
+
+
+
+[[s-heartbeat-primer]]
+=== Heartbeat primer
+
+[[s-heartbeat-cluster-manager]]
+==== The Heartbeat cluster manager
+
+indexterm:[Heartbeat]Heartbeat's purpose as a cluster manager is to
+ensure that the cluster maintains its services to the clients, even if
+single machines of the cluster fail. Applications that may be managed
+by Heartbeat as cluster services include, for example,
+
+* a web server such as Apache,
+* a database server such as MySQL, Oracle, or PostgreSQL,
+* a file server such as NFS or Samba, and many others.
+
+In essence, any server application may be managed by Heartbeat as a
+cluster service.
+
+Services managed by Heartbeat are typically removed from the system
+startup configuration; rather than being started at boot time, the
+cluster manager starts and stops them as required by the cluster
+configuration and status. If a machine (a physical cluster node) fails
+while running a particular set of services, Heartbeat will start the
+failed services on another machine in the cluster. These operations
+performed by Heartbeat are commonly referred to as (automatic)
+indexterm:[fail-over]_fail-over_.
+
+A migration of cluster services from one cluster node to another, by
+manual intervention, is commonly termed "manual fail-over". This being
+a slightly self-contradictory term, we use the alternative term
+indexterm:[switch-over]indexterm:[fail-over]_switch-over_ for the
+purposes of this guide.
+
+Heartbeat is also capable of automatically migrating resources back to
+a previously failed node, as soon as the latter recovers. This process
+is called indexterm:[fail-back]_fail-back_.
+
+[[s-heartbeat-resources]]
+==== Heartbeat resources
+
+indexterm:[Heartbeat]indexterm:[resource (Heartbeat)]Usually, there
+will be certain requirements in order to be able to start a cluster
+service managed by Heartbeat on a node. Consider the example of a
+typical database-driven web application:
+
+* Both the web server and the database server assume that their
+  designated _IP addresses_ are available (i.e. configured) on the
+  node.
+* The database will require a _file system_ to retrieve data files
+  from.
+* That file system will require its underlying _block device_ to read
+  from and write to (this is where DRBD comes in, as we will see
+  later).
+* The web server will also depend on the database being started,
+  assuming it cannot serve dynamic content without an available
+  database.
+
+The services Heartbeat controls, and any additional requirements those
+services depend on, are referred to as _resources_ in Heartbeat
+terminology. Where resources form a co-dependent collection, that
+collection is called a _resource group_.
+
+[[s-resource-agents]]
+==== Heartbeat resource agents
+
+indexterm:[Heartbeat]indexterm:[resource agent (Heartbeat)]Heartbeat
+manages resources by way of invoking standardized shell scripts known
+as _resource agents_ (RAs). In Heartbeat clusters, the following
+resource agent types are available:
+
+[[fp-heartbeat-ra]]
+.Heartbeat resource agents
+These agents are found in the `/etc/ha.d/resource.d` directory. They
+may take zero or more positional, unnamed parameters, and one
+operation argument (`start`, `stop`, or `status`). Heartbeat
+translates resource parameters it finds for a matching resource in
+`/etc/ha.d/haresources` into positional parameters for the RA, which
+then uses these to configure the resource.
+
+[[fp-lsb-ra]]
+.LSB resource agents
+These are conventional, Linux Standard Base-compliant init scripts
+found in `/etc/init.d`, which Heartbeat simply invokes with the
+`start`, `stop`, or `status` argument. They take no positional
+parameters. Thus, the corresponding resources' configuration cannot be
+managed by Heartbeat; these services are expected to be configured by
+conventional configuration files.
+
+[[fp-ocf-ra]]
+.OCF resource agents
+These are resource agents that conform to the guidelines of the Open
+Cluster Framework, and they _only_ work with clusters in CRM mode. They
+are usually found in either `/usr/lib/ocf/resource.d/heartbeat` or
+`/usr/lib64/ocf/resource.d/heartbeat`, depending on system
+architecture and distribution. They take no positional parameters, but
+may be extensively configured via environment variables that the
+cluster management process derives from the cluster configuration, and
+passes in to the resource agent upon invocation.
+
+
+[[s-heartbeat-communication-channels]]
+==== Heartbeat communication channels
+
+indexterm:[Heartbeat]indexterm:[communication channels
+(Heartbeat)]Heartbeat uses a UDP-based communication protocol to
+periodically check for node availability (the "heartbeat" proper). For
+this purpose, Heartbeat can use several communication methods,
+including:
+
+* IP multicast,
+* IP broadcast,
+* IP unicast,
+* serial line.
+
+Of these, IP multicast and IP broadcast are the most relevant in
+practice. The absolute minimum requirement for stable cluster
+operation is two independent communication channels.
+
+IMPORTANT: A bonded network interface (a virtual aggregation of
+physical interfaces using the indexterm:[bonding
+driver]`bonding` driver) constitutes _one_ Heartbeat communication
+channel.
+
+Bonded links are not protected against bugs, known or as-yet-unknown,
+in the `bonding` driver. Also, bonded links are typically formed using
+identical network interface models, thus they are vulnerable to bugs
+in the NIC driver as well. Any such issue could lead to a cluster
+partition if no independent second Heartbeat communication channel
+were available.
+
+It is thus _not_ acceptable to omit the inclusion of a second
+Heartbeat link in the cluster configuration just because the first
+uses a bonded interface.
+
+
+[[s-heartbeat-config]]
+=== Heartbeat configuration
+
+indexterm:[Heartbeat]For any Heartbeat cluster, the following
+configuration files must be available:
+
+* indexterm:[ha.cf (Heartbeat configuration file)]`/etc/ha.d/ha.cf` --
+  global cluster configuration.
+
+* indexterm:[authkeys (Heartbeat configuration
+  file)]`/etc/ha.d/authkeys` -- keys for mutual node authentication.
+
+Depending on whether Heartbeat is running in R1-compatible or in CRM
+mode, additional configuration files are required. These are covered
+in <<s-heartbeat-r1>> and <>.
+
+[[s-heartbeat-hacf]]
+==== The `ha.cf` file
+
+indexterm:[ha.cf (Heartbeat configuration file)]The following example
+is a small and simple `ha.cf` file:
+
+[source,drbd]
+----------------------------
+autojoin none
+mcast bond0 239.0.0.43 694 1 0
+bcast eth2
+warntime 5
+deadtime 15
+initdead 60
+keepalive 2
+node alice
+node bob
+----------------------------
+
+Setting `autojoin` to `none` disables cluster node auto-discovery and
+requires that cluster nodes be listed explicitly, using the
+`node` options. This speeds up cluster start-up in clusters with a
+fixed number of nodes (which is always the case in R1-style Heartbeat
+clusters).
+
+This example assumes that `bond0` is the cluster's interface to the
+shared network, and that `eth2` is the interface dedicated for DRBD
+replication between both nodes. Thus, `bond0` can be used for
+multicast heartbeat, whereas on `eth2` broadcast is acceptable as
+`eth2` is not a shared network.
+
+The next options configure node failure detection. They set the time
+after which Heartbeat issues a warning that a no longer available peer
+node _may_ be dead (`warntime`), the time after which Heartbeat
+considers a node _confirmed_ dead (`deadtime`), and the maximum time
+it waits for other nodes to check in at cluster startup
+(`initdead`). `keepalive` sets the interval at which Heartbeat
+keep-alive packets are sent. All these options are given in seconds.
+
+The `node` option identifies cluster members. The option values listed
+here must match the exact host names of cluster nodes as given by
+`uname -n`.
+
+Not adding a `crm` option implies that the cluster is operating in
+<<s-heartbeat-r1,R1-compatible mode>> with CRM disabled. If `crm yes`
+were included in the configuration, Heartbeat would be running in
+<>.
+
+[[s-heartbeat-authkeys]]
+==== The `authkeys` file
+
+indexterm:[authkeys (Heartbeat configuration
+file)]`/etc/ha.d/authkeys` contains pre-shared secrets used for mutual
+cluster node authentication. It should only be readable by `root` and
+follows this format:
+
+[source,drbd]
+----------------------------
+auth <index>
+<index> <algorithm> <secret>
+----------------------------
+
+
+_<index>_ is a simple key index, starting with 1. Usually, you will only
+have one key in your `authkeys` file.
+
+_<algorithm>_ is the signature algorithm being used. You may use either
+`md5` or `sha1`; the use of `crc` (a simple cyclic redundancy check,
+not secure) is not recommended.
+
+_<secret>_ is the actual authentication key.
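+
+A complete `authkeys` file might therefore look like this (the key shown
+is, of course, just an illustrative placeholder, not a secret to reuse):
+
+[source,drbd]
+----------------------------
+auth 1
+1 sha1 9b2aeb9b95cd508a10fa7a4f06173eb0
+----------------------------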
+
+You may create an `authkeys` file, using a generated secret, with the
+following shell hack:
+
+[source,bash]
+----------------------------
+( echo -ne "auth 1\n1 sha1 "; \
+  dd if=/dev/urandom bs=512 count=1 | openssl md5 ) \
+  > /etc/ha.d/authkeys
+chmod 0600 /etc/ha.d/authkeys
+----------------------------
+
+[[s-heartbeat-ha-propagate]]
+==== Propagating the cluster configuration to cluster nodes
+
+In order to propagate the contents of the `ha.cf` and `authkeys`
+configuration files, you may use the `ha_propagate` command, which you
+would invoke using either
+
+----------------------------
+/usr/lib/heartbeat/ha_propagate
+----------------------------
+
+or
+
+----------------------------
+/usr/lib64/heartbeat/ha_propagate
+----------------------------
+
+
+This utility will copy the configuration files over to any `node`
+listed in `/etc/ha.d/ha.cf` using `scp`. It will afterwards also
+connect to the nodes using `ssh` and issue `chkconfig heartbeat on` in
+order to enable Heartbeat services on system startup.
+
+[[s-heartbeat-r1]]
+=== Using DRBD in Heartbeat R1-style clusters
+
+Running Heartbeat clusters in release 1 compatible configuration is
+now considered obsolete by the Linux-HA development team. However, it
+is still widely used in the field, which is why it is documented in
+this section.
+
+[[fp-heartbeat-r1-advantages]]
+.Advantages
+
+Configuring Heartbeat in R1 compatible mode has some advantages over
+using CRM configuration. In particular,
+
+* Heartbeat R1 compatible clusters are simple and easy to configure;
+* it is fairly straightforward to extend Heartbeat's functionality
+  with custom, R1-style resource agents.
+
+[[fp-heartbeat-r1-disadvantages]]
+.Disadvantages
+
+Disadvantages of R1 compatible configuration, as opposed to CRM
+configurations, include:
+
+* Cluster configuration must be manually kept in sync between cluster
+  nodes; it is not propagated automatically.
+* While node monitoring is available, resource-level monitoring is
+  not. Individual resources must be monitored by an external
+  monitoring system.
+* Resource group support is limited to two resource groups. CRM
+  clusters, by contrast, support any number, and also come with a
+  complex resource-level constraint framework.
+
+Another disadvantage, namely the fact that R1-style configuration
+limits cluster size to 2 nodes (whereas CRM clusters support up to
+255), is largely irrelevant for setups involving DRBD, DRBD itself
+being limited to two nodes.
+
+
+[[s-heartbeat-r1-config]]
+==== Heartbeat R1-style configuration
+
+In R1-style clusters, Heartbeat keeps its complete configuration in
+three simple configuration files:
+
+* `/etc/ha.d/ha.cf`, as described in <>.
+* `/etc/ha.d/authkeys`, as described in <>.
+* `/etc/ha.d/haresources` -- the resource configuration file,
+  described below.
+
+[[s-heartbeat-haresources]]
+===== The `haresources` file
+
+indexterm:[haresources (Heartbeat configuration file)]The following is
+an example of a Heartbeat R1-compatible resource configuration
+involving a MySQL database backed by DRBD:
+
+[source,drbd]
+----------------------------
+bob drbddisk::mysql Filesystem::/dev/drbd0::/var/lib/mysql::ext3 \
+    10.9.42.1 mysql
+----------------------------
+
+
+This resource configuration contains one resource group whose _home
+node_ (the node where its resources are expected to run under normal
+circumstances) is named 'bob'.
+Consequently, this resource group would be considered the _local_
+resource group on host 'bob', whereas it would be the _foreign_
+resource group on its peer host.
+
+The resource group includes a DRBD resource named `mysql`, which will
+be promoted to the primary role by the cluster manager (specifically,
+the `drbddisk` <>) on whichever node
+is currently the active node. Of course, a corresponding resource must
+exist and be configured in `/etc/drbd.conf` for this to work.
+
+That DRBD resource translates to the block device named `/dev/drbd0`,
+which contains an ext3 filesystem that is to be mounted at
+`/var/lib/mysql` (the default location for MySQL data files).
+
+The resource group also contains a service IP address,
+10.9.42.1. Heartbeat will make sure that this IP address is configured
+and available on whichever node is currently active.
+
+Finally, Heartbeat will use the <> named
+`mysql` in order to start the MySQL daemon, which will then find its
+data files at `/var/lib/mysql` and be able to listen on the service IP
+address, 10.9.42.1.
+
+It is important to understand that the resources listed in the
+`haresources` file are always evaluated from left to right when
+resources are being started, and from right to left when they are
+being stopped.
+
+[[s-heartbeat-stacked]]
+===== Stacked resources in Heartbeat R1-style configurations
+
+In <> with stacked resources,
+it is usually desirable to have the stacked resource managed by
+Heartbeat just as other cluster resources. Then, your two-node cluster
+will manage the stacked resource as a floating resource that runs on
+whichever node is currently the active one in the cluster. The third
+node, which is set aside from the Heartbeat cluster, will have the
+"other half" of the stacked resource available permanently.
+
+NOTE: To have a stacked resource managed by Heartbeat, you must first
+configure it as outlined in <>.
+
+The stacked resource is managed by Heartbeat by way of the `drbdupper`
+resource agent. That resource agent is distributed, like all other
+Heartbeat R1 resource agents, in `/etc/ha.d/resource.d`. It is to
+stacked resources what the `drbddisk` resource agent is to
+conventional, unstacked resources.
+
+`drbdupper` takes care of managing both the lower-level resource
+_and_ the stacked resource. Consider the following `haresources`
+example, which would replace the one given in the previous section:
+
+[source,drbd]
+----------------------------
+bob 10.9.42.1 \
+  drbdupper::mysql-U Filesystem::/dev/drbd1::/var/lib/mysql::ext3 \
+  mysql
+----------------------------
+
+Note the following differences to the earlier example:
+
+* You start the cluster IP address _before_ all other resources. This
+  is necessary because stacked resource replication uses a connection
+  from the cluster IP address to the node IP address of the third
+  node. Lower-level resource replication, by contrast, uses a
+  connection between the "physical" node IP addresses of the two
+  cluster nodes.
+
+* You pass the stacked resource name to `drbdupper` (in this example,
+  `mysql-U`).
+
+* You configure the `Filesystem` resource agent to mount the DRBD
+  device associated with the stacked resource (in this example,
+  `/dev/drbd1`), not the lower-level one.
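+For reference, a `drbd.conf` excerpt for such a stacked resource might
+look like the following sketch. It assumes the resource names, device,
+and service IP address from the example above; the third node's name
+('charlie'), its backing disk, its IP address, and the port number are
+made-up placeholders. See the section on stacked resource
+configuration for the authoritative syntax.
+
+[source,drbd]
+----------------------------
+resource mysql-U {
+  stacked-on-top-of mysql {
+    device    /dev/drbd1;
+    address   10.9.42.1:7789;    # the floating cluster IP address
+  }
+  on charlie {
+    device    /dev/drbd1;
+    disk      /dev/sdb1;         # hypothetical backing disk
+    address   10.9.42.3:7789;    # hypothetical third-node IP
+    meta-disk internal;
+  }
+}
+----------------------------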
+
+[[s-heartbeat-r1-manage]]
+==== Managing Heartbeat R1-style clusters
+
+[[s-heartbeat-r1-assume-resources]]
+===== Assuming control of cluster resources
+
+A Heartbeat R1-style cluster node may assume control of cluster
+resources in the following way:
+
+[[fp-heartbeat-r1-manual-resource-takeover]]
+.Manual resource takeover
+This is the approach normally taken if one simply wishes to test
+resource migration, or assume control of resources for any reason
+other than the peer having to leave the cluster. This operation is
+performed using the following command:
+
+----------------------------
+/usr/lib/heartbeat/hb_takeover
+----------------------------
+
+On some distributions and architectures, you may be required to enter:
+
+----------------------------
+/usr/lib64/heartbeat/hb_takeover
+----------------------------
+
+
+[[s-heartbeat-r1-relinquish-resources]]
+===== Relinquishing cluster resources
+
+A Heartbeat R1-style cluster node may be forced to give up its
+resources in several ways.
+
+.Switching a cluster node to standby mode
+This is the approach normally taken if one simply wishes to test
+resource migration, or perform some other activity that does not
+require the node to leave the cluster. This operation is performed
+using the following command:
+----------------------------
+/usr/lib/heartbeat/hb_standby
+----------------------------
+On some distributions and architectures, you may be required to enter:
+----------------------------
+/usr/lib64/heartbeat/hb_standby
+----------------------------
+
+[[fp-heartbeat-r1-shutdown-local-cluster-manager]]
+.Shutting down the local cluster manager instance
+
+This approach is suited for local maintenance operations such as
+software updates which require that the node be temporarily removed
+from the cluster, but which do not necessitate a system reboot. It
+involves shutting down all processes associated with the local cluster
+manager instance:
+----------------------------
+/etc/init.d/heartbeat stop
+----------------------------
+
+Prior to stopping its services, Heartbeat will gracefully migrate any
+currently running resources to the peer node. This is the approach to
+be followed, for example, if you are upgrading DRBD to a new release,
+without also upgrading your kernel.
+
+[[fp-heartbeat-r1-shutdown-local-node]]
+.Shutting down the local node
+For hardware maintenance or other interventions that require a system
+shutdown or reboot, use a simple graceful shutdown command, such as
+
+----------------------------
+reboot
+----------------------------
+or
+----------------------------
+poweroff
+----------------------------
+
+Since Heartbeat services will be shut down gracefully in the process
+of a normal system shutdown, the previous paragraph applies to this
+situation, too. This is also the approach you would use in case of a
+kernel upgrade (which also requires the installation of a matching
+DRBD version).
+
+
+[[s-heartbeat-crm]]
+=== Using DRBD in Heartbeat CRM-enabled clusters
+
+Running Heartbeat clusters in CRM configuration mode is the
+recommended approach as of Heartbeat release 2 (per the Linux-HA
+development team).
+
+[[fp-heartbeat-crm-advantages]]
+.Advantages
+Advantages of using CRM configuration mode, as opposed to R1
+compatible configuration, include:
+
+* Cluster configuration is distributed cluster-wide and automatically,
+  by the Cluster Resource Manager. It need not be propagated manually.
+
+* CRM mode supports both node-level and resource-level monitoring, and
+  configurable responses to both node and resource failure. It is
+  still advisable to also monitor cluster resources using an external
+  monitoring system.
+
+* CRM clusters support any number of resource groups, as opposed to
+  Heartbeat R1-style clusters which only support two.
+
+* CRM clusters support a powerful (if complex) constraints
+  framework. This enables you to ensure correct resource startup and
+  shutdown order, resource co-location (forcing resources to always
+  run on the same physical node), and to set preferred nodes for
+  particular resources.
+
+Another advantage, namely the fact that CRM clusters support up to 255
+nodes in a single cluster, is somewhat irrelevant for setups involving
+DRBD (DRBD itself being limited to two nodes).
+
+[[fp-heartbeat-crm-disadvantages]]
+.Disadvantages
+Configuring Heartbeat in CRM mode also has some disadvantages in
+comparison to using R1-compatible configuration. In particular,
+
+* Heartbeat CRM clusters are comparatively complex to configure and
+  administer;
+* extending Heartbeat's functionality with custom OCF resource agents
+  is non-trivial.
+
+NOTE: This disadvantage is somewhat mitigated by the fact that you do
+have the option of using custom (or legacy) R1-style resource agents
+in CRM clusters.
+
+
+[[s-heartbeat-crm-config]]
+==== Heartbeat CRM configuration
+
+In CRM clusters, Heartbeat keeps part of its configuration in the
+following configuration files:
+
+* indexterm:[ha.cf (Heartbeat configuration file)]`/etc/ha.d/ha.cf`,
+as described in <>. You must include the following
+line in this configuration file to enable CRM mode:
+[source,drbd]
+----------------------------
+crm yes
+----------------------------
+
+* indexterm:[authkeys (Heartbeat configuration
+  file)]`/etc/ha.d/authkeys`. The contents of this file are the same
+  as for R1 style clusters. See <> for details.
+
+The remainder of the cluster configuration is maintained in the
+_Cluster Information Base_ (CIB), covered in detail in
+<>. Unlike the two configuration files just mentioned, the
+CIB need not be manually distributed among cluster nodes; the
+Heartbeat services take care of that automatically.
+
+[[s-heartbeat-cib]]
+===== The Cluster Information Base
+
+indexterm:[Heartbeat]indexterm:[Cluster Information Base (CIB)] The
+Cluster Information Base (CIB) is kept in one XML file,
+indexterm:[cib.xml (Heartbeat configuration
+file)]`/var/lib/heartbeat/crm/cib.xml`. It is, however, not
+recommended to edit the contents of this file directly, except in the
+case of creating a new cluster configuration from scratch. Instead,
+Heartbeat comes with both command-line applications and a GUI to
+modify the CIB.
+
+The CIB actually contains both the cluster _configuration_ (which is
+persistent and is kept in the `cib.xml` file), and information about
+the current cluster _status_ (which is volatile). Status information,
+too, may be queried using either the Heartbeat command-line tools or
+the Heartbeat GUI.
+
+After creating a new Heartbeat CRM cluster -- that is, creating the
+`ha.cf` and `authkeys` files, distributing them among cluster nodes,
+starting Heartbeat services, and waiting for nodes to establish
+intra-cluster communications -- a new, empty CIB is created
+automatically.
+Its contents will be similar to this:
+
+[source,xml]
+----------------------------
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+----------------------------
+
+The exact format and contents of this file are documented at length
+http://www.linux-ha.org/ClusterInformationBase/UserGuide[on the
+Linux-HA web site], but for practical purposes it is important to
+understand that this cluster has two nodes named 'alice' and 'bob', and
+that neither resources nor resource constraints have been configured
+at this point.
+
+[[s-heartbeat-crm-drbd-backed-service]]
+===== Adding a DRBD-backed service to the cluster configuration
+
+This section explains how to enable a DRBD-backed service in a
+Heartbeat CRM cluster. The examples used in this section mimic, in
+functionality, those described in <>, dealing
+with R1-style Heartbeat clusters.
+
+The complexity of the configuration steps described in this section
+may seem overwhelming to some, particularly those having previously
+dealt only with R1-style Heartbeat configurations. While the
+configuration of Heartbeat CRM clusters is indeed complex (and
+sometimes not very user-friendly), <> may outweigh
+<>. Which approach to follow is entirely up to the
+administrator's discretion.
+
+[[s-heartbeat-crm-drbddisk-ra]]
+====== Using the `drbddisk` resource agent in a Heartbeat CRM configuration
+
+Even though you are using Heartbeat in CRM mode, you may still utilize
+R1-compatible resource agents such as `drbddisk`. This resource agent
+provides no secondary node monitoring, and ensures only resource
+promotion and demotion.
+
+In order to enable a DRBD-backed configuration for a MySQL database in
+a Heartbeat CRM cluster with `drbddisk`, you would use a configuration
+like this:
+
+[source,xml]
+----------------------------
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+----------------------------
+
+
+Assuming you created this configuration in a temporary file named
+`/tmp/hb_mysql.xml`, you would add this resource group to the cluster
+configuration using the following command (on any cluster node):
+indexterm:[Heartbeat]indexterm:[cibadmin (Heartbeat command)]
+----------------------------
+cibadmin -o resources -C -x /tmp/hb_mysql.xml
+----------------------------
+
+After this, Heartbeat will automatically propagate the
+newly-configured resource group to all cluster nodes.
+
+[[s-heartbeat-crm-drbd-ocf-ra]]
+====== Using the `drbd` OCF resource agent in a Heartbeat CRM configuration
+
+The `drbd` resource agent is a "pure-bred" OCF RA which provides
+Master/Slave capability, allowing Heartbeat to start and monitor the
+DRBD resource on multiple nodes and promoting and demoting as
+needed. You must, however, understand that the `drbd` RA disconnects
+and detaches all DRBD resources it manages on Heartbeat shutdown, and
+also upon enabling standby mode for a node.
+
+In order to enable a DRBD-backed configuration for a MySQL database in
+a Heartbeat CRM cluster with the `drbd` OCF resource agent, you must
+create both the necessary resources, and Heartbeat constraints to
+ensure your service only starts on a previously promoted DRBD
+resource.
+It is recommended that you start with the constraints, as shown in
+this example:
+
+[source,xml]
+----------------------------
+
+
+
+
+----------------------------
+
+Assuming you put these settings in a file named
+`/tmp/constraints.xml`, here is how you would enable them:
+----------------------------
+cibadmin -U -x /tmp/constraints.xml
+----------------------------
+
+Subsequently, you would create your relevant resources:
+
+[source,xml]
+----------------------------
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+----------------------------
+
+Assuming you put these settings in a file named `/tmp/resources.xml`,
+here is how you would enable them:
+
+----------------------------
+cibadmin -U -x /tmp/resources.xml
+----------------------------
+
+After this, your configuration should be enabled. Heartbeat now
+selects a node on which it promotes the DRBD resource, and then starts
+the DRBD-backed resource group on that same node.
+
+[[s-heartbeat-crm-manage]]
+==== Managing Heartbeat CRM clusters
+
+[[s-heartbeat-crm-assume-resources]]
+===== Assuming control of cluster resources
+
+A Heartbeat CRM cluster node may assume control of cluster resources
+in the following ways:
+
+.Manual takeover of a single cluster resource
+This is the approach normally taken if one simply wishes to test
+resource migration, or move a resource to the local node as a means of
+manual load balancing. This operation is performed using the following
+command: indexterm:[Heartbeat]indexterm:[crm_resource (Heartbeat
+command)]
+
+----------------------------
+crm_resource -r <resource> -M -H `uname -n`
+----------------------------
+
+[[n-heartbeat-crm-migrate]]
+NOTE: The `-M` (or `--migrate`) option for the `crm_resource` command,
+when used without the `-H` option, implies a resource migration _away_
+from the local host. You must initiate a migration _to_ the local host
+by specifying the `-H` option, giving the local host name as the
+option argument.
+
+It is also important to understand that the migration is _permanent_,
+that is, unless told otherwise, Heartbeat will not move the resource
+back to a node it was previously migrated away from -- even if that
+node happens to be the only surviving node in a near-cluster-wide
+system failure. This is undesirable under most circumstances. So, it
+is prudent to immediately "un-migrate" resources after successful
+migration, using the following command:
+indexterm:[Heartbeat]indexterm:[crm_resource (Heartbeat command)]
+
+----------------------------
+crm_resource -r <resource> -U
+----------------------------
+
+Finally, it is important to know that during resource migration,
+Heartbeat may simultaneously migrate resources other than the one
+explicitly specified (as required by existing resource groups or
+colocation and order constraints).
+
+.Manual takeover of all cluster resources
+This procedure involves switching the peer node to standby mode (where
+_<peer>_ is the peer node's host name):
+indexterm:[Heartbeat]indexterm:[crm_standby (Heartbeat command)]
+
+----------------------------
+crm_standby -U <peer> -v on
+----------------------------
+
+
+[[s-heartbeat-crm-relinquish-resources]]
+===== Relinquishing cluster resources
+
+A Heartbeat CRM cluster node may be forced to give up one or all of
+its resources in several ways.
+
+.Giving up a single cluster resource
+A node gives up control of a single resource when issued the following
+command (note that <> apply here, too):
+indexterm:[Heartbeat]indexterm:[crm_resource (Heartbeat command)]
+
+----------------------------
+crm_resource -r <resource> -M
+----------------------------
+
+If you want to migrate to a specific host, use this variant:
+
+----------------------------
+crm_resource -r <resource> -M -H <hostname>
+----------------------------
+
+However, the latter syntax is usually of little relevance to CRM
+clusters using DRBD, DRBD being limited to two nodes (so the two
+variants are, essentially, identical in meaning).
+
+.Switching a cluster node to standby mode
+This is the approach normally taken if one simply wishes to test
+resource migration, or perform some other activity that does not
+require the node to leave the cluster. This operation is performed
+using the following command:
+indexterm:[Heartbeat]indexterm:[crm_standby (Heartbeat command)]
+
+----------------------------
+crm_standby -U `uname -n` -v on
+----------------------------
+
+.Shutting down the local cluster manager instance
+This approach is suited for local maintenance operations such as
+software updates which require that the node be temporarily removed
+from the cluster, but which do not necessitate a system reboot. The
+procedure is <>.
+
+.Shutting down the local node
+For hardware maintenance or other interventions that require a system
+shutdown or reboot, use a simple graceful shutdown command, just as
+previously outlined <>.
+
+
+[[s-heartbeat-dopd]]
+=== Using Heartbeat with `dopd`
+
+indexterm:[dopd]The steps outlined in this section enable DRBD to deny
+services access to <>. The Heartbeat
+component that implements this functionality is the _DRBD outdate-peer
+daemon_, or `dopd` for short. It works, and uses identical
+configuration, on both <> and
+<> clusters.
+
+IMPORTANT: It is absolutely vital to configure at least two independent
+<> for `dopd` to work correctly.
+
+
+
+[[s-dopd-heartbeat-config]]
+==== Heartbeat configuration
+
+To enable `dopd`, you must add these lines to your indexterm:[ha.cf
+(Heartbeat configuration file)]`/etc/ha.d/ha.cf` file:
+
+[source,drbd]
+----------------------------
+respawn hacluster /usr/lib/heartbeat/dopd
+apiauth dopd gid=haclient uid=hacluster
+----------------------------
+
+You may have to adjust ``dopd``'s path according to your
+distribution. On some distributions and architectures, the correct
+path is `/usr/lib64/heartbeat/dopd`.
+
+After you have made this change and copied `ha.cf` to the peer node,
+you must run `/etc/init.d/heartbeat reload` to have Heartbeat re-read
+its configuration file. Afterwards, you should be able to verify that
+you now have a running `dopd` process.
+
+NOTE: You can check for this process either by running `ps ax | grep
+dopd` or by issuing `killall -0 dopd`.
+
+
+
+[[s-dopd-drbd-config]]
+==== DRBD configuration
+
+Then, add these items to your DRBD resource configuration:
+
+[source,drbd]
+----------------------------
+resource <resource> {
+  handlers {
+    fence-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
+    ...
+  }
+  disk {
+    fencing resource-only;
+    ...
+  }
+  ...
+}
+----------------------------
+
+As with `dopd`, your distribution may place the `drbd-peer-outdater`
+binary in `/usr/lib64/heartbeat` depending on your system
+architecture.
+
+Finally, copy your `drbd.conf` to the peer node and issue `drbdadm
+adjust <resource>` to reconfigure your resource and reflect your
+changes.
+
+[[s-dopd-test]]
+==== Testing `dopd` functionality
+
+To test whether your `dopd` setup is working correctly, interrupt the
+replication link of a configured and connected resource while
+Heartbeat services are running normally. You may do so simply by
+physically unplugging the network link, but that is fairly
+invasive. Instead, you may insert a temporary `iptables` rule to drop
+incoming DRBD traffic to the TCP port used by your resource.
+
+After this, you will be able to observe the resource
+<> change from
+indexterm:[connection state]indexterm:[Connected (connection state)]
+_Connected_ to indexterm:[connection state]indexterm:[WFConnection
+(connection state)]_WFConnection_. Allow a few seconds to pass, and
+you should see the <> become indexterm:[disk
+state]indexterm:[Outdated (disk state)]__Outdated__/__DUnknown__. That is
+what `dopd` is responsible for.
+
+Any attempt to switch the outdated resource to the primary role will
+fail after this.
+
+When re-instituting network connectivity (either by plugging the
+physical link or by removing the temporary `iptables` rule you inserted
+previously), the connection state will change to _Connected_, and then
+promptly to _SyncTarget_ (assuming changes occurred on the primary node
+during the network interruption). Then you will be able to observe a
+brief synchronization period, and finally, the previously outdated
+resource will be marked as indexterm:[disk state]indexterm:[UpToDate
+(disk state)]_UpToDate_ again.
diff -Nru drbd-doc-8.4~20151102/UG8.4/en/install-packages.adoc drbd-doc-8.4~20220106/UG8.4/en/install-packages.adoc
--- drbd-doc-8.4~20151102/UG8.4/en/install-packages.adoc 1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/UG8.4/en/install-packages.adoc 2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,125 @@
+[[ch-install-packages]]
+== Installing pre-built DRBD binary packages
+
+
+[[s-linbit-packages]]
+=== Packages supplied by LINBIT
+
+LINBIT, the DRBD project's sponsor company, provides DRBD binary
+packages to its commercial support customers. These packages are
+available at http://www.linbit.com/support/ and are considered
+"official" DRBD builds.
+
+These builds are available for the following distributions:
+
+* Red Hat Enterprise Linux (RHEL), versions 5, 6, and 7
+
+* SUSE Linux Enterprise Server (SLES), versions 11 SP4 and 12
+
+* Debian GNU/Linux, versions 8 (jessie) and 9 (stretch)
+
+* Ubuntu Server Edition LTS 16.04 (Xenial Xerus) and LTS 18.04 (Bionic Beaver).
+
+LINBIT releases binary builds in parallel with any new DRBD source
+release.
+
+Package installation on RPM-based systems (SLES, RHEL) is done by
+simply invoking `rpm -i` (for new installations) or `rpm -U` (for
+upgrades), along with the corresponding package names.
+
+On Debian-based systems (Debian GNU/Linux, Ubuntu), the `drbd8-utils`
+and `drbd8-module` packages are installed with `dpkg -i`, or `gdebi`
+if available.
+
+
+[[s-distro-packages]]
+=== Packages supplied by distribution vendors
+
+A number of distributions ship DRBD, including pre-built binary
+packages. Support for these builds, if any, is provided by the
+associated distribution vendor. Their release cycle may lag behind
+DRBD source releases.
+
+[[s-suse_linux_enterprise_server]]
+==== SUSE Linux Enterprise Server
+
+SUSE Linux Enterprise Server (SLES) includes DRBD 0.7 in versions 9
+and 10. DRBD 8.3 is included in SLES 11 High Availability Extension
+(HAE) SP1.
+
+On SLES, DRBD is normally installed via the software installation
+component of YaST2.
+It comes bundled with the High Availability
+package selection.
+
+Users who prefer a command-line install may simply issue:
+
+---------------------------------------
+yast -i drbd
+---------------------------------------
+
+or
+
+---------------------------------------
+zypper install drbd
+---------------------------------------
+
+
+[[s-_debian_gnu_linux]]
+==== Debian GNU/Linux
+
+Debian GNU/Linux includes DRBD 8 from the 5.0 release (`lenny`)
+onwards. In 6.0 (`squeeze`), which is based on a 2.6.32 Linux kernel,
+Debian ships a backported version of DRBD.
+
+On `squeeze`, since DRBD is already included with the stock kernel,
+all that needs to be installed is the `drbd8-utils` package:
+
+---------------------------------------
+apt-get install drbd8-utils
+---------------------------------------
+
+On `lenny` (obsolete), you install DRBD by issuing:
+
+---------------------------------------
+apt-get install drbd8-utils drbd8-module
+---------------------------------------
+
+[[s-centos]]
+==== CentOS
+
+CentOS has had DRBD 8 since release 5.
+
+DRBD can be installed using `yum` (note that you will need the
+`extras` repository, or EPEL/ELRepo, enabled for this to work):
+
+---------------------------------------
+yum install drbd kmod-drbd
+---------------------------------------
+
+
+[[s-ubuntu_linux]]
+==== Ubuntu Linux
+
+To install DRBD on Ubuntu, you issue these commands:
+
+---------------------------------------
+apt-get update
+apt-get install drbd8-utils
+---------------------------------------
+
+On (very) old Ubuntu versions you might need to explicitly install
+`drbd8-module`, too; in newer versions the default kernel already
+includes the upstream DRBD version.
+
+[[s-from-source]]
+=== Compiling packages from source
+
+Releases generated by git tags on https://github.com/LINBIT[github]
+are snapshots of the git repository at the given time. You most likely
+do not want to use these. They might lack things such as generated man
+pages, the `configure` script, and other generated files. If you want
+to build from a tarball, use the ones
+https://www.linbit.com/en/drbd-community/drbd-download[provided by us].
+
+All our projects contain standard build scripts (e.g., `Makefile`,
+`configure`). Maintaining specific information per distribution (e.g.,
+documenting broken build macros) is too cumbersome, and historically
+the information provided in this section got outdated quickly. If you
+don't know how to build software the standard way, please consider
+using packages provided by LINBIT.
diff -Nru drbd-doc-8.4~20151102/UG8.4/en/internals.adoc drbd-doc-8.4~20220106/UG8.4/en/internals.adoc
--- drbd-doc-8.4~20151102/UG8.4/en/internals.adoc 1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/UG8.4/en/internals.adoc 2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,526 @@
+[[ch-internals]]
+== DRBD Internals
+
+This chapter gives _some_ background information about some of DRBD's
+internal algorithms and structures. It is intended for interested
+users wishing to gain a certain degree of background knowledge about
+DRBD. It does not dive into DRBD's inner workings deep enough to be a
+reference for DRBD developers. For that purpose, please refer to the
+papers listed in <>, and of course to the comments in
+the DRBD source code.
+
+[[s-metadata]]
+=== DRBD meta data
+
+indexterm:[meta data]DRBD stores various pieces of information about
+the data it keeps in a dedicated area.
+This metadata includes:
+
+* the size of the DRBD device,
+* the Generation Identifier ( GI, described in detail in <>),
+* the Activity Log ( AL, described in detail in <>),
+* the quick-sync bitmap (described in detail in <>).
+
+This metadata may be stored either _internally_ or
+_externally_. Which method is used is configurable on a per-resource
+basis.
+
+[[s-internal-meta-data]]
+==== Internal meta data
+
+indexterm:[meta data]Configuring a resource to use internal meta data
+means that DRBD stores its meta data on the same physical lower-level
+device as the actual production data. It does so by setting aside an
+area at the _end_ of the device for the specific purpose of storing
+metadata.
+
+.Advantage
+Since the meta data are inextricably linked with the actual data, no
+special action is required from the administrator in case of a hard
+disk failure. The meta data are lost together with the actual data and
+are also restored together.
+
+.Disadvantage
+In case of the lower-level device being a single physical hard disk
+(as opposed to a RAID set), internal meta data may negatively affect
+write throughput. Write requests issued by the application may trigger
+updates of DRBD's meta data. If the meta data are stored on the same
+magnetic disk as the production data, each such write operation may
+result in two additional movements of the hard disk's read/write head.
+
+CAUTION: If you are planning to use internal meta data in conjunction
+with an existing lower-level device that already has data which you
+wish to preserve, you _must_ account for the space required by DRBD's
+meta data.
+
+Otherwise, upon DRBD resource creation, the newly created metadata
+would overwrite data at the end of the lower-level device, potentially
+destroying existing files in the process. To avoid that, you must do
+one of the following things:
+
+* Enlarge your lower-level device. This is possible with any logical
+  volume management facility (such as indexterm:[LVM]LVM) as long as
+  you have free space available in the corresponding volume group. It
+  may also be supported by hardware storage solutions.
+
+* Shrink your existing file system on your lower-level device. This
+  may or may not be supported by your file system.
+
+* If neither of the two are possible, use
+  <> instead.
+
+To estimate the amount by which you must enlarge your lower-level
+device or shrink your file system, see <>.
+
+[[s-external-meta-data]]
+==== External meta data
+
+indexterm:[meta data]External meta data is simply stored on a
+separate, dedicated block device distinct from that which holds your
+production data.
+
+.Advantage
+For some write operations, using external meta data produces a
+somewhat improved latency behavior.
+
+.Disadvantage
+Meta data are not inextricably linked with the actual production
+data. This means that manual intervention is required in the case of a
+hardware failure destroying just the production data (but not DRBD
+meta data), to effect a full data sync from the surviving node onto
+the subsequently replaced disk.
+
+Use of external meta data is also the only viable option if _all_ of
+the following apply:
+
+* You are using DRBD to duplicate an existing device that already
+  contains data you wish to preserve, _and_
+
+* that existing device does not support enlargement, _and_
+
+* the existing file system on the device does not support shrinking.
+
+To estimate the required size of the block device dedicated to hold
+your device meta data, see <>.
+
+NOTE: External meta data requires a minimum of a 1MB device size.
+
+[[s-meta-data-size]]
+==== Estimating meta data size
+
+indexterm:[meta data]You may calculate the exact space requirements
+for DRBD's meta data using the following formula:
+
+[[eq-metadata-size-exact]]
+.Calculating DRBD meta data size (exactly)
+image::images/metadata-size-exact.svg[]
+
+*_C~s~_* is the data device size in sectors, and *_N_* is the number
+of peers.
+
+NOTE: You may retrieve the device size by issuing `blockdev --getsz
+<device>`.
+
+The result, *_M~s~_*, is also expressed in sectors. To convert to MB,
+divide by 2048 (for a 512-byte sector size, which is the default on
+all Linux platforms except s390).
+
+In practice, you may use a reasonably good approximation, given
+below. Note that in this formula, the unit is megabytes, not sectors:
+
+[[eq-metadata-size-approx]]
+.Estimating DRBD meta data size (approximately)
+image::images/metadata-size-approx.svg[]
+
+To give one concrete example, a 1TiB data device thus works out to
+roughly 33MB of meta data.
+
+[[s-gi]]
+=== Generation Identifiers
+
+indexterm:[generation identifiers]DRBD uses _generation identifiers_
+(GIs) to identify "generations" of replicated data.
+
+This is DRBD's internal mechanism used for
+
+* determining whether the two nodes are in fact members of the same
+  cluster (as opposed to two nodes that were connected accidentally),
+
+* determining the direction of background re-synchronization (if
+  necessary),
+
+* determining whether full re-synchronization is necessary or whether
+  partial re-synchronization is sufficient,
+
+* indexterm:[split brain]identifying split brain.
+
+[[s-data-generations]]
+==== Data generations
+
+DRBD marks the start of a new _data generation_ at each of the
+following occurrences:
+
+* The initial device full sync,
+
+* a disconnected resource switching to the primary role,
+
+* a resource in the primary role disconnecting.
+
+Thus, we can summarize that whenever a resource is in the _Connected_
+connection state, and both nodes' disk state is _UpToDate_, the
+current data generation on both nodes is the same. The inverse is also
+true. Note that the current implementation uses the lowest bit to
+encode the role of the node (Primary/Secondary). Therefore, the lowest
+bit might be different on distinct nodes even if they are considered
+to have the same data generation.
+
+Every new data generation is identified by an 8-byte, universally
+unique identifier (UUID).
+
+[[s-gi-tuple]]
+==== The generation identifier tuple
+
+DRBD keeps four pieces of information about current and historical
+data generations in the local resource meta data:
+
+.Current UUID
+This is the generation identifier for the current data generation, as
+seen from the local node's perspective. When a resource is
+_Connected_ and fully synchronized, the current UUID is identical
+between nodes.
+
+.Bitmap UUID
+This is the UUID of the generation against which the on-disk sync
+bitmap is tracking changes. Like the on-disk sync bitmap itself, this
+identifier is only relevant while in disconnected mode. If the
+resource is _Connected_, this UUID is always empty (zero).
+
+.Two Historical UUIDs
+These are the identifiers of the two data generations preceding the
+current one.
+
+Collectively, these four items are referred to as the _generation
+identifier tuple_, or "GI tuple" for short.
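+Incidentally, you can inspect a resource's GI tuple at any time using
+`drbdadm`. Assuming a resource named `r0`, either of the following
+commands will print it ( `show-gi` adds explanatory text):
+
+----------------------------
+drbdadm get-gi r0
+drbdadm show-gi r0
+----------------------------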
+
+[[s-gi-changes]]
+==== How generation identifiers change
+
+[[s-gi-changes-newgen]]
+===== Start of a new data generation
+
+When a node loses connection to its peer (either by network failure or
+manual intervention), DRBD modifies its local generation identifiers
+in the following manner:
+
+[[f-gi-changes-newgen]]
+.GI tuple changes at start of a new data generation
+image::images/gi-changes-newgen.svg[]
+
+. A new UUID is created for the new data generation. This becomes the
+  new current UUID for the primary node.
+
+. The previous UUID now refers to the generation the bitmap is
+  tracking changes against, so it becomes the new bitmap UUID for the
+  primary node.
+
+. On the secondary node, the GI tuple remains unchanged.
+
+[[s-gi-changes-syncstart]]
+===== Start of re-synchronization
+
+Upon the initiation of re-synchronization, DRBD performs these
+modifications on the local generation identifiers:
+
+[[f-gi-changes-syncstart]]
+.GI tuple changes at start of re-synchronization
+image::images/gi-changes-syncstart.svg[]
+
+. The current UUID on the synchronization source remains unchanged.
+
+. The bitmap UUID on the synchronization source is rotated out to the
+  first historical UUID.
+
+. A new bitmap UUID is generated on the synchronization source.
+
+. This UUID becomes the new current UUID on the synchronization
+  target.
+
+. The bitmap and historical UUIDs on the synchronization target
+  remain unchanged.
+
+
+[[s-gi-changes-synccomplete]]
+===== Completion of re-synchronization
+
+When re-synchronization concludes, the following changes are
+performed:
+
+[[f-gi-changes-synccomplete]]
+.GI tuple changes at completion of re-synchronization
+image::images/gi-changes-synccomplete.svg[]
+
+. The current UUID on the synchronization source remains unchanged.
+
+. The bitmap UUID on the synchronization source is rotated out to the
+  first historical UUID, with that UUID moving to the second
+  historical entry (any existing second historical entry is
+  discarded).
+
+. The bitmap UUID on the synchronization source is then emptied
+  (zeroed).
+
+. The synchronization target adopts the entire GI tuple from the
+  synchronization source.
+
+
+[[s-gi-use]]
+==== How DRBD uses generation identifiers
+
+When a connection between nodes is established, the two nodes exchange
+their currently available generation identifiers, and proceed
+accordingly. A number of possible outcomes exist:
+
+.Current UUIDs empty on both nodes
+The local node detects that both its current UUID and the peer's
+current UUID are empty. This is the normal occurrence for a freshly
+configured resource that has not had the initial full sync
+initiated. No synchronization takes place; it has to be started
+manually.
+
+.Current UUIDs empty on one node
+The local node detects that the peer's current UUID is empty, and its
+own is not. This is the normal case for a freshly configured resource
+on which the initial full sync has just been initiated, the local node
+having been selected as the initial synchronization source. DRBD now
+sets all bits in the on-disk sync bitmap (meaning it considers the
+entire device out-of-sync), and starts synchronizing as a
+synchronization source. In the opposite case (local current UUID
+empty, peer's non-empty), DRBD performs the same steps, except that
+the local node becomes the synchronization target.
+
+.Equal current UUIDs
+The local node detects that its current UUID and the peer's current
+UUID are non-empty and equal.
+This is the normal occurrence for a
+resource that went into disconnected mode at a time when it was in the
+secondary role, and was not promoted on either node while
+disconnected. No synchronization takes place, as none is necessary.
+
+.Bitmap UUID matches peer's current UUID
+The local node detects that its bitmap UUID matches the peer's current
+UUID, and that the peer's bitmap UUID is empty. This is the normal and
+expected occurrence after a secondary node failure, with the local
+node being in the primary role. It means that the peer never became
+primary in the meantime and worked on the basis of the same data
+generation all along. DRBD now initiates a normal, background
+re-synchronization, with the local node becoming the synchronization
+source. If, conversely, the local node detects that _its_ bitmap UUID
+is empty, and that the _peer's_ bitmap matches the local node's current
+UUID, then that is the normal and expected occurrence after a failure
+of the local node. Again, DRBD now initiates a normal, background
+re-synchronization, with the local node becoming the synchronization
+target.
+
+.Current UUID matches peer's historical UUID
+The local node detects that its current UUID matches one of the peer's
+historical UUIDs. This implies that the two data sets share a common
+ancestor and that the peer node has the up-to-date data, but that the
+information kept in the peer node's bitmap is outdated and not
+usable. Thus, a normal synchronization would be insufficient. DRBD
+now marks the entire device as out-of-sync and initiates a full
+background re-synchronization, with the local node becoming the
+synchronization target. In the opposite case (one of the local node's
+historical UUIDs matches the peer's current UUID), DRBD performs the
+same steps, except that the local node becomes the synchronization
+source.
+
+.Bitmap UUIDs match, current UUIDs do not
+indexterm:[split brain]The local node detects that its current UUID
+differs from the peer's current UUID, and that the bitmap UUIDs
+match. This is split brain, but one where the data generations have
+the same parent. This means that DRBD invokes split brain
+auto-recovery strategies, if configured. Otherwise, DRBD disconnects
+and waits for manual split brain resolution.
+
+.Neither current nor bitmap UUIDs match
+The local node detects that its current UUID differs from the peer's
+current UUID, and that the bitmap UUIDs _do not_ match. This is split
+brain with unrelated ancestor generations, thus auto-recovery
+strategies, even if configured, are moot. DRBD disconnects and waits
+for manual split brain resolution.
+
+.No UUIDs match
+Finally, in case DRBD fails to detect even a single matching element
+in the two nodes' GI tuples, it logs a warning about unrelated data
+and disconnects. This is DRBD's safeguard against accidental
+connection of two cluster nodes that have never heard of each other
+before.
+
+
+[[s-activity-log]]
+=== The Activity Log
+
+[[s-al-purpose]]
+==== Purpose
+
+indexterm:[Activity Log]During a write operation, DRBD forwards the
+write operation to the local backing block device, but also sends the
+data block over the network. These two actions occur, for all
+practical purposes, simultaneously. Random timing behavior may cause a
+situation where the write operation has been completed, but the
+transmission via the network has not yet taken place.
+
+If, at this moment, the active node fails and fail-over is being
+initiated, then this data block is out of sync between nodes -- it has
+been written on the failed node prior to the crash, but replication
+has not yet completed. Thus, when the node eventually recovers, this
+block must be removed from the data set during subsequent
+synchronization. Otherwise, the crashed node would be "one write
+ahead" of the surviving node, which would violate the "all or
+nothing" principle of replicated storage. This issue is not limited to
+DRBD; in fact, it exists in practically all replicated storage
+configurations. Many other storage solutions (just like DRBD itself,
+prior to version 0.7) thus require that after a failure of the active
+node, that node be fully synchronized anew after its recovery.
+
+DRBD's approach, since version 0.7, is a different one. The _activity
+log_ (AL), stored in the meta data area, keeps track of those blocks
+that have "recently" been written to. Colloquially, these areas are
+referred to as _hot extents_.
+
+If a temporarily failed node that was in active mode at the time of
+failure is synchronized, only those hot extents highlighted in the AL
+need to be synchronized, rather than the full device. This drastically
+reduces synchronization time after an active node crash.
+
+[[s-active-extents]]
+==== Active extents
+
+indexterm:[Activity Log]The activity log has a configurable parameter,
+the number of active extents. Every active extent adds 4MiB to the
+amount of data being retransmitted after a Primary crash. This
+parameter must be understood as a compromise between the following
+opposites:
+
+.Many active extents
+Keeping a large activity log improves write throughput. Every time a
+new extent is activated, an old extent is reset to inactive. This
+transition requires a write operation to the meta data area. If the
+number of active extents is high, old active extents are swapped out
+fairly rarely, reducing meta data write operations and thereby
+improving performance.
+
+.Few active extents
+Keeping a small activity log reduces synchronization time after active
+node failure and subsequent recovery.
+
+
+[[s-suitable-al-size]]
+==== Selecting a suitable Activity Log size
+
+indexterm:[Activity Log]The definition of the number of extents should
+be based on the desired synchronization time at a given
+synchronization rate. The number of active extents can be calculated
+as follows:
+
+[[eq-al-extents]]
+.Active extents calculation based on sync rate and target sync time
+image::images/al-extents.svg[]
+
+_R_ is the synchronization rate, given in MB/s. _t~sync~_ is the target
+synchronization time, in seconds. _E_ is the resulting number of active
+extents.
+
+To provide an example, suppose our cluster has an I/O subsystem with a
+throughput rate of 90 MiByte/s, configured to a synchronization rate
+of 30 MiByte/s (_R_=30), and we want to keep our target
+synchronization time at 4 minutes or 240 seconds (_t~sync~_=240):
+
+[[eq-al-extents-example]]
+.Active extents calculation based on sync rate and target sync time (example)
+image::images/al-extents-example.svg[]
+
+The exact result is 1800, but since DRBD's hash function for the
+implementation of the AL works best if the number of extents is set to
+a prime number, we select 1801.
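+The resulting number of extents is then set using the `al-extents`
+option in the resource's `disk` section. A minimal sketch, with the
+resource name as a placeholder and the value taken from the example
+calculation above, might look like this:
+
+[source,drbd]
+----------------------------
+resource <resource> {
+  disk {
+    al-extents 1801;
+    ...
+  }
+  ...
+}
+----------------------------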
+
+[[s-quick-sync-bitmap]]
+=== The quick-sync bitmap
+
+indexterm:[quick-sync bitmap]indexterm:[bitmap (DRBD-specific
+concept)]The quick-sync bitmap is the internal data structure which
+DRBD uses, on a per-resource basis, to keep track of blocks being in
+sync (identical on both nodes) or out-of-sync. It is only relevant
+when a resource is in disconnected mode.
+
+In the quick-sync bitmap, one bit represents a 4-KiB chunk of on-disk
+data. If the bit is cleared, it means that the corresponding block is
+still in sync with the peer node. That implies that the block has not
+been written to since the time of disconnection. Conversely, if the
+bit is set, it means that the block has been modified and needs to be
+re-synchronized whenever the connection becomes available again.
+
+As DRBD detects write I/O on a disconnected device, and hence starts
+setting bits in the quick-sync bitmap, it does so in RAM -- thus
+avoiding expensive synchronous metadata I/O operations. Only when the
+corresponding blocks turn cold (that is, expire from the
+<>) does DRBD make the appropriate modifications in an
+on-disk representation of the quick-sync bitmap. Likewise, if the
+resource happens to be manually shut down on the remaining node while
+disconnected, DRBD flushes the _complete_ quick-sync bitmap out to
+persistent storage.
+
+When the peer node recovers or the connection is re-established, DRBD
+combines the bitmap information from both nodes to determine the
+_total data set_ that it must re-synchronize. Simultaneously, DRBD
+<> to determine the
+_direction_ of synchronization.
+
+The node acting as the synchronization source then transmits the
+agreed-upon blocks to the peer node, clearing sync bits in the bitmap
+as the synchronization target acknowledges the modifications. If the
+re-synchronization is now interrupted (by another network outage, for
+example) and subsequently resumed, it will continue where it left off
+-- with any additional blocks modified in the meantime being added to
+the re-synchronization data set, of course.
+
+NOTE: Re-synchronization may also be paused and resumed manually
+with the `drbdadm pause-sync` and `drbdadm resume-sync` commands. You
+should, however, not do so light-heartedly -- interrupting
+re-synchronization leaves your secondary node's disk
+_Inconsistent_ longer than necessary.
+
+[[s-fence-peer]]
+=== The peer fencing interface
+
+DRBD has a defined interface for the mechanism that fences the peer
+node in case of the replication link being interrupted. The
+`drbd-peer-outdater` helper, bundled with Heartbeat, is the reference
+implementation for this interface. However, you may easily implement
+your own peer fencing helper program.
+
+The fencing helper is invoked only in case
+
+. a `fence-peer` handler has been defined in the resource's (or common)
+  `handlers` section, _and_
+
+. the `fencing` option for the resource is set to either
+  `resource-only` or `resource-and-stonith`, _and_
+
+. the replication link is interrupted long enough for DRBD to detect a
+  network failure.
+
+The program or script specified as the `fence-peer` handler, when it is
+invoked, has the `DRBD_RESOURCE` and `DRBD_PEER` environment variables
+available. They contain the name of the affected DRBD resource and the
+peer's hostname, respectively.
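+To illustrate, the following is a purely hypothetical handler skeleton
+-- not the reference implementation -- which attempts to outdate the
+affected resource on the peer via `ssh`, and reports the result using
+the exit codes listed below:
+
+[source,bash]
+----------------------------
+#!/bin/bash
+# Illustrative fence-peer handler sketch. DRBD_RESOURCE and DRBD_PEER
+# are provided by DRBD when the handler is invoked.
+if ssh "$DRBD_PEER" drbdadm outdate "$DRBD_RESOURCE"; then
+    exit 4    # peer's disk state successfully set to Outdated
+else
+    exit 5    # connection to the peer node failed
+fi
+----------------------------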
+
+Any peer fencing helper program (or script) must return one of the
+following exit codes:
+
+.`fence-peer` handler exit codes
+[format="csv",separator=";",options="header"]
+|=======================================
+Exit code;Implication
+3;Peer's disk state was already _Inconsistent_.
+4;Peer's disk state was successfully set to _Outdated_ (or was _Outdated_ to begin with).
+5;Connection to the peer node failed, peer could not be reached.
+6;Peer refused to be outdated because the affected resource was in the primary role.
+7;Peer node was successfully fenced off the cluster. This should never occur unless `fencing` is set to `resource-and-stonith` for the affected resource.
+|=======================================
diff -Nru drbd-doc-8.4~20151102/UG8.4/en/latency.adoc drbd-doc-8.4~20220106/UG8.4/en/latency.adoc
--- drbd-doc-8.4~20151102/UG8.4/en/latency.adoc 1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/UG8.4/en/latency.adoc 2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,179 @@
+[[ch-latency]]
+
+== Optimizing DRBD latency
+
+This chapter deals with optimizing DRBD latency. It examines some
+hardware considerations with regard to latency minimization, and
+details tuning recommendations for that purpose.
+
+[[s-latency-hardware]]
+=== Hardware considerations
+
+DRBD latency is affected by both the latency of the underlying I/O
+subsystem (disks, controllers, and corresponding caches), and the
+latency of the replication network.
+
+.I/O subsystem latency
+indexterm:[latency]I/O subsystem latency is primarily a function of
+disk rotation speed. Thus, using fast-spinning disks is a valid
+approach for reducing I/O subsystem latency.
+
+Likewise, the use of a indexterm:[battery-backed write cache]
+battery-backed write cache (BBWC) reduces write completion times, also
+reducing write latency. Most reasonable storage subsystems come with
+some form of battery-backed cache, and allow the administrator to
+configure which portion of this cache is used for read and write
+operations. The recommended approach is to disable the disk read cache
+completely and use all cache memory available for the disk write
+cache.
+
+.Network latency
+indexterm:[latency]Network latency is, in essence, the packet
+round-trip time (RTT) between hosts. It is influenced by a number of
+factors, most of which are irrelevant on the dedicated, back-to-back
+network connections recommended for use as DRBD replication
+links. Thus, it is sufficient to accept that a certain amount of
+latency always exists in Gigabit Ethernet links, which typically is on
+the order of 100 to 200 microseconds (μs) packet RTT.
+
+Network latency may typically be pushed below this limit only by using
+lower-latency network protocols, such as running DRBD over Dolphin
+Express using Dolphin SuperSockets.
+
+[[s-latency-overhead-expectations]]
+=== Latency overhead expectations
+
+As for throughput, when estimating the latency overhead associated
+with DRBD, there are some important natural limitations to consider:
+
+* DRBD latency is bound by that of the raw I/O subsystem.
+* DRBD latency is bound by the available network latency.
+
+The _sum_ of the two establishes the theoretical latency _minimum_
+incurred by DRBD. DRBD then adds to that latency a slight additional
+latency overhead, which can be expected to be less than 1 percent.
+
+* Consider the example of a local disk subsystem with a write latency
+  of 3ms and a network link with one of 0.2ms.
+  Then the expected DRBD
+  latency would be 3.2ms, or a roughly 7-percent latency increase over
+  just writing to a local disk.
+
+NOTE: Latency may be influenced by a number of other factors,
+including CPU cache misses, context switches, and others.
+
+[[s-latency-tuning]]
+=== Tuning recommendations
+
+[[s-latency-tuning-cpu-mask]]
+==== Setting DRBD's CPU mask
+
+DRBD allows for setting an explicit CPU mask for its kernel
+threads. This is particularly beneficial for applications which would
+otherwise compete with DRBD for CPU cycles.
+
+The CPU mask is a number in whose binary representation the least
+significant bit represents the first CPU, the second-least significant
+bit the second, and so forth. A set bit in the bitmask implies that
+the corresponding CPU may be used by DRBD, whereas a cleared bit means
+it must not. Thus, for example, a CPU mask of 1 (`00000001`) means
+DRBD may use the first CPU only. A mask of 12 (`00001100`) implies
+DRBD may use the third and fourth CPU.
+
+An example CPU mask configuration for a resource may look like this:
+
+[source,drbd]
+----------------------------
+resource <resource> {
+  options {
+    cpu-mask 2;
+    ...
+  }
+  ...
+}
+----------------------------
+
+IMPORTANT: Of course, in order to minimize CPU competition between
+DRBD and the application using it, you need to configure your
+application to use only those CPUs which DRBD does not use.
+
+Some applications may provide for this via an entry in a configuration
+file, just like DRBD itself. Others include an invocation of the
+`taskset` command in an application init script.
+
+
+[[s-latency-tuning-mtu-size]]
+==== Modifying the network MTU
+
+When a block-based (as opposed to extent-based) filesystem is layered
+above DRBD, it may be beneficial to change the replication network's
+maximum transmission unit (MTU) size to a value higher than the
+default of 1500 bytes. Colloquially, this is referred to as
+indexterm:[Jumbo frames] "enabling Jumbo frames".
+
+NOTE: Block-based file systems include ext3, ReiserFS (version 3), and
+GFS. Extent-based file systems, in contrast, include XFS, Lustre and
+OCFS2. Extent-based file systems are expected to benefit from enabling
+Jumbo frames only if they hold few, large files.
+
+The MTU may be changed using the following commands:
+----------------------------
+ifconfig <interface> mtu <size>
+----------------------------
+or
+----------------------------
+ip link set <interface> mtu <size>
+----------------------------
+
+_<interface>_ refers to the network interface used for DRBD
+replication. A typical value for _<size>_ would be 9000 (bytes).
+
+[[s-latency-tuning-deadline-scheduler]]
+==== Enabling the `deadline` I/O scheduler
+
+When used in conjunction with high-performance, write-back enabled
+hardware RAID controllers, DRBD latency may benefit greatly from using
+the simple `deadline` I/O scheduler, rather than the CFQ scheduler. The
+latter is typically enabled by default in reasonably recent kernel
+configurations (post-2.6.18 for most distributions).
+
+Modifications to the I/O scheduler configuration may be performed via
+the `sysfs` virtual file system, mounted at `/sys`. The scheduler
+configuration is in `/sys/block/<device>`, where _<device>_ is the
+backing device DRBD uses.
+
+Enabling the `deadline` scheduler works via the following command:
+
+----------------------------
+echo deadline > /sys/block/<device>/queue/scheduler
+----------------------------
+
+You may then also set the following values, which may provide
+additional latency benefits:
+
+* Disable front merges:
+----------------------------
+echo 0 > /sys/block/<device>/queue/iosched/front_merges
+----------------------------
+
+* Reduce read I/O deadline to 150 milliseconds (the default is 500ms):
+----------------------------
+echo 150 > /sys/block/<device>/queue/iosched/read_expire
+----------------------------
+
+* Reduce write I/O deadline to 1500 milliseconds (the default is
+  3000ms):
+----------------------------
+echo 1500 > /sys/block/<device>/queue/iosched/write_expire
+----------------------------
+
+If these values effect a significant latency improvement, you may want
+to make them permanent so they are automatically set at system
+startup. indexterm:[Debian GNU/Linux]Debian and indexterm:[Ubuntu
+Linux]Ubuntu systems provide this functionality via the
+`sysfsutils` package and the `/etc/sysfs.conf` configuration file.
+
+You may also make a global I/O scheduler selection by passing the
+`elevator` option via your kernel command line. To do so, edit your
+boot loader configuration (normally found in `/boot/grub/menu.lst` if
+you are using the GRUB bootloader) and add `elevator=deadline` to your
+list of kernel boot options.
diff -Nru drbd-doc-8.4~20151102/UG8.4/en/lvm.adoc drbd-doc-8.4~20220106/UG8.4/en/lvm.adoc
--- drbd-doc-8.4~20151102/UG8.4/en/lvm.adoc 1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/UG8.4/en/lvm.adoc 2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,536 @@
+[[ch-lvm]]
+== Using LVM with DRBD
+
+indexterm:[LVM]indexterm:[Logical Volume Management]This chapter deals
+with managing DRBD in conjunction with LVM2. In particular, it covers
+
+* using LVM Logical Volumes as backing devices for DRBD;
+
+* using DRBD devices as Physical Volumes for LVM;
+
+* combining these two concepts to implement a layered LVM approach
+  using DRBD.
+
+If you happen to be unfamiliar with these terms to begin with, the
+<<s-lvm-primer,LVM primer>> may serve as your LVM starting point --
+although you are always encouraged, of course, to familiarize yourself
+with LVM in some more detail than this section provides.
+
+[[s-lvm-primer]]
+=== LVM primer
+
+LVM2 is an implementation of logical volume management in the context
+of the Linux device mapper framework. It has practically nothing in
+common, other than the name and acronym, with the original LVM
+implementation. The old implementation (now retroactively named
+"LVM1") is considered obsolete; it is not covered in this section.
+
+When working with LVM, it is important to understand its most basic
+concepts:
+
+.Physical Volume (PV)
+indexterm:[LVM]indexterm:[Physical Volume (LVM)]A PV is an underlying
+block device exclusively managed by LVM. PVs can either be entire hard
+disks or individual partitions. It is common practice to create a
+partition table on the hard disk where one partition is dedicated to
+use by the Linux LVM.
+
+NOTE: The partition type "Linux LVM" (signature `0x8E`) can be used to
+identify partitions for exclusive use by LVM. This, however, is not
+required -- LVM recognizes PVs by way of a signature written to the
+device upon PV initialization.
+
+.Volume Group (VG)
+indexterm:[LVM]indexterm:[Volume Group (LVM)]A VG is the basic
+administrative unit of the LVM. A VG may include one or several
+PVs. Every VG has a unique name.
+A VG may be extended during runtime by adding additional PVs, or by
+enlarging an existing PV.
+
+.Logical Volume (LV)
+indexterm:[LVM]indexterm:[Logical Volume (LVM)]LVs may be created
+during runtime within VGs and are available to the other parts of the
+kernel as regular block devices. As such, they may be used to hold a
+file system, or for any other purpose block devices may be used
+for. LVs may be resized while they are online, and they may also be
+moved from one PV to another (as long as the PVs are part of the same
+VG).
+
+.Snapshot Logical Volume (SLV)
+indexterm:[snapshots (LVM)]indexterm:[LVM]Snapshots are temporary
+point-in-time copies of LVs. Creating snapshots is an operation that
+completes almost instantly, even if the original LV (the _origin
+volume_) has a size of several hundred GiB. Usually, a snapshot
+requires significantly less space than the original LV.
+
+[[f-lvm-overview]]
+.LVM overview
+image::images/lvm.svg[]
+
+
+[[s-lvm-lv-as-drbd-backing-dev]]
+=== Using a Logical Volume as a DRBD backing device
+
+indexterm:[LVM]indexterm:[Logical Volume (LVM)]Since an existing
+Logical Volume is simply a block device in Linux terms, you may of
+course use it as a DRBD backing device. To use LVs in this manner,
+you simply create them, and then initialize them for DRBD as you
+normally would.
+
+This example assumes that a Volume Group named `foo` already exists on
+both nodes of your LVM-enabled system, and that you wish to create
+a DRBD resource named `r0` using a Logical Volume in that Volume
+Group.
+
+First, you create the Logical Volume:
+indexterm:[LVM]indexterm:[lvcreate (LVM command)]
+----------------------------
+lvcreate --name bar --size 10G foo
+  Logical volume "bar" created
+----------------------------
+
+Of course, you must complete this command on both nodes of your DRBD
+cluster. After this, you should have a block device named
+`/dev/foo/bar` on either node.
+
+Then, you can simply enter the newly-created volumes in your resource
+configuration:
+
+[source,drbd]
+----------------------------
+resource r0 {
+  ...
+  on alice {
+    device /dev/drbd0;
+    disk /dev/foo/bar;
+    ...
+  }
+  on bob {
+    device /dev/drbd0;
+    disk /dev/foo/bar;
+    ...
+  }
+}
+----------------------------
+
+Now you can bring your resource up,
+just as you would if you were using non-LVM block devices.
+
+[[s-lvm-snapshots]]
+=== Using automated LVM snapshots during DRBD synchronization
+
+While DRBD is synchronizing, the __SyncTarget__'s state is
+_Inconsistent_ until the synchronization completes. If in this
+situation the _SyncSource_ happens to fail (beyond repair), this puts
+you in an unfortunate position: the node with good data is dead, and
+the surviving node has bad data.
+
+When serving DRBD off an LVM Logical Volume, you can mitigate this
+problem by creating an automated snapshot when synchronization starts,
+and automatically removing that same snapshot once synchronization has
+completed successfully.
+
+In order to enable automated snapshotting during resynchronization,
+add the following lines to your resource configuration:
+
+.Automating snapshots before DRBD synchronization
+----------------------------
+resource r0 {
+  handlers {
+    before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh";
+    after-resync-target "/usr/lib/drbd/unsnapshot-resync-target-lvm.sh";
+  }
+}
+----------------------------
+
+The two scripts parse the `$DRBD_RESOURCE` environment variable which
+DRBD automatically passes to any `handler` it invokes.
+The `snapshot-resync-target-lvm.sh` script then creates an LVM
+snapshot for every volume the resource contains immediately before
+synchronization kicks off. In case the script fails, the
+synchronization _does not commence_.
+
+Once synchronization completes, the `unsnapshot-resync-target-lvm.sh`
+script removes the snapshot, which is then no longer needed. In case
+unsnapshotting fails, the snapshot continues to linger around.
+
+IMPORTANT: You should review dangling snapshots as soon as
+possible. A full snapshot causes both the snapshot itself _and its
+origin volume_ to fail.
+
+If at any time your _SyncSource_ does fail beyond repair and you
+decide to revert to your latest snapshot on the peer, you may do so by
+issuing the `lvconvert --merge` command.
+
+[[s-lvm-drbd-as-pv]]
+=== Configuring a DRBD resource as a Physical Volume
+
+indexterm:[LVM]indexterm:[Physical Volume (LVM)]In order to prepare a
+DRBD resource for use as a Physical Volume, it is necessary to create
+a PV signature on the DRBD device. In order to do so, issue one of the
+following commands on the node where the resource is currently in the
+primary role: indexterm:[LVM]indexterm:[pvcreate (LVM command)]
+
+----------------------------
+# pvcreate /dev/drbdX
+----------------------------
+
+or
+
+----------------------------
+# pvcreate /dev/drbd/by-res/<resource>/0
+----------------------------
+
+NOTE: This example assumes a single-volume resource.
+
+Now, it is necessary to include this device in the list of devices LVM
+scans for PV signatures. In order to do this, you must edit the LVM
+configuration file, normally named
+indexterm:[LVM]`/etc/lvm/lvm.conf`. Find the line in the
+`devices` section that contains the `filter` keyword and edit it
+accordingly. If _all_ your PVs are to be stored on DRBD devices, the
+following is an appropriate `filter` option:
+indexterm:[LVM]indexterm:[filter expression (LVM)]
+
+[source,drbd]
+----------------------------
+filter = [ "a|drbd.*|", "r|.*|" ]
+----------------------------
+
+This filter expression accepts PV signatures found on any DRBD
+devices, while rejecting (ignoring) all others.
+
+NOTE: By default, LVM scans all block devices found in `/dev` for PV
+signatures. This is equivalent to `filter = [ "a|.*|" ]`.
+
+If you want to use stacked resources as LVM PVs, then you will need a
+more explicit filter configuration. You need to make sure that LVM
+detects PV signatures on stacked resources, while ignoring them on the
+corresponding lower-level resources and backing devices. This example
+assumes that your lower-level DRBD resources use device minors 0
+through 9, whereas your stacked resources are using device minors from
+10 upwards:
+
+[source,drbd]
+----------------------------
+filter = [ "a|drbd1[0-9]|", "r|.*|" ]
+----------------------------
+
+This filter expression accepts PV signatures found only on the DRBD
+devices `/dev/drbd10` through `/dev/drbd19`, while rejecting
+(ignoring) all others.
+
+After modifying the `lvm.conf` file, you must run the
+indexterm:[LVM]indexterm:[vgscan (LVM command)]`vgscan` command so LVM
+discards its configuration cache and re-scans devices for PV
+signatures.
+
+You may of course use a different `filter` configuration to match your
+particular system configuration. What is important to remember,
+however, is that you need to
+
+* Accept (include) the DRBD devices you wish to use as PVs;
+* Reject (exclude) the corresponding lower-level devices, so as to
+  avoid LVM finding duplicate PV signatures.
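+
+To verify that your filter behaves as intended, you can list the
+Physical Volumes LVM currently detects; only the DRBD devices you
+accepted should show up (the output below is purely illustrative, with
+a hypothetical VG named `vg0`):
+
+----------------------------
+# pvs
+  PV         VG   Fmt  Attr PSize  PFree
+  /dev/drbd0 vg0  lvm2 a--  10.00g 10.00g
+----------------------------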
+
+In addition, you should disable the LVM cache by setting:
+
+[source,drbd]
+----------------------------
+write_cache_state = 0
+----------------------------
+
+After disabling the LVM cache, make sure you remove any stale cache
+entries by deleting `/etc/lvm/cache/.cache`.
+
+You must repeat the above steps on the peer node.
+
+IMPORTANT: If your system has its root filesystem on LVM, Volume
+Groups will be activated from your initial ramdisk (initrd) during
+boot. In doing so, the LVM tools will evaluate an `lvm.conf` file
+included in the initrd image. Thus, after you make any changes to your
+`lvm.conf`, you should be certain to update your initrd with the
+utility appropriate for your distribution (`mkinitrd`,
+`update-initramfs`, etc.).
+
+When you have configured your new PV, you may proceed to add it to a
+Volume Group, or create a new Volume Group from it. The DRBD resource
+must, of course, be in the primary role while doing
+so. indexterm:[LVM]indexterm:[vgcreate (LVM command)]
+
+----------------------------
+# vgcreate <name> /dev/drbdX
+----------------------------
+
+NOTE: While it is possible to mix DRBD and non-DRBD Physical Volumes
+within the same Volume Group, doing so is not recommended and unlikely
+to be of any practical value.
+
+When you have created your VG, you may start carving Logical Volumes
+out of it, using the indexterm:[LVM]indexterm:[lvcreate (LVM
+command)]`lvcreate` command (as with a non-DRBD-backed Volume Group).
+
+[[s-lvm-add-pv]]
+=== Adding a new DRBD volume to an existing Volume Group
+
+Occasionally, you may want to add new DRBD-backed Physical Volumes to
+a Volume Group. Whenever you do so, a new volume should be added to an
+existing resource configuration. This preserves the replication stream
+and ensures write fidelity across all PVs in the VG.
+
+IMPORTANT: If your LVM volume group is managed by Pacemaker as
+explained in <<s-lvm-pacemaker>>, it is _imperative_ to place the
+cluster in maintenance mode prior to making changes to the DRBD
+configuration.
+
+Extend your resource configuration to include an additional volume, as
+in the following example:
+
+-------------------------------------
+resource r0 {
+  volume 0 {
+    device /dev/drbd1;
+    disk /dev/sda7;
+    meta-disk internal;
+  }
+  volume 1 {
+    device /dev/drbd2;
+    disk /dev/sda8;
+    meta-disk internal;
+  }
+  on alice {
+    address 10.1.1.31:7789;
+  }
+  on bob {
+    address 10.1.1.32:7789;
+  }
+}
+-------------------------------------
+
+Make sure your DRBD configuration is identical across nodes, then
+issue:
+
+-------------------------------------
+# drbdadm adjust r0
+-------------------------------------
+
+This will implicitly call `drbdsetup new-minor r0 1` to enable the new
+volume `1` in the resource `r0`. Once the new volume has been added to
+the replication stream, you may initialize and add it to the volume
+group:
+
+-------------------------------------
+# pvcreate /dev/drbd/by-res/<resource>/1
+# vgextend <name> /dev/drbd/by-res/<resource>/1
+-------------------------------------
+
+This will add the new PV `/dev/drbd/by-res/<resource>/1` to the
+`<name>` VG, preserving write fidelity across the entire VG.
+
+
+[[s-nested-lvm]]
+=== Nested LVM configuration with DRBD
+
+It is possible, if slightly advanced, to both use
+indexterm:[LVM]indexterm:[Logical Volume (LVM)]Logical Volumes as
+backing devices for DRBD _and_ at the same time use a DRBD device
+itself as a indexterm:[LVM]indexterm:[Physical Volume (LVM)]Physical
+Volume.
+To provide an example, consider the following configuration:
+
+* We have two partitions, named `/dev/sda1` and `/dev/sdb1`, which we
+  intend to use as Physical Volumes.
+
+* Both of these PVs are to become part of a Volume Group named
+  `local`.
+
+* We want to create a 10-GiB Logical Volume in this VG, to be named `r0`.
+
+* This LV will become the local backing device for our DRBD resource,
+  also named `r0`, which corresponds to the device `/dev/drbd0`.
+
+* This device will be the sole PV for another Volume Group, named
+  `replicated`.
+
+* This VG is to contain two more logical volumes named `foo` (4 GiB)
+  and `bar` (6 GiB).
+
+In order to enable this configuration, follow these steps:
+
+* Set an appropriate `filter` option in your `/etc/lvm/lvm.conf`:
++
+--
+indexterm:[LVM]indexterm:[filter expression (LVM)]
+[source,drbd]
+----------------------------
+filter = ["a|sd.*|", "a|drbd.*|", "r|.*|"]
+----------------------------
+
+This filter expression accepts PV signatures found on any SCSI and
+DRBD devices, while rejecting (ignoring) all others.
+
+After modifying the `lvm.conf` file, you must run the
+indexterm:[LVM]indexterm:[vgscan (LVM command)]`vgscan` command so LVM
+discards its configuration cache and re-scans devices for PV
+signatures.
+--
+
+
+* Disable the LVM cache by setting:
++
+--
+[source,drbd]
+----------------------------
+write_cache_state = 0
+----------------------------
+
+After disabling the LVM cache, make sure you remove any stale cache
+entries by deleting `/etc/lvm/cache/.cache`.
+--
+
+* Now, you may initialize your two SCSI partitions as PVs:
+  indexterm:[LVM]indexterm:[pvcreate (LVM command)]
++
+----------------------------
+# pvcreate /dev/sda1
+Physical volume "/dev/sda1" successfully created
+# pvcreate /dev/sdb1
+Physical volume "/dev/sdb1" successfully created
+----------------------------
+
+* The next step is creating your low-level VG named `local`,
+consisting of the two PVs you just initialized:
+indexterm:[LVM]indexterm:[vgcreate (LVM command)]
++
+----------------------------
+# vgcreate local /dev/sda1 /dev/sdb1
+Volume group "local" successfully created
+----------------------------
+
+* Now you may create your Logical Volume to be used as DRBD's backing
+  device: indexterm:[LVM]indexterm:[lvcreate (LVM command)]
++
+----------------------------
+# lvcreate --name r0 --size 10G local
+Logical volume "r0" created
+----------------------------
+
+* Repeat all steps, up to this point, on the peer node.
+
+* Then, edit your `/etc/drbd.conf` to create a new resource named `r0`:
+  indexterm:[drbd.conf]
++
+--
+[source,drbd]
+----------------------------
+resource r0 {
+  device /dev/drbd0;
+  disk /dev/local/r0;
+  meta-disk internal;
+  on <host> { address <address>:<port>; }
+  on <host> { address <address>:<port>; }
+}
+----------------------------
+
+After you have created your new resource configuration, be sure to
+copy your `drbd.conf` contents to the peer node.
+--
+
+* After this, initialize your resource as described earlier in this
+  guide (on both nodes).
+
+* Then, promote your resource (on one node): indexterm:[drbdadm]
++
+----------------------------
+# drbdadm primary r0
+----------------------------
+
+* Now, on the node where you just promoted your resource, initialize
+  your DRBD device as a new Physical Volume:
++
+--
+indexterm:[LVM]indexterm:[pvcreate (LVM command)]
+----------------------------
+# pvcreate /dev/drbd0
+Physical volume "/dev/drbd0" successfully created
+----------------------------
+--
+
+* Create your VG named `replicated`, using the PV you just
+  initialized, on the same node: indexterm:[LVM]indexterm:[vgcreate
+  (LVM command)]
++
+----------------------------
+# vgcreate replicated /dev/drbd0
+Volume group "replicated" successfully created
+----------------------------
+
+* Finally, create your new Logical Volumes within this newly-created
+  VG: indexterm:[LVM]indexterm:[lvcreate (LVM command)]
++
+----------------------------
+# lvcreate --name foo --size 4G replicated
+Logical volume "foo" created
+# lvcreate --name bar --size 6G replicated
+Logical volume "bar" created
+----------------------------
+
+The Logical Volumes `foo` and `bar` will now be available as
+`/dev/replicated/foo` and `/dev/replicated/bar` on the local node.
+
+[[s-switching_the_vg_to_the_other_node]]
+==== Switching the VG to the other node ====
+
+To make them available on the other node, first issue the following
+sequence of commands on the primary node:
+indexterm:[LVM]indexterm:[vgchange (LVM command)]
+
+----------------------------
+# vgchange -a n replicated
+0 logical volume(s) in volume group "replicated" now active
+# drbdadm secondary r0
+----------------------------
+
+
+Then, issue these commands on the other (still secondary) node:
+indexterm:[drbdadm]indexterm:[LVM]indexterm:[vgchange (LVM command)]
+
+----------------------------
+# drbdadm primary r0
+# vgchange -a y replicated
+2 logical volume(s) in volume group "replicated" now active
+----------------------------
+
+After this, the block devices `/dev/replicated/foo` and
+`/dev/replicated/bar` will be available on the other (now primary) node.
+
+[[s-lvm-pacemaker]]
+
+=== Highly available LVM with Pacemaker
+
+The process of transferring volume groups between peers and making the
+corresponding logical volumes available can be automated. The
+Pacemaker `LVM` resource agent is designed for exactly that purpose.
+
+In order to put an existing, DRBD-backed volume group under Pacemaker
+management, run the following commands in the `crm` shell:
+
+.Pacemaker configuration for DRBD-backed LVM Volume Group
+----------------------------
+primitive p_drbd_r0 ocf:linbit:drbd \
+  params drbd_resource="r0" \
+  op monitor interval="29s" role="Master" \
+  op monitor interval="31s" role="Slave"
+ms ms_drbd_r0 p_drbd_r0 \
+  meta master-max="1" master-node-max="1" \
+       clone-max="2" clone-node-max="1" \
+       notify="true"
+primitive p_lvm_r0 ocf:heartbeat:LVM \
+  params volgrpname="r0"
+colocation c_lvm_on_drbd inf: p_lvm_r0 ms_drbd_r0:Master
+order o_drbd_before_lvm inf: ms_drbd_r0:promote p_lvm_r0:start
+commit
+----------------------------
+
+After you have committed this configuration, Pacemaker will
+automatically make the `r0` volume group available on whichever node
+currently has the Primary (Master) role for the DRBD resource.
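+
+To confirm where the volume group is currently active, you can query
+LVM on either node (a quick check, reusing the `r0` VG name from the
+example above):
+
+----------------------------
+# vgs r0
+----------------------------
+
+On the Secondary node, the same command will fail to find the VG,
+since the underlying DRBD device is not accessible there.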
diff -Nru drbd-doc-8.4~20151102/UG8.4/en/Makefile drbd-doc-8.4~20220106/UG8.4/en/Makefile
--- drbd-doc-8.4~20151102/UG8.4/en/Makefile 1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/UG8.4/en/Makefile 2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,4 @@
+lang=en
+
+include ../../UG-build-adoc.mk
+include ../../UG-build.mk
diff -Nru drbd-doc-8.4~20151102/UG8.4/en/man-pages.adoc drbd-doc-8.4~20220106/UG8.4/en/man-pages.adoc
--- drbd-doc-8.4~20151102/UG8.4/en/man-pages.adoc 1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/UG8.4/en/man-pages.adoc 2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,10 @@
+[[ap-man-pages]]
+[appendix]
+== DRBD system manual pages
+
+++++++++++++++++++++++++++
+
+
+
+++++++++++++++++++++++++++
diff -Nru drbd-doc-8.4~20151102/UG8.4/en/more-info.adoc drbd-doc-8.4~20220106/UG8.4/en/more-info.adoc
--- drbd-doc-8.4~20151102/UG8.4/en/more-info.adoc 1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/UG8.4/en/more-info.adoc 2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,64 @@
+[[ch-more-info]]
+== Getting more information
+
+[[s-commercial-support]]
+=== Commercial DRBD support
+
+Commercial DRBD support, consultancy, and training services are
+available from the project's sponsor company,
+http://www.linbit.com/[LINBIT].
+
+[[s-mailing-list]]
+=== Public mailing list
+
+The public mailing list for general usage questions regarding DRBD is
+drbd-user@lists.linbit.com. This is a subscribers-only mailing list;
+you may subscribe at https://lists.linbit.com/listinfo/drbd-user. A
+complete list archive is available at
+https://lists.linbit.com/pipermail/drbd-user.
+
+[[s-irc-channels]]
+=== Public IRC Channels
+
+Some of the DRBD developers can occasionally be found on the
+`irc.freenode.net` public IRC server, particularly in the following
+channels:
+
+* `#drbd`,
+* `#linux-ha`,
+* `#linux-cluster`.
+
+Getting in touch on IRC is a good way of discussing suggestions for
+improvements in DRBD, and of having developer-level discussions.
+
+[[s-twitter-account]]
+=== Official Twitter account
+
+http://www.linbit.com/[LINBIT] maintains an official
+http://twitter.com/linbit[Twitter account].
+
+If you tweet about DRBD, please include the `#drbd` hashtag.
+
+[[s-publications]]
+=== Publications
+
+DRBD's authors have written and published a number of papers on DRBD
+in general, or on specific aspects of DRBD. Here is a short selection:
+
+[bibliography]
+- Lars Ellenberg. 'DRBD v8.0.x and beyond'. 2007. Available at
+  http://www.drbd.org/fileadmin/drbd/publications/drbd8.linux-conf.eu.2007.pdf.
+- Philipp Reisner. 'DRBD v8 - Replicated Storage with Shared Disk
+  Semantics'. 2007. Available at
+  http://www.drbd.org/fileadmin/drbd/publications/drbd8.pdf.
+- Philipp Reisner. 'Rapid resynchronization for replicated
+  storage'. 2006. Available at
+  http://www.drbd.org/fileadmin/drbd/publications/drbd-activity-logging_v6.pdf.
+
+[[s-useful-resources]]
+=== Other useful resources
+
+* Wikipedia keeps http://en.wikipedia.org/wiki/DRBD[an entry on DRBD].
+* Both the http://wiki.linux-ha.org/[Linux-HA wiki] and
+  http://www.clusterlabs.org[ClusterLabs] have some useful information
+  about utilizing DRBD in High Availability clusters.
diff -Nru drbd-doc-8.4~20151102/UG8.4/en/ocfs2.adoc drbd-doc-8.4~20220106/UG8.4/en/ocfs2.adoc
--- drbd-doc-8.4~20151102/UG8.4/en/ocfs2.adoc 1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/UG8.4/en/ocfs2.adoc 2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,378 @@
+[[ch-ocfs2]]
+== Using OCFS2 with DRBD
+
+indexterm:[OCFS2]indexterm:[Oracle Cluster File System]This chapter
+outlines the steps necessary to set up a DRBD resource as a block
+device holding a shared Oracle Cluster File System, version 2 (OCFS2).
+
+
+[IMPORTANT]
+===============================
+All cluster file systems _require_ fencing - not only via the DRBD
+resource, but also STONITH! A faulty member _must_ be killed.
+
+You'll want these settings:
+
+  disk {
+    fencing resource-and-stonith;
+  }
+  handlers {
+    # Make sure the other node is confirmed
+    # dead after this!
+    fence-peer "/sbin/kill-other-node.sh";
+  }
+
+There must be _no_ volatile caches!
+You might take a few hints from the page at
+https://fedorahosted.org/cluster/wiki/DRBD_Cookbook,
+although that's about GFS2, not OCFS2.
+===============================
+
+
+
+[[s-ocfs2-primer]]
+=== OCFS2 primer
+
+The Oracle Cluster File System, version 2 (OCFS2) is a concurrent
+access shared storage file system developed by Oracle
+Corporation. Unlike its predecessor OCFS, which was specifically
+designed and only suitable for Oracle database payloads, OCFS2 is a
+general-purpose filesystem that implements most POSIX semantics. The
+most common use case for OCFS2 is arguably Oracle Real Application
+Cluster (RAC), but OCFS2 may also be used for load-balanced NFS
+clusters, for example.
+
+Although originally designed for use with conventional shared storage
+devices, OCFS2 is equally well suited to be deployed on dual-primary
+DRBD. Applications reading from the filesystem may benefit from
+reduced read latency due to the fact that DRBD reads from and writes
+to local storage, as opposed to the SAN devices OCFS2 otherwise
+normally runs on. In addition, DRBD adds redundancy to OCFS2 by
+maintaining an additional copy of every filesystem image, as opposed
+to a single filesystem image that is merely shared.
+
+Like other shared cluster file systems such as GFS, OCFS2
+allows multiple nodes to access the same storage device, in read/write
+mode, simultaneously without risking data corruption. It does so by
+using a Distributed Lock Manager (DLM) which manages concurrent access
+from cluster nodes. The DLM itself uses a virtual file system
+(`ocfs2_dlmfs`) which is separate from the actual OCFS2 file systems
+present on the system.
+
+OCFS2 may either use an intrinsic cluster communication layer to
+manage cluster membership and filesystem mount and unmount operations,
+or alternatively defer those tasks to the
+<<ch-pacemaker,Pacemaker>> cluster infrastructure.
+
+OCFS2 is available in SUSE Linux Enterprise Server (where it is the
+primarily supported shared cluster file system), CentOS, Debian
+GNU/Linux, and Ubuntu Server Edition. Oracle also provides packages
+for Red Hat Enterprise Linux (RHEL). This chapter assumes running
+OCFS2 on a SUSE Linux Enterprise Server system.
+
+[[s-ocfs2-create-resource]]
+=== Creating a DRBD resource suitable for OCFS2
+
+Since OCFS2 is a shared cluster file system expecting concurrent
+read/write storage access from all cluster nodes, any DRBD resource to
+be used for storing an OCFS2 filesystem must be configured in
+dual-primary mode. Also, it is recommended to use some of DRBD's
+features for automatic recovery from split brain. And it is necessary
+for the resource to switch to the primary role immediately after
+startup.
+To do all this, include the following lines in the resource
+configuration: indexterm:[drbd.conf]
+
+[source,drbd]
+----------------------------
+resource <resource> {
+  startup {
+    become-primary-on both;
+    ...
+  }
+  net {
+    # allow-two-primaries yes;
+    after-sb-0pri discard-zero-changes;
+    after-sb-1pri discard-secondary;
+    after-sb-2pri disconnect;
+    ...
+  }
+  ...
+}
+----------------------------
+
+[WARNING]
+===============================
+By setting auto-recovery policies, you are effectively configuring
+automatic data loss! Be sure you understand the implications.
+===============================
+
+
+It is not recommended to set the `allow-two-primaries` option to `yes`
+upon initial configuration. You should do so after the initial
+resource synchronization has completed.
+
+Once you have added these options to your freshly-configured resource,
+you may initialize your resource as you normally would. After you set
+the indexterm:[drbd.conf]`allow-two-primaries` option to `yes` for
+this resource, you will be able to switch the resource to the primary
+role on both nodes.
+
+[[s-ocfs2-create]]
+=== Creating an OCFS2 filesystem
+
+Now, use OCFS2's `mkfs` implementation to create the file system:
+
+----------------------------
+mkfs -t ocfs2 -N 2 -L ocfs2_drbd0 /dev/drbd0
+mkfs.ocfs2 1.4.0
+Filesystem label=ocfs2_drbd0
+Block size=1024 (bits=10)
+Cluster size=4096 (bits=12)
+Volume size=205586432 (50192 clusters) (200768 blocks)
+7 cluster groups (tail covers 4112 clusters, rest cover 7680 clusters)
+Journal size=4194304
+Initial number of node slots: 2
+Creating bitmaps: done
+Initializing superblock: done
+Writing system files: done
+Writing superblock: done
+Writing backup superblock: 0 block(s)
+Formatting Journals: done
+Writing lost+found: done
+mkfs.ocfs2 successful
+----------------------------
+
+This will create an OCFS2 file system with two node slots on
+`/dev/drbd0`, and set the filesystem label to `ocfs2_drbd0`. You may
+specify other options on `mkfs` invocation; please see the `mkfs.ocfs2`
+system manual page for details.
+
+[[s-ocfs2-pacemaker]]
+=== Pacemaker OCFS2 management
+
+[[s-ocfs2-pacemaker-drbd]]
+==== Adding a Dual-Primary DRBD resource to Pacemaker
+
+An existing dual-primary DRBD resource may be added to Pacemaker
+resource management with the following `crm` configuration:
+
+[source,drbd]
+----------------------------
+primitive p_drbd_ocfs2 ocf:linbit:drbd \
+  params drbd_resource="ocfs2"
+ms ms_drbd_ocfs2 p_drbd_ocfs2 \
+  meta master-max=2 clone-max=2 notify=true
+----------------------------
+
+IMPORTANT: Note the `master-max=2` meta variable; it enables
+dual-Master mode for a Pacemaker master/slave set. This requires that
+`allow-two-primaries` is also set to `yes` in the DRBD
+configuration. Otherwise, Pacemaker will flag a configuration error
+during resource validation.
+
+[[s-ocfs2-pacemaker-mgmtdaemons]]
+==== Adding OCFS2 management capability to Pacemaker
+
+In order to manage OCFS2 and the kernel Distributed Lock Manager
+(DLM), Pacemaker uses a total of three different resource agents:
+
+* `ocf:pacemaker:controld` -- Pacemaker's interface to the DLM;
+
+* `ocf:ocfs2:o2cb` -- Pacemaker's interface to OCFS2 cluster
+  management;
+
+* `ocf:heartbeat:Filesystem` -- the generic filesystem management
+  resource agent which supports cluster file systems when configured
+  as a Pacemaker clone.
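+
+Before proceeding, you may want to confirm that these resource agents
+are actually installed on your nodes (a quick check using the crm
+shell; each command prints the agent's metadata if it is available):
+
+----------------------------
+# crm ra info ocf:pacemaker:controld
+# crm ra info ocf:ocfs2:o2cb
+----------------------------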
+
+You may enable all nodes in a Pacemaker cluster for OCFS2 management
+by creating a _cloned group_ of resources, with the following
+`crm` configuration:
+
+[source,drbd]
+----------------------------
+primitive p_controld ocf:pacemaker:controld
+primitive p_o2cb ocf:ocfs2:o2cb
+group g_ocfs2mgmt p_controld p_o2cb
+clone cl_ocfs2mgmt g_ocfs2mgmt meta interleave=true
+----------------------------
+
+Once this configuration is committed, Pacemaker will start instances
+of the `controld` and `o2cb` resource types on all nodes in the cluster.
+
+[[s-ocfs2-pacemaker-fs]]
+==== Adding an OCFS2 filesystem to Pacemaker
+
+Pacemaker manages OCFS2 filesystems using the conventional
+`ocf:heartbeat:Filesystem` resource agent, albeit in clone mode. To
+put an OCFS2 filesystem under Pacemaker management, use the following
+`crm` configuration:
+
+[source,drbd]
+----------------------------
+primitive p_fs_ocfs2 ocf:heartbeat:Filesystem \
+  params device="/dev/drbd/by-res/ocfs2/0" directory="/srv/ocfs2" \
+         fstype="ocfs2" options="rw,noatime"
+clone cl_fs_ocfs2 p_fs_ocfs2
+----------------------------
+
+NOTE: This example assumes a single-volume resource.
+
+[[s-ocfs2-pacemaker-constraints]]
+==== Adding required Pacemaker constraints to manage OCFS2 filesystems
+
+In order to tie all OCFS2-related resources and clones together, add
+the following constraints to your Pacemaker configuration:
+
+[source,drbd]
+----------------------------
+order o_ocfs2 inf: ms_drbd_ocfs2:promote cl_ocfs2mgmt:start cl_fs_ocfs2:start
+colocation c_ocfs2 inf: cl_fs_ocfs2 cl_ocfs2mgmt ms_drbd_ocfs2:Master
+----------------------------
+
+[[s-ocfs2-legacy]]
+=== Legacy OCFS2 management (without Pacemaker)
+
+IMPORTANT: The information presented in this section applies to legacy
+systems where OCFS2 DLM support is not available in Pacemaker. It is
+preserved here for reference purposes only. New installations should
+always use the <<s-ocfs2-pacemaker,Pacemaker>> approach.
+
+[[s-ocfs2-enable]]
+==== Configuring your cluster to support OCFS2
+
+[[s-ocfs2-create-cluster-conf]]
+===== Creating the configuration file
+
+OCFS2 uses a central configuration file, `/etc/ocfs2/cluster.conf`.
+
+When creating your OCFS2 cluster, be sure to add both your hosts to
+the cluster configuration. The default port (7777) is usually an
+acceptable choice for cluster interconnect communications. If you
+choose any other port number, be sure to choose one that does not
+clash with an existing port used by DRBD (or any other configured
+TCP/IP service).
+
+If you feel less than comfortable editing the `cluster.conf` file
+directly, you may also use the `ocfs2console` graphical configuration
+utility which is usually more convenient. Regardless of the approach
+you choose, your `/etc/ocfs2/cluster.conf` file contents should look
+roughly like this:
+
+[source,drbd]
+----------------------------
+node:
+    ip_port = 7777
+    ip_address = 10.1.1.31
+    number = 0
+    name = alice
+    cluster = ocfs2
+
+node:
+    ip_port = 7777
+    ip_address = 10.1.1.32
+    number = 1
+    name = bob
+    cluster = ocfs2
+
+cluster:
+    node_count = 2
+    name = ocfs2
+----------------------------
+
+
+When you have created your cluster configuration, use `scp` to
+distribute it to both nodes in the cluster.
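+
+For example (a minimal sketch; `bob` stands in for your peer node's
+host name):
+
+----------------------------
+# scp /etc/ocfs2/cluster.conf bob:/etc/ocfs2/
+----------------------------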
+
+[[s-configure-o2cb-driver]]
+===== Configuring the O2CB driver
+
+[[s-suse_linux_enterprise_systems]]
+====== SUSE Linux Enterprise systems
+
+On SLES, you may utilize the `configure` option of the `o2cb` init
+script:
+
+----------------------------
+/etc/init.d/o2cb configure
+Configuring the O2CB driver.
+
+This will configure the on-boot properties of the O2CB driver.
+The following questions will determine whether the driver is loaded on
+boot. The current values will be shown in brackets ('[]'). Hitting
+<ENTER> without typing an answer will keep that current value. Ctrl-C
+will abort.
+
+Load O2CB driver on boot (y/n) [y]:
+Cluster to start on boot (Enter "none" to clear) [ocfs2]:
+Specify heartbeat dead threshold (>=7) [31]:
+Specify network idle timeout in ms (>=5000) [30000]:
+Specify network keepalive delay in ms (>=1000) [2000]:
+Specify network reconnect delay in ms (>=2000) [2000]:
+Use user-space driven heartbeat? (y/n) [n]:
+Writing O2CB configuration: OK
+Loading module "configfs": OK
+Mounting configfs filesystem at /sys/kernel/config: OK
+Loading module "ocfs2_nodemanager": OK
+Loading module "ocfs2_dlm": OK
+Loading module "ocfs2_dlmfs": OK
+Mounting ocfs2_dlmfs filesystem at /dlm: OK
+Starting O2CB cluster ocfs2: OK
+----------------------------
+
+[[s-_debian_gnu_linux_systems]]
+====== Debian GNU/Linux systems
+
+On Debian, the `configure` option to `/etc/init.d/o2cb` is not
+available. Instead, reconfigure the `ocfs2-tools` package to enable the
+driver:
+
+----------------------------
+dpkg-reconfigure -p medium -f readline ocfs2-tools
+Configuring ocfs2-tools
+Would you like to start an OCFS2 cluster (O2CB) at boot time? yes
+Name of the cluster to start at boot time: ocfs2
+The O2CB heartbeat threshold sets up the maximum time in seconds that a node
+awaits for an I/O operation. After it, the node "fences" itself, and you will
+probably see a crash.
+
+It is calculated as the result of: (threshold - 1) x 2.
+
+Its default value is 31 (60 seconds).
+
+Raise it if you have slow disks and/or crashes with kernel messages like:
+
+o2hb_write_timeout: 164 ERROR: heartbeat write timeout to device XXXX after NNNN
+milliseconds
+O2CB Heartbeat threshold: 31
+Loading filesystem "configfs": OK
+Mounting configfs filesystem at /sys/kernel/config: OK
+Loading stack plugin "o2cb": OK
+Loading filesystem "ocfs2_dlmfs": OK
+Mounting ocfs2_dlmfs filesystem at /dlm: OK
+Setting cluster stack "o2cb": OK
+Starting O2CB cluster ocfs2: OK
+----------------------------
+
+[[s-ocfs2-use]]
+==== Using your OCFS2 filesystem
+
+When you have completed cluster configuration and created your file
+system, you may mount it as any other file system:
+
+----------------------------
+mount -t ocfs2 /dev/drbd0 /shared
+----------------------------
+
+Your kernel log (accessible by issuing the command `dmesg`) should
+then contain a line similar to this one:
+
+----------------------------
+ocfs2: Mounting device (147,0) on (node 0, slot 0) with ordered data mode.
+----------------------------
+
+From that point forward, you should be able to simultaneously mount
+your OCFS2 filesystem on both your nodes, in read/write mode.
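+
+If you want to double-check which nodes currently have the filesystem
+mounted, the `ocfs2-tools` package ships a small helper for that
+purpose (assuming a standard ocfs2-tools installation):
+
+----------------------------
+# mounted.ocfs2 -f /dev/drbd0
+----------------------------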
diff -Nru drbd-doc-8.4~20151102/UG8.4/en/pacemaker.adoc drbd-doc-8.4~20220106/UG8.4/en/pacemaker.adoc
--- drbd-doc-8.4~20151102/UG8.4/en/pacemaker.adoc 1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/UG8.4/en/pacemaker.adoc 2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,668 @@
+[[ch-pacemaker]]
+== Integrating DRBD with Pacemaker clusters
+
+indexterm:[Pacemaker]Using DRBD in conjunction with the Pacemaker
+cluster stack is arguably DRBD's most frequently found use
+case. Pacemaker is also one of the applications that make DRBD
+extremely powerful in a wide variety of usage scenarios.
+
+[[s-pacemaker-primer]]
+=== Pacemaker primer
+
+Pacemaker is a sophisticated, feature-rich, and widely deployed
+cluster resource manager for the Linux platform. It comes with a rich
+set of documentation. In order to understand this chapter, reading the
+following documents is highly recommended:
+
+* http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf[Clusters
+  From Scratch], a step-by-step guide to configuring high availability
+  clusters;
+* http://crmsh.github.io/documentation/index.html[CRM CLI (command line
+  interface) tool], a manual for the CRM shell, a simple and intuitive
+  command line interface bundled with Pacemaker;
+* http://www.clusterlabs.org/doc/en-US/Pacemaker/1.0/html/Pacemaker_Explained/s-intro-pacemaker.html[Pacemaker
+  Configuration Explained], a reference document explaining the
+  concept and design behind Pacemaker.
+
+
+[[s-pacemaker-crm-drbd-backed-service]]
+=== Adding a DRBD-backed service to the cluster configuration
+
+This section explains how to enable a DRBD-backed service in a
+Pacemaker cluster.
+
+NOTE: If you are employing the DRBD OCF resource agent, it is
+recommended that you defer DRBD startup, shutdown, promotion, and
+demotion _exclusively_ to the OCF resource agent. That means that you
+should disable the DRBD init script:
+
+----------------------------
+chkconfig drbd off
+----------------------------
+
+The `ocf:linbit:drbd` OCF resource agent provides Master/Slave
+capability, allowing Pacemaker to start and monitor the DRBD resource
+on multiple nodes, and to promote and demote it as needed. You must,
+however, understand that the `drbd` RA disconnects and detaches all
+DRBD resources it manages on Pacemaker shutdown, and also upon
+enabling standby mode for a node.
+
+
+IMPORTANT: The OCF resource agent which ships with DRBD belongs to the
+`linbit` provider, and hence installs as
+`/usr/lib/ocf/resource.d/linbit/drbd`. There is a legacy resource
+agent that ships as part of the OCF resource agents package, which
+uses the `heartbeat` provider and installs into
+`/usr/lib/ocf/resource.d/heartbeat/drbd`. The legacy OCF RA is
+deprecated and should no longer be used.
+
+In order to enable a DRBD-backed configuration for a MySQL database in
+a Pacemaker CRM cluster with the `drbd` OCF resource agent, you must
+create both the necessary resources and the Pacemaker constraints to
+ensure your service only starts on a previously promoted DRBD
+resource.
+You may do so using the `crm` shell, as outlined in the
+following example:
+
+.Pacemaker configuration for DRBD-backed MySQL service
+----------------------------
+crm configure
+crm(live)configure# primitive drbd_mysql ocf:linbit:drbd \
+                    params drbd_resource="mysql" \
+                    op monitor interval="29s" role="Master" \
+                    op monitor interval="31s" role="Slave"
+crm(live)configure# ms ms_drbd_mysql drbd_mysql \
+                    meta master-max="1" master-node-max="1" \
+                         clone-max="2" clone-node-max="1" \
+                         notify="true"
+crm(live)configure# primitive fs_mysql ocf:heartbeat:Filesystem \
+                    params device="/dev/drbd/by-res/mysql" \
+                      directory="/var/lib/mysql" fstype="ext3"
+crm(live)configure# primitive ip_mysql ocf:heartbeat:IPaddr2 \
+                    params ip="10.9.42.1" nic="eth0"
+crm(live)configure# primitive mysqld lsb:mysqld
+crm(live)configure# group mysql fs_mysql ip_mysql mysqld
+crm(live)configure# colocation mysql_on_drbd \
+                      inf: mysql ms_drbd_mysql:Master
+crm(live)configure# order mysql_after_drbd \
+                      inf: ms_drbd_mysql:promote mysql:start
+crm(live)configure# commit
+crm(live)configure# exit
+bye
+----------------------------
+
+After this, your configuration should be enabled. Pacemaker now
+selects a node on which it promotes the DRBD resource, and then starts
+the DRBD-backed resource group on that same node.
+
+[[s-pacemaker-fencing]]
+=== Using resource-level fencing in Pacemaker clusters
+
+This section outlines the steps necessary to prevent Pacemaker from
+promoting a `drbd` Master/Slave resource when its DRBD replication link
+has been interrupted. This keeps Pacemaker from starting a service
+with outdated data and causing an unwanted "time warp" in the
+process.
+
+In order to enable any resource-level fencing for DRBD, you must add
+the following lines to your resource configuration:
+
+[source,drbd]
+----------------------------
+resource <resource> {
+  disk {
+    fencing resource-only;
+    ...
+  }
+}
+----------------------------
+
+You will also have to make changes to the `handlers` section depending
+on the cluster infrastructure being used:
+
+* Heartbeat-based Pacemaker clusters can employ the configuration
+  outlined in <<s-pacemaker-fencing-dopd>>.
+* Both Corosync- and Heartbeat-based clusters can use the
+  functionality explained in <<s-pacemaker-fencing-cib>>.
+
+IMPORTANT: It is absolutely vital to configure at least two
+independent cluster communications channels for this functionality to
+work correctly. Heartbeat-based Pacemaker clusters should define at
+least two cluster communication links in their `ha.cf` configuration
+files. Corosync clusters should list at least two redundant rings in
+`corosync.conf`.
+
+[[s-pacemaker-fencing-dopd]]
+==== Resource-level fencing with `dopd`
+
+indexterm:[dopd]In Heartbeat-based Pacemaker clusters, DRBD can
+use a resource-level fencing facility named the _DRBD outdate-peer
+daemon_, or `dopd` for short.
+
+
+[[s-dopd-heartbeat-config]]
+===== Heartbeat configuration for `dopd`
+
+To enable dopd, you must add these lines to your indexterm:[ha.cf
+(Heartbeat configuration file)]`/etc/ha.d/ha.cf` file:
+
+[source,drbd]
+----------------------------
+respawn hacluster /usr/lib/heartbeat/dopd
+apiauth dopd gid=haclient uid=hacluster
+----------------------------
+
+You may have to adjust ``dopd``'s path according to your preferred
+distribution. On some distributions and architectures, the correct
+path is `/usr/lib64/heartbeat/dopd`.
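+
+If you are unsure which path applies to your system, you can simply
+check both candidate locations (output will vary by distribution):
+
+----------------------------
+# ls -l /usr/lib/heartbeat/dopd /usr/lib64/heartbeat/dopd
+----------------------------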
+
+After you have made this change and copied `ha.cf` to the peer node,
+put Pacemaker in maintenance mode and run `/etc/init.d/heartbeat
+reload` to have Heartbeat re-read its configuration file. Afterwards,
+you should be able to verify that you now have a running `dopd`
+process.
+
+NOTE: You can check for this process either by running `ps ax | grep
+dopd` or by issuing `killall -0 dopd`.
+
+
+[[s-dopd-drbd-config]]
+===== DRBD Configuration for `dopd`
+
+Once `dopd` is running, add these items to your DRBD resource
+configuration:
+
+[source,drbd]
+----------------------------
+resource <resource> {
+  handlers {
+    fence-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
+    ...
+  }
+  disk {
+    fencing resource-only;
+    ...
+  }
+  ...
+}
+----------------------------
+
+As with `dopd`, your distribution may place the `drbd-peer-outdater`
+binary in `/usr/lib64/heartbeat` depending on your system
+architecture.
+
+Finally, copy your `drbd.conf` to the peer node and issue `drbdadm
+adjust <resource>` to reconfigure your resource and reflect your
+changes.
+
+[[s-dopd-test]]
+===== Testing `dopd` functionality
+
+To test whether your `dopd` setup is working correctly, interrupt the
+replication link of a configured and connected resource while
+Heartbeat services are running normally. You may do so simply by
+physically unplugging the network link, but that is fairly
+invasive. Instead, you may insert a temporary `iptables` rule to drop
+incoming DRBD traffic to the TCP port used by your resource.
+
+After this, you will be able to observe the resource's connection
+state change from
+indexterm:[connection state]indexterm:[Connected (connection state)]
+_Connected_ to indexterm:[connection state]indexterm:[WFConnection
+(connection state)]_WFConnection_. Allow a few seconds to pass, and
+you should see the disk state become indexterm:[disk
+state]indexterm:[Outdated (disk state)]__Outdated__/__DUnknown__. That is
+what `dopd` is responsible for.
+
+Any attempt to switch the outdated resource to the primary role will
+fail after this.
+
+When re-instituting network connectivity (either by plugging the
+physical link back in or by removing the temporary `iptables` rule you
+inserted previously), the connection state will change to _Connected_,
+and then promptly to _SyncTarget_ (assuming changes occurred on the
+primary node during the network interruption). Then you will be able
+to observe a brief synchronization period, and finally, the previously
+outdated resource will be marked as indexterm:[disk
+state]indexterm:[UpToDate (disk state)]_UpToDate_ again.
+
+
+[[s-pacemaker-fencing-cib]]
+==== Resource-level fencing using the Cluster Information Base (CIB)
+
+In order to enable resource-level fencing for Pacemaker, you will have
+to set two options in `drbd.conf`:
+
+[source,drbd]
+----------------------------
+resource <resource> {
+  disk {
+    fencing resource-only;
+    ...
+  }
+  handlers {
+    fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
+    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
+    ...
+  }
+  ...
+}
+----------------------------
+
+Thus, if the DRBD replication link becomes disconnected, the
+`crm-fence-peer.sh` script contacts the cluster manager, determines the
+Pacemaker Master/Slave resource associated with this DRBD resource,
+and ensures that the Master/Slave resource no longer gets promoted on
+any node other than the currently active one.
+Conversely, when the
+connection is re-established and DRBD completes its synchronization
+process, then that constraint is removed and the cluster manager is
+free to promote the resource on any node again.
+
+[[s-pacemaker-stacked-resources]]
+=== Using stacked DRBD resources in Pacemaker clusters
+
+Stacked resources allow DRBD to be used for multi-level redundancy in
+multiple-node clusters, or to establish off-site disaster recovery
+capability. This section describes how to configure DRBD and Pacemaker
+in such configurations.
+
+[[s-pacemaker-stacked-dr]]
+==== Adding off-site disaster recovery to Pacemaker clusters
+
+In this configuration scenario, we would deal with a two-node high
+availability cluster in one site, plus a separate node which would
+presumably be housed off-site. The third node acts as a disaster
+recovery node and is a standalone server. Consider the following
+illustration to describe the concept.
+
+.DRBD resource stacking in Pacemaker clusters
+image::images/drbd-resource-stacking-pacemaker-3nodes.svg[]
+
+In this example, 'alice' and 'bob' form a two-node Pacemaker cluster,
+whereas 'charlie' is an off-site node not managed by Pacemaker.
+
+To create such a configuration, you would first configure and
+initialize DRBD resources as described earlier in this guide. Then,
+configure Pacemaker with the following CRM configuration:
+
+[source,drbd]
+----------------------------
+primitive p_drbd_r0 ocf:linbit:drbd \
+  params drbd_resource="r0"
+
+primitive p_drbd_r0-U ocf:linbit:drbd \
+  params drbd_resource="r0-U"
+
+primitive p_ip_stacked ocf:heartbeat:IPaddr2 \
+  params ip="192.168.42.1" nic="eth0"
+
+ms ms_drbd_r0 p_drbd_r0 \
+  meta master-max="1" master-node-max="1" \
+       clone-max="2" clone-node-max="1" \
+       notify="true" globally-unique="false"
+
+ms ms_drbd_r0-U p_drbd_r0-U \
+  meta master-max="1" clone-max="1" \
+       clone-node-max="1" master-node-max="1" \
+       notify="true" globally-unique="false"
+
+colocation c_drbd_r0-U_on_drbd_r0 \
+  inf: ms_drbd_r0-U ms_drbd_r0:Master
+
+colocation c_drbd_r0-U_on_ip \
+  inf: ms_drbd_r0-U p_ip_stacked
+
+colocation c_ip_on_r0_master \
+  inf: p_ip_stacked ms_drbd_r0:Master
+
+order o_ip_before_r0-U \
+  inf: p_ip_stacked ms_drbd_r0-U:start
+
+order o_drbd_r0_before_r0-U \
+  inf: ms_drbd_r0:promote ms_drbd_r0-U:start
+----------------------------
+
+Assuming you created this configuration in a temporary file named
+`/tmp/crm.txt`, you may import it into the live cluster configuration
+with the following command:
+
+----------------------------
+crm configure < /tmp/crm.txt
+----------------------------
+
+This configuration will ensure that the following actions occur in the
+correct order on the 'alice' and 'bob' cluster:
+
+. Pacemaker starts the DRBD resource `r0` on both cluster nodes, and
+  promotes one node to the Master (DRBD Primary) role.
+
+. Pacemaker then starts the IP address 192.168.42.1, which the stacked
+  resource is to use for replication to the third node. It does so on
+  the node it has previously promoted to the Master role for the `r0`
+  DRBD resource.
+
+. On the node which now has the Primary role for `r0` and also the
+  replication IP address for `r0-U`, Pacemaker now starts the
+  `r0-U` DRBD resource, which connects and replicates to the off-site
+  node.
+
+. Pacemaker then promotes the `r0-U` resource to the Primary role too,
+  so it can be used by an application.
+
+Thus, this Pacemaker configuration ensures not only full data
+redundancy between cluster nodes, but also replication to the third,
+off-site node.
+
+NOTE: This type of setup is usually deployed together with DRBD Proxy.
+
+[[s-pacemaker-stacked-4way]]
+==== Using stacked resources to achieve 4-way redundancy in Pacemaker clusters
+
+In this configuration, a total of three DRBD resources (two unstacked,
+one stacked) are used to achieve 4-way storage redundancy. This means
+that of a 4-node cluster, up to three nodes can fail while still
+providing service availability.
+
+Consider the following illustration to explain the concept.
+
+.DRBD resource stacking in Pacemaker clusters
+image::images/drbd-resource-stacking-pacemaker-4nodes.svg[]
+
+In this example, 'alice', 'bob', 'charlie', and 'daisy' form two
+two-node Pacemaker clusters. 'alice' and 'bob' form the cluster named
+'left' and replicate data using a DRBD resource between them, while
+'charlie' and 'daisy' do the same with a separate DRBD resource, in a
+cluster named 'right'. A third, stacked DRBD resource connects the two
+clusters.
+
+NOTE: Due to limitations in the Pacemaker cluster manager as of
+Pacemaker version 1.0.5, it is not possible to create this setup in a
+single four-node cluster without disabling CIB validation, which is an
+advanced process not recommended for general-purpose use. It is
+anticipated that this will be addressed in future Pacemaker releases.
+
+To create such a configuration, you would first configure and
+initialize DRBD resources as described earlier in this guide (except
+that the remote half of the DRBD configuration is also stacked, not
+just the local cluster). Then, configure Pacemaker with the following
+CRM configuration, starting with the cluster 'left':
+
+[source,drbd]
+----------------------------
+primitive p_drbd_left ocf:linbit:drbd \
+  params drbd_resource="left"
+
+primitive p_drbd_stacked ocf:linbit:drbd \
+  params drbd_resource="stacked"
+
+primitive p_ip_stacked_left ocf:heartbeat:IPaddr2 \
+  params ip="10.9.9.100" nic="eth0"
+
+ms ms_drbd_left p_drbd_left \
+  meta master-max="1" master-node-max="1" \
+       clone-max="2" clone-node-max="1" \
+       notify="true"
+
+ms ms_drbd_stacked p_drbd_stacked \
+  meta master-max="1" clone-max="1" \
+       clone-node-max="1" master-node-max="1" \
+       notify="true" target-role="Master"
+
+colocation c_ip_on_left_master \
+  inf: p_ip_stacked_left ms_drbd_left:Master
+
+colocation c_drbd_stacked_on_ip_left \
+  inf: ms_drbd_stacked p_ip_stacked_left
+
+order o_ip_before_stacked_left \
+  inf: p_ip_stacked_left ms_drbd_stacked:start
+
+order o_drbd_left_before_stacked_left \
+  inf: ms_drbd_left:promote ms_drbd_stacked:start
+
+----------------------------
+
+Assuming you created this configuration in a temporary file named
+`/tmp/crm.txt`, you may import it into the live cluster configuration
+with the following command:
+
+----------------------------
+crm configure < /tmp/crm.txt
+----------------------------
+
+After adding this configuration to the CIB, Pacemaker will execute the
+following actions:
+
+. Bring up the DRBD resource 'left' replicating between 'alice' and
+  'bob', promoting the resource to the Master role on one of these
+  nodes.
+
+. Bring up the IP address 10.9.9.100 (on either 'alice' or 'bob',
+  depending on which of these holds the Master role for the resource
+  'left').
+
+. Bring up the DRBD resource `stacked` on the same node that holds the
+  just-configured IP address.
+
+. Promote the stacked DRBD resource to the Primary role.
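+
+At this point you can verify on the cluster 'left' that the stacked
+resource has in fact been promoted, for example with `crm_mon` (a
+quick check; the exact output depends on your cluster):
+
+----------------------------
+# crm_mon -1
+----------------------------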
+
+Now, proceed on the cluster 'right' by creating the following
+configuration:
+
+[source,drbd]
+----------------------------
+primitive p_drbd_right ocf:linbit:drbd \
+  params drbd_resource="right"
+
+primitive p_drbd_stacked ocf:linbit:drbd \
+  params drbd_resource="stacked"
+
+primitive p_ip_stacked_right ocf:heartbeat:IPaddr2 \
+  params ip="10.9.10.101" nic="eth0"
+
+ms ms_drbd_right p_drbd_right \
+  meta master-max="1" master-node-max="1" \
+       clone-max="2" clone-node-max="1" \
+       notify="true"
+
+ms ms_drbd_stacked p_drbd_stacked \
+  meta master-max="1" clone-max="1" \
+       clone-node-max="1" master-node-max="1" \
+       notify="true" target-role="Slave"
+
+colocation c_drbd_stacked_on_ip_right \
+  inf: ms_drbd_stacked p_ip_stacked_right
+
+colocation c_ip_on_right_master \
+  inf: p_ip_stacked_right ms_drbd_right:Master
+
+order o_ip_before_stacked_right \
+  inf: p_ip_stacked_right ms_drbd_stacked:start
+
+order o_drbd_right_before_stacked_right \
+  inf: ms_drbd_right:promote ms_drbd_stacked:start
+----------------------------
+
+After adding this configuration to the CIB, Pacemaker will execute the
+following actions:
+
+. Bring up the DRBD resource 'right' replicating between 'charlie' and
+  'daisy', promoting the resource to the Master role on one of these
+  nodes.
+
+. Bring up the IP address 10.9.10.101 (on either 'charlie' or 'daisy',
+  depending on which of these holds the Master role for the resource
+  'right').
+
+. Bring up the DRBD resource `stacked` on the same node that holds the
+  just-configured IP address.
+
+. Leave the stacked DRBD resource in the Secondary role (due to
+  `target-role="Slave"`).
+
+[[s-pacemaker-floating-peers]]
+=== Configuring DRBD to replicate between two SAN-backed Pacemaker clusters
+
+This is a somewhat advanced setup usually employed in split-site
+configurations. It involves two separate Pacemaker clusters, where
+each cluster has access to a separate Storage Area Network (SAN). DRBD
+is then used to replicate data stored on that SAN, across an IP link
+between sites.
+
+Consider the following illustration to describe the concept.
+
+.Using DRBD to replicate between SAN-based clusters
+image::images/drbd-pacemaker-floating-peers.svg[]
+
+Which of the individual nodes in each site currently acts as the DRBD
+peer is not explicitly defined -- the DRBD peers are said to _float_;
+that is, DRBD binds to virtual IP addresses not tied to a specific
+physical machine.
+
+
+NOTE: This type of setup is usually deployed together with DRBD Proxy
+and/or truck based replication.
+
+Since this type of setup deals with shared storage, configuring and
+testing STONITH is absolutely vital for it to work properly.
+
+[[s-pacemaker-floating-peers-drbd-config]]
+==== DRBD resource configuration
+
+To enable your DRBD resource to float, configure it in `drbd.conf` in
+the following fashion:
+
+[source,drbd]
+----------------------------
+resource <resource> {
+  ...
+  device /dev/drbd0;
+  disk /dev/sda1;
+  meta-disk internal;
+  floating 10.9.9.100:7788;
+  floating 10.9.10.101:7788;
+}
+----------------------------
+
+The `floating` keyword replaces the `on <host>` sections normally
+found in the resource configuration. In this mode, DRBD identifies
+peers by IP address and TCP port, rather than by host name. It is
+important to note that the addresses specified must be virtual cluster
+IP addresses, rather than physical node IP addresses, for floating to
+function properly.
+
+[[s-pacemaker-floating-peers-crm-config]]
+==== Pacemaker resource configuration
+
+A DRBD floating peers setup, in terms of Pacemaker configuration,
+involves the following items (in each of the two Pacemaker clusters
+involved):
+
+* A virtual cluster IP address.
+
+* A master/slave DRBD resource (using the DRBD OCF resource agent).
+
+* Pacemaker constraints ensuring that resources are started on the
+  correct nodes, and in the correct order.
+
+To configure a resource named `mysql` in a floating peers
+configuration in a 2-node cluster, using the replication address
+`10.9.9.100`, configure Pacemaker with the following `crm` commands:
+
+----------------------------
+crm configure
+crm(live)configure# primitive p_ip_float_left ocf:heartbeat:IPaddr2 \
+                      params ip=10.9.9.100
+crm(live)configure# primitive p_drbd_mysql ocf:linbit:drbd \
+                      params drbd_resource=mysql
+crm(live)configure# ms ms_drbd_mysql p_drbd_mysql \
+                      meta master-max="1" master-node-max="1" \
+                      clone-max="1" clone-node-max="1" \
+                      notify="true" target-role="Master"
+crm(live)configure# order drbd_after_left \
+                      inf: p_ip_float_left ms_drbd_mysql
+crm(live)configure# colocation drbd_on_left \
+                      inf: ms_drbd_mysql p_ip_float_left
+crm(live)configure# commit
+bye
+----------------------------
+
+After adding this configuration to the CIB, Pacemaker will execute the
+following actions:
+
+. Bring up the IP address 10.9.9.100 (on either 'alice' or 'bob').
+. Bring up the DRBD resource on the node holding the just-configured
+  IP address.
+. Promote the DRBD resource to the Primary role.
+
+Then, in order to create the matching configuration in the other
+cluster, configure _that_ Pacemaker instance with the following
+commands:
+
+----------------------------
+crm configure
+crm(live)configure# primitive p_ip_float_right ocf:heartbeat:IPaddr2 \
+                      params ip=10.9.10.101
+crm(live)configure# primitive p_drbd_mysql ocf:linbit:drbd \
+                      params drbd_resource=mysql
+crm(live)configure# ms ms_drbd_mysql p_drbd_mysql \
+                      meta master-max="1" master-node-max="1" \
+                      clone-max="1" clone-node-max="1" \
+                      notify="true" target-role="Slave"
+crm(live)configure# order drbd_after_right \
+                      inf: p_ip_float_right ms_drbd_mysql
+crm(live)configure# colocation drbd_on_right \
+                      inf: ms_drbd_mysql p_ip_float_right
+crm(live)configure# commit
+bye
+----------------------------
+
+After adding this configuration to the CIB, Pacemaker will execute the
+following actions:
+
+. Bring up the IP address 10.9.10.101 (on either 'charlie' or
+  'daisy').
+. Bring up the DRBD resource on the node holding the just-configured
+  IP address.
+. Leave the DRBD resource in the Secondary role (due to
+  `target-role="Slave"`).
+
+[[s-pacemaker-floating-peers-site-fail-over]]
+==== Site fail-over
+
+In split-site configurations, it may be necessary to transfer services
+from one site to another. This may be a consequence of a scheduled
+transition, or of a disastrous event. In case the transition is a
+normal, anticipated event, the recommended course of action is this:
+
+* Connect to the cluster on the site about to relinquish resources,
+  and change the affected DRBD resource's `target-role` attribute from
+  _Master_ to _Slave_. This will shut down any resources depending on
+  the Primary role of the DRBD resource, demote it, and leave it
+  running, ready to receive updates from a new Primary.
+
+* Connect to the cluster on the site about to take over resources, and
+  change the affected DRBD resource's `target-role` attribute from
+  _Slave_ to _Master_. This will promote the DRBD resources, start any
+  other Pacemaker resources depending on the Primary role of the DRBD
+  resource, and replicate updates to the remote site.
+
+* To fail back, simply reverse the procedure.
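+
+In terms of concrete commands, such a `target-role` change amounts to
+a single `crm` invocation per site. The following sketch assumes the
+`ms_drbd_mysql` resource from the preceding example; `crm resource
+meta` is one way of manipulating a resource's meta attributes.
+
+----------------------------
+# On the site relinquishing resources (demote):
+crm resource meta ms_drbd_mysql set target-role Slave
+# On the site taking over (promote):
+crm resource meta ms_drbd_mysql set target-role Master
+----------------------------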
+
+In the event of a catastrophic outage on the active site, it must be
+expected that the site is offline and no longer replicating to the
+backup site. In such an event:
+
+* Connect to the cluster on the still-functioning site, and change the
+  affected DRBD resource's `target-role` attribute from _Slave_ to
+  _Master_. This will promote the DRBD resources, and start any other
+  Pacemaker resources depending on the Primary role of the DRBD
+  resource.
+
+* When the original site is restored or rebuilt, you may connect the
+  DRBD resources again, and subsequently fail back using the reverse
+  procedure.
+
+// Keep the empty line before this comment, otherwise the next chapter is folded into this
+
diff -Nru drbd-doc-8.4~20151102/UG8.4/en/recent-changes.adoc drbd-doc-8.4~20220106/UG8.4/en/recent-changes.adoc
--- drbd-doc-8.4~20151102/UG8.4/en/recent-changes.adoc	1970-01-01 00:00:00.000000000 +0000
+++ drbd-doc-8.4~20220106/UG8.4/en/recent-changes.adoc	2022-01-31 09:40:31.000000000 +0000
@@ -0,0 +1,382 @@
+[[ap-recent-changes]]
+[appendix]
+== Recent changes
+
+This appendix is for users who upgrade from earlier DRBD versions to
+DRBD 8.4. It highlights some important changes to DRBD's configuration
+and behavior.
+
+[[s-recent-changes-volumes]]
+=== Volumes
+
+Volumes are a new concept in DRBD 8.4. Prior to 8.4, every resource
+had only one block device associated with it, thus there was a
+one-to-one relationship between DRBD devices and resources. Since 8.4,
+multiple volumes (each corresponding to one block device) may share a
+single replication connection, which in turn corresponds to a single
+resource.
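+
+For illustration (the resource name, host names, backing devices, and
+addresses are all placeholders), a two-volume resource in DRBD 8.4
+syntax might look like this:
+
+[source,drbd]
+----------------------------
+resource r0 {
+  volume 0 {
+    device    /dev/drbd0;
+    disk      /dev/sda7;
+    meta-disk internal;
+  }
+  volume 1 {
+    device    /dev/drbd1;
+    disk      /dev/sda8;
+    meta-disk internal;
+  }
+  # Both volumes share the single replication connection
+  # defined by the two host sections:
+  on alice {
+    address   10.1.1.31:7789;
+  }
+  on bob {
+    address   10.1.1.32:7789;
+  }
+}
+----------------------------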
+
+[[s-recent-changes-volumes-udev]]
+==== Changes to udev symlinks
+
+The DRBD udev integration scripts manage symlinks pointing to
+individual block device nodes. These exist in the `/dev/drbd/by-res`
+and `/dev/drbd/by-disk` directories.
+
+In DRBD 8.3 and earlier, these links point to single block devices:
+
+.udev managed DRBD symlinks in DRBD 8.3 and earlier
+----------------------------
+lrwxrwxrwx 1 root root 11 2011-05-19 11:46 /dev/drbd/by-res/home ->
+  ../../drbd0
+lrwxrwxrwx 1 root root 11 2011-05-19 11:46 /dev/drbd/by-res/data ->
+  ../../drbd1
+lrwxrwxrwx 1 root root 11 2011-05-19 11:46 /dev/drbd/by-res/nfs-root ->
+  ../../drbd2
+----------------------------
+
+In DRBD 8.4, since a single resource may correspond to multiple
+volumes, `/dev/drbd/by-res/<resource>` becomes a _directory_,
+containing symlinks pointing to individual volumes:
+
+.udev managed DRBD symlinks in DRBD 8.4
+----------------------------
+lrwxrwxrwx 1 root root 11 2011-07-04 09:22 /dev/drbd/by-res/home/0 ->
+  ../../drbd0
+lrwxrwxrwx 1 root root 11 2011-07-04 09:22 /dev/drbd/by-res/data/0 ->
+  ../../drbd1
+lrwxrwxrwx 1 root root 11 2011-07-04 09:22 /dev/drbd/by-res/nfs-root/0 ->
+  ../../drbd2
+lrwxrwxrwx 1 root root 11 2011-07-04 09:22 /dev/drbd/by-res/nfs-root/1 ->
+  ../../drbd3
+----------------------------
+
+Configurations where filesystems are referred to by symlink must be
+updated when moving to DRBD 8.4, usually by simply appending `/0` to
+the symlink path.
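+
+For example, an `/etc/fstab` entry referring to such a symlink would
+change as follows (the mount point and file system type are, of
+course, illustrative):
+
+----------------------------
+# DRBD 8.3 and earlier:
+/dev/drbd/by-res/home    /home  ext4  defaults  0 0
+# DRBD 8.4 (volume 0 appended to the by-res path):
+/dev/drbd/by-res/home/0  /home  ext4  defaults  0 0
+----------------------------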
+
+[[s-recent-changes-config]]
+=== Changes to the configuration syntax
+
+This section highlights changes to the configuration syntax. It
+affects the DRBD configuration files in `/etc/drbd.d` and
+`/etc/drbd.conf`.
+
+IMPORTANT: The `drbdadm` parser still accepts pre-8.4 configuration
+syntax and automatically translates it, internally, into the current
+syntax. Unless you are planning to use new features not present in
+prior DRBD releases, there is no requirement to modify your
+configuration to the current syntax. It is, however, recommended that
+you eventually adopt the new syntax, as the old format will no longer
+be supported in DRBD 9.
+
+[[s-recent-changes-config-booleans]]
+==== Boolean configuration options
+
+`drbd.conf` supports a variety of boolean configuration options. In
+pre-8.4 syntax, these boolean options were set as follows:
+
+.Pre-DRBD 8.4 configuration example with boolean options
+[source,drbd]
+----------------------------
+resource test {
+  disk {
+    no-md-flushes;
+  }
+}
+----------------------------
+
+This led to configuration issues if you wanted to set a boolean
+variable in the `common` configuration section, and then override it
+for individual resources:
+
+.Pre-DRBD 8.4 configuration example with boolean options in `common` section
+[source,drbd]
+----------------------------
+common {
+  no-md-flushes;
+}
+resource test {
+  disk {
+    # No facility to re-enable md flushes previously disabled
+    # in "common"
+  }
+}
+----------------------------
+
+In DRBD 8.4, all boolean options take a value of `yes` or `no`, making
+them easily configurable both from `common` and from individual
+`resource` sections:
+
+.DRBD 8.4 configuration example with boolean options in `common` section
+[source,drbd]
+----------------------------
+common {
+  md-flushes no;
+}
+resource test {
+  disk {
+    md-flushes yes;
+  }
+}
+----------------------------
+
+[[s-recent-changes-config-syncer]]
+==== `syncer` section no longer exists
+
+Prior to DRBD 8.4, the configuration syntax allowed for a `syncer`
+section, which has become obsolete in 8.4. All previously existing
+`syncer` options have now moved into the `net` or `disk` sections of
+resources.
+
+.Pre-DRBD 8.4 configuration example with `syncer` section
+[source,drbd]
+----------------------------
+resource test {
+  syncer {
+    al-extents 3389;
+    verify-alg md5;
+  }
+  ...
+}
+----------------------------
+
+The above example is expressed, in DRBD 8.4 syntax, as follows:
+
+.DRBD 8.4 configuration example with `syncer` section replaced
+[source,drbd]
+----------------------------
+resource test {
+  disk {
+    al-extents 3389;
+  }
+  net {
+    verify-alg md5;
+  }
+  ...
+}
+----------------------------
+
+[[s-recent-changes-config-protocol]]
+==== `protocol` option is no longer special
+
+In prior DRBD releases, the `protocol` option was awkwardly (and
+counter-intuitively) required to be specified on its own, rather than
+as part of the `net` section. DRBD 8.4 removes this anomaly:
+
+.Pre-DRBD 8.4 configuration example with standalone `protocol` option
+[source,drbd]
+----------------------------
+resource test {
+  protocol C;
+  ...
+  net {
+    ...
+  }
+  ...
+}
+----------------------------
+
+The equivalent DRBD 8.4 configuration syntax is:
+
+.DRBD 8.4 configuration example with `protocol` option within `net` section
+[source,drbd]
+----------------------------
+resource test {
+  net {
+    protocol C;
+    ...
+  }
+  ...
+}
+----------------------------
+
+[[s-recent-changes-config-options]]
+==== New per-resource `options` section
+
+DRBD 8.4 introduces a new `options` section that may be specified
+either in a `resource` or in the `common` section. The `cpu-mask`
+option has moved into this section from the `syncer` section, in which
+it was awkwardly configured before. The `on-no-data-accessible` option
+has also moved to this section, rather than being in `disk` where it
+had been in pre-8.4 releases.
+
+.Pre-DRBD 8.4 configuration example with `cpu-mask` and `on-no-data-accessible`
+[source,drbd]
+----------------------------
+resource test {
+  syncer {
+    cpu-mask ff;
+  }
+  disk {
+    on-no-data-accessible suspend-io;
+  }
+  ...
+}
+----------------------------
+
+The equivalent DRBD 8.4 configuration syntax is:
+
+.DRBD 8.4 configuration example with `options` section
+[source,drbd]
+----------------------------
+resource test {
+  options {
+    cpu-mask ff;
+    on-no-data-accessible suspend-io;
+  }
+  ...
+}
+----------------------------
+
+[[s-recent-changes-net]]
+=== On-line changes to network communications
+
+[[s-recent-changes-change-protocol]]
+==== Changing the replication protocol
+
+Prior to DRBD 8.4, changes to the replication protocol were impossible
+while the resource was on-line and active. You would have to change
+the `protocol` option in your resource configuration file, then issue
+`drbdadm disconnect` and finally `drbdadm connect` on both nodes.
+
+In DRBD 8.4, the replication protocol can be changed on the fly. You
+may, for example, temporarily switch a connection to asynchronous
+replication from its normal, synchronous replication mode:
+
+.Changing the replication protocol while the connection is established
+----------------------------
+drbdadm net-options --protocol=A <resource>
+----------------------------
+
+[[s-recent-changes-switch-dual-primary]]
+==== Changing from single-Primary to dual-Primary replication
+
+Prior to DRBD 8.4, it was impossible to switch from single-Primary to
+dual-Primary mode, or back, while the resource was on-line and active.
+You would have to change the `allow-two-primaries` option in your
+resource configuration file, then issue `drbdadm disconnect` and
+finally `drbdadm connect` on both nodes.
+
+In DRBD 8.4, it is possible to switch modes on-line.
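+
+As a minimal sketch, assuming a resource named `r0`: like `--protocol`
+above, `allow-two-primaries` is a `net` option, so the switch can be
+made through `drbdadm net-options` on an established connection.
+
+----------------------------
+# Temporarily permit two Primaries on the established connection:
+drbdadm net-options --allow-two-primaries r0
+# ...
+# Revert to single-Primary operation:
+drbdadm net-options --allow-two-primaries=no r0
+----------------------------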
+
+CAUTION: It is _required_ for an application using DRBD in
+dual-Primary mode to use a clustered file system or some other
+distributed locking mechanism. This applies regardless of whether
+dual-Primary mode is enabled temporarily or permanently.
+
+Refer to <> for switching to
+dual-Primary mode while the resource is on-line.
+
+[[s-recent-changes-drbdadm]]
+=== Changes to the `drbdadm` command
+
+[[s-recent-changes-drbdadm-passthrough-options]]
+==== Changes to pass-through options
+
+Prior to DRBD 8.4, if you wanted `drbdadm` to pass special options through to
+`drbdsetup`, you had to use the arcane `--{nbsp}--