swift-get-nodes not working in "full" mitaka

Bug #1659147 reported by cindy b
This bug affects 1 person
Affects: OpenStack Object Storage (swift)
Status: Invalid
Importance: Undecided
Assigned to: Unassigned

Bug Description

Testing swift with a full install of mitaka. The swift and openstack commands are working, but swift-get-nodes is returning bogus partition ids/names.

Revision history for this message
cindy b (cindybatt) wrote :

Forgot to mention: the swift cluster's storage policy is erasure coded.

Revision history for this message
clayg (clay-gerrard) wrote :

Are you specifying the EC policy (-P) or the EC ring (/etc/swift/object-1.ring.gz) when using swift-get-nodes? What is the output you see? What output do you expect?

N.B. swift-get-nodes works as expected for me on EC rings (6cc10d17 added storage policy support to swift-get-nodes in Swift 2.0), and it's even more user friendly after https://review.openstack.org/#/c/406012/
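
For example (the account/container/object and policy name below are placeholders, not values read from your cluster), either of these forms should target the EC ring directly:

swift-get-nodes -P <ec-policy-name> <account> <container> <object>
swift-get-nodes /etc/swift/object-1.ring.gz <account> <container> <object>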

Revision history for this message
Alistair Coles (alistair-coles) wrote :

Not sure what you mean by "bogus", but note that swift-get-nodes reports the theoretical locations of resources, which may not actually exist in the cluster.

Revision history for this message
cindy b (cindybatt) wrote :

I am using the EC ring: swift-get-nodes /etc/swift/object.ring.gz swift NEW myfile. The command returns information as before, but the partition returned does not exist, and it happens every time. I have rebuilt the rings dozens of times...

[swift-hash]
swift_hash_path_suffix = b007fed409
swift_hash_path_prefix = 3a5d2d5d93
[storage-policy:0]
name = ec5
aliases = Policy-0
default = yes
policy_type = erasure_coding
ec_type = jerasure_rs_vand
ec_num_data_fragments = 4
ec_num_parity_fragments = 1
ec_object_segment_size = 1048576

Revision history for this message
cindy b (cindybatt) wrote :

We tested EC earlier using Keystone Auth 2 with Mitaka Swift, and the swift-get-nodes command returned partition names (db names) that existed in the ...objects directory. Now with Keystone Auth 3 (in Mitaka) the names of the partitions returned (and db names) do not exist anywhere in the objects directory. To be clear, are you saying that is to be expected?

Revision history for this message
Alistair Coles (alistair-coles) wrote :

Could you post an example of the output you get from swift-get-nodes, pointing out which content is not as expected?

Here's what I see using the command line you posted:

```
swift@u135:~/swift$ swift-get-nodes /etc/swift/object.ring.gz swift NEW myfile

Account swift
Container NEW
Object myfile

Partition 378
Hash 5e8be18c46a71c3b9110a775275967a9

Server:Port Device 127.0.0.1:6020 sdb2
Server:Port Device 127.0.0.1:6040 sdb4
Server:Port Device 127.0.0.1:6030 sdb3
Server:Port Device 127.0.0.1:6010 sdb1 [Handoff]

curl -g -I -XHEAD "http://127.0.0.1:6020/sdb2/378/swift/NEW/myfile"
curl -g -I -XHEAD "http://127.0.0.1:6040/sdb4/378/swift/NEW/myfile"
curl -g -I -XHEAD "http://127.0.0.1:6030/sdb3/378/swift/NEW/myfile"
curl -g -I -XHEAD "http://127.0.0.1:6010/sdb1/378/swift/NEW/myfile" # [Handoff]

Use your own device location of servers:
such as "export DEVICE=/srv/node"
ssh 127.0.0.1 "ls -lah ${DEVICE:-/srv/node*}/sdb2/objects/378/7a9/5e8be18c46a71c3b9110a775275967a9"
ssh 127.0.0.1 "ls -lah ${DEVICE:-/srv/node*}/sdb4/objects/378/7a9/5e8be18c46a71c3b9110a775275967a9"
ssh 127.0.0.1 "ls -lah ${DEVICE:-/srv/node*}/sdb3/objects/378/7a9/5e8be18c46a71c3b9110a775275967a9"
ssh 127.0.0.1 "ls -lah ${DEVICE:-/srv/node*}/sdb1/objects/378/7a9/5e8be18c46a71c3b9110a775275967a9" # [Handoff]

note: `/srv/node*` is used as default value of `devices`, the real value is set in the config file on each storage node.
```

What that is telling me is that IF I were to PUT an object at path swift/NEW/myfile, then I should find the files at ${DEVICE:-/srv/node*}/sdb2/objects/378/7a9/5e8be18c46a71c3b9110a775275967a9 etc. But if the object has not been PUT, then those files won't yet exist. Furthermore, the parent dir /sdb2/objects/378 may not yet exist if no object has yet been PUT that hashed to that particular partition (i.e. 378). On the other hand, the partition dir /sdb2/objects/378 *may* already exist even if swift/NEW/myfile has not been created, IF another existing object happened to hash to that partition.

In summary: Partition dirs and their subdirs are created on demand when PUT object paths hash to them. swift-get-nodes returns the predicted file locations whether or not they yet exist on disk.
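
If it helps to see where those numbers come from, here is a minimal sketch of the hashing scheme (simplified, not the actual swift code; it uses the hash prefix/suffix from the swift.conf you posted, and assumes a part power of 8, which may not match your ring):

```python
# Rough sketch of how swift-get-nodes maps an object path to a partition and
# on-disk directory. Simplified from swift.common.utils.hash_path and
# swift.common.ring.Ring.get_part; not the real code.
import struct
from hashlib import md5

HASH_PATH_PREFIX = b'3a5d2d5d93'   # swift_hash_path_prefix from the posted swift.conf
HASH_PATH_SUFFIX = b'b007fed409'   # swift_hash_path_suffix from the posted swift.conf
PART_POWER = 8                     # assumed; the ring stores the real part power

def hash_path(account, container, obj):
    # md5 of: prefix + '/account/container/object' + suffix
    key = (HASH_PATH_PREFIX +
           ('/%s/%s/%s' % (account, container, obj)).encode('utf-8') +
           HASH_PATH_SUFFIX)
    return md5(key).hexdigest()

def get_part(name_hash):
    # partition = top PART_POWER bits of the first 4 bytes of the hash
    return struct.unpack_from('>I', bytes.fromhex(name_hash))[0] >> (32 - PART_POWER)

name_hash = hash_path('swift', 'NEW', 'myfile')
part = get_part(name_hash)
# object data lives under <devices>/<device>/objects/<part>/<last 3 hex chars>/<name_hash>/
print(part, name_hash[-3:], name_hash)
```

The last three hex characters of the hash give the suffix dir, which is why the ls commands in the output above end in .../378/7a9/5e8be18c46a71c3b9110a775275967a9.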

Revision history for this message
cindy b (cindybatt) wrote :

Here it is... it outputs partition 78, but there is no 78...

root@hlxkvm037:/opt/swift/data/1/objects# swift-get-nodes /etc/swift/object.ring.gz swift NEWTEST2 deserts.JPG

Account swift
Container NEWTEST2
Object deserts.JPG

Partition 78
Hash 4e131dd79c51feb33071458643e60016

Server:Port Device 135.165.181.200:6000 1
Server:Port Device 135.165.181.198:6000 1
Server:Port Device 135.165.181.201:6000 1
Server:Port Device 135.165.181.199:6000 1
Server:Port Device 135.165.181.202:6000 1

curl -I -XHEAD "http://135.165.181.200:6000/1/78/swift/NEWTEST2/deserts.JPG"
curl -I -XHEAD "http://135.165.181.198:6000/1/78/swift/NEWTEST2/deserts.JPG"
curl -I -XHEAD "http://135.165.181.201:6000/1/78/swift/NEWTEST2/deserts.JPG"
curl -I -XHEAD "http://135.165.181.199:6000/1/78/swift/NEWTEST2/deserts.JPG"
curl -I -XHEAD "http://135.165.181.202:6000/1/78/swift/NEWTEST2/deserts.JPG"

Use your own device location of servers:
such as "export DEVICE=/srv/node"
ssh 135.165.181.200 "ls -lah ${DEVICE:-/srv/node*}/1/objects/78/016/4e131dd79c51feb33071458643e60016"
ssh 135.165.181.198 "ls -lah ${DEVICE:-/srv/node*}/1/objects/78/016/4e131dd79c51feb33071458643e60016"
ssh 135.165.181.201 "ls -lah ${DEVICE:-/srv/node*}/1/objects/78/016/4e131dd79c51feb33071458643e60016"
ssh 135.165.181.199 "ls -lah ${DEVICE:-/srv/node*}/1/objects/78/016/4e131dd79c51feb33071458643e60016"
ssh 135.165.181.202 "ls -lah ${DEVICE:-/srv/node*}/1/objects/78/016/4e131dd79c51feb33071458643e60016"

note: `/srv/node*` is used as default value of `devices`, the real value is set in the config file on each storage node

root@hlxkvm037:/opt/swift/data/1/objects# ls
10 121 13 138 146 165 17 176 186 19 196 201 204 216 224 239 244 252 27 31 37 4 42 5 59 70 80 90 97
103 126 131 141 154 166 172 182 188 192 2 202 206 218 225 24 247 254 28 33 38 40 43 55 64 71 82 91 98
11 129 136 142 157 168 174 185 189 194 200 203 21 22 227 240 249 26 29 35 39 41 45 56 65 72 84 95

I'm really sorry, I find this sort of confusing. You said: "In summary: Partition dirs and their subdirs are created on demand when PUT object paths hash to them. swift-get-nodes returns the predicted file locations whether or not they yet exist on disk."

The object was already uploaded... so how could it not have hashed yet?? And if so, could older, slower storage cause that to occur?

Revision history for this message
clayg (clay-gerrard) wrote :

Just curious: how'd you get an account named just "swift"? They normally look like "AUTH_<uuid>" or something.

Do a `swift stat -v <mycontainer> <myfile>` and double check the /a/c/o path to the file?

Revision history for this message
cindy b (cindybatt) wrote :

"swift" is the openstack/keystone account/username. I always use it...

root@hlxkvm037:~# swift stat -v NEWTEST2 deserts.JPG
           URL: http://storage1-st:8080/v1/AUTH_7047c4251ffd42b992a3984b71178a66/NEWTEST2/deserts.JPG
    Auth Token: 988e4ac4bdd6499e9ec293082117de25
       Account: AUTH_7047c4251ffd42b992a3984b71178a66
     Container: NEWTEST2
        Object: deserts.JPG
  Content Type: image/jpeg
Content Length: 76212
 Last Modified: Wed, 25 Jan 2017 18:49:53 GMT
          ETag: 09c03ecb7558438778ca882affc3ecc1
    Meta Mtime: 1485370012.906008
 Accept-Ranges: bytes
   X-Timestamp: 1485370192.68560
    X-Trans-Id: tx1876d2ebf13246c7999f6-0058890d3d

Revision history for this message
clayg (clay-gerrard) wrote :

So the account of that object isn't "swift" - that's the "username" in keystone parlance, I guess?

try:

swift-get-nodes /etc/swift/object.ring.gz AUTH_7047c4251ffd42b992a3984b71178a66 NEWTEST2 deserts.JPG

Revision history for this message
cindy b (cindybatt) wrote :

Thank you. That worked perfectly. One more question... neither COSBench nor ssbench has been updated for Keystone/OpenStack Auth v3. Do you know of someone or somewhere I can get a detailed description of the differences between auth 2 and auth 3? I want to try to update one of them so I can use it.

swift-get-nodes /etc/swift/object.ring.gz AUTH_7047c4251ffd42b992a3984b71178a66 NEWTEST2 deserts.JPG

root@hlxkvm037:~# swift-get-nodes /etc/swift/object.ring.gz AUTH_7047c4251ffd42b992a3984b71178a66 NEWTEST2 desert7.jpg

Account AUTH_7047c4251ffd42b992a3984b71178a66
Container NEWTEST2
Object desert7.jpg

Partition 40
Hash 28b4ef25872ffcce9c7bddf49189b7ee

Server:Port Device 135.165.181.200:6000 1
Server:Port Device 135.165.181.198:6000 1
Server:Port Device 135.165.181.201:6000 1
Server:Port Device 135.165.181.199:6000 1
Server:Port Device 135.165.181.202:6000 1

curl -I -XHEAD "http://135.165.181.200:6000/1/40/AUTH_7047c4251ffd42b992a3984b71178a66/NEWTEST2/desert7.jpg"
curl -I -XHEAD "http://135.165.181.198:6000/1/40/AUTH_7047c4251ffd42b992a3984b71178a66/NEWTEST2/desert7.jpg"
curl -I -XHEAD "http://135.165.181.201:6000/1/40/AUTH_7047c4251ffd42b992a3984b71178a66/NEWTEST2/desert7.jpg"
curl -I -XHEAD "http://135.165.181.199:6000/1/40/AUTH_7047c4251ffd42b992a3984b71178a66/NEWTEST2/desert7.jpg"
curl -I -XHEAD "http://135.165.181.202:6000/1/40/AUTH_7047c4251ffd42b992a3984b71178a66/NEWTEST2/desert7.jpg"

Use your own device location of servers:
such as "export DEVICE=/srv/node"
ssh 135.165.181.200 "ls -lah ${DEVICE:-/srv/node*}/1/objects/40/7ee/28b4ef25872ffcce9c7bddf49189b7ee"
ssh 135.165.181.198 "ls -lah ${DEVICE:-/srv/node*}/1/objects/40/7ee/28b4ef25872ffcce9c7bddf49189b7ee"
ssh 135.165.181.201 "ls -lah ${DEVICE:-/srv/node*}/1/objects/40/7ee/28b4ef25872ffcce9c7bddf49189b7ee"
ssh 135.165.181.199 "ls -lah ${DEVICE:-/srv/node*}/1/objects/40/7ee/28b4ef25872ffcce9c7bddf49189b7ee"
ssh 135.165.181.202 "ls -lah ${DEVICE:-/srv/node*}/1/objects/40/7ee/28b4ef25872ffcce9c7bddf49189b7ee"

note: `/srv/node*` is used as default value of `devices`, the real value is set in the config file on each storage node.
root@hlxkvm037:~# cd /opt/swift/data/1/objects
root@hlxkvm037:/opt/swift/data/1/objects# ls
10 121 13 138 146 165 17 176 186 19 196 201 204 216 224 239 244 252 27 31 37 4 42 5 59 70 80 90 97
103 126 131 141 154 166 172 182 188 192 2 202 206 218 225 24 247 254 28 33 38 40 43 55 64 71 82 91 98
11 129 136 142 157 168 174 185 189 194 200 203 21 22 227 240 249 26 29 35 39 41 45 56 65 72 84 95

root@hlxkvm037:/opt/swift/data/1/objects# cd 40
root@hlxkvm037:/opt/swift/data/1/objects/40# ls
2a5 7ee hashes.invalid hashes.pkl
root@hlxkvm037:/opt/swift/data/1/objects/40# cd 7ee
root@hlxkvm037:/opt/swift/data/1/objects/40/7ee# ls -al
total 0
drwxr-xr-x 3 swift swift 53 Jan 25 18:49 .
drwxr-xr-x 4 swift swift 100 Jan 26 00:32 ..
drwxr-xr-x 2 swift swift 79 Jan 25 18:49 28b4ef25872ffcce9c7bddf49189b7ee

Revision history for this message
clayg (clay-gerrard) wrote :

> Do you know of someone or somewhere I can get a detailed difference between auth 2 and auth 3.

I do not, sorry. Try the OpenStack mailing list [1] or #openstack-keystone on Freenode [2] or the OpenStack Q&A site [3]

1. https://wiki.openstack.org/wiki/Mailing_Lists#General_List
2. https://wiki.openstack.org/wiki/IRC
3. https://ask.openstack.org/en/questions/

Changed in swift:
status: New → Invalid