Kerberos NFSv4 mount from FreeBSD server fails with "mount.nfs4: mount system call failed"

Bug #1320658 reported by Elias Martenson
This bug affects 5 people
Affects: nfs-utils (Ubuntu)
Status: Confirmed
Importance: High
Assigned to: Unassigned

Bug Description

This problem started to appear after upgrading to 13.10. After another upgrade to 14.04, the problem persists. I also tried a fresh installation of 14.04, with the same result. Another client, still on an older Ubuntu release, can still successfully mount this partition, and OS X also has no problems with it.

After the upgrade, attempting to mount an NFS partition using NFSv4 with the sec=krb5p option consistently yields the following message:

mount.nfs4: mount system call failed
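
For completeness, the mount command used is of the following general form (the server name and export path here are placeholders):

sudo mount -t nfs4 -o sec=krb5p nfs-server.example.com:/export /mnt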

The file /var/log/syslog shows the following:

May 18 23:35:50 tiger rpc.gssd[31127]: ERROR: GSS-API: error in gss_free_lucid_sec_context(): GSS_S_NO_CONTEXT (No context has been established) - Unknown error
May 18 23:35:50 tiger rpc.gssd[31127]: WARN: failed to free lucid sec context
May 18 23:35:50 tiger kernel: [438617.037214] NFS: nfs4_discover_server_trunking unhandled error -121. Exiting with error EIO
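
For reference, error -121 in the kernel message is EREMOTEIO ("Remote I/O error"); the mapping can be verified against the errno headers:

grep EREMOTEIO /usr/include/asm-generic/errno.h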

I found the following email thread on the NFS developers list that could possibly be related:
http://www.spinics.net/lists/linux-nfs/msg37604.html

The thread has 15 or so messages, and in the end they resolve the issue with a small fix. Of course, I can't be sure that it's actually the same error, but a lot of evidence in the thread points to it.

ProblemType: Bug
DistroRelease: Ubuntu 14.04
Package: nfs-common 1:1.2.8-6ubuntu1
ProcVersionSignature: Ubuntu 3.13.0-24.46-generic 3.13.9
Uname: Linux 3.13.0-24-generic x86_64
NonfreeKernelModules: nvidia
ApportVersion: 2.14.1-0ubuntu3
Architecture: amd64
CurrentDesktop: GNOME
Date: Sun May 18 23:43:13 2014
InstallationDate: Installed on 2010-07-23 (1394 days ago)
InstallationMedia: Ubuntu 10.04 LTS "Lucid Lynx" - Release amd64 (20100429)
SourcePackage: nfs-utils
UpgradeStatus: Upgraded to trusty on 2014-05-04 (13 days ago)

Revision history for this message
Jonathan Hogg (jhogg41) wrote :

I'm having the same problem (albeit with Linux Mint 17 RC).

Full gssd session output:

jhogg@delves ~ $ sudo rpc.gssd -fvvv
beginning poll
destroying client /run/rpc_pipefs/nfs/clnt4b
Closing 'gssd' pipe for /run/rpc_pipefs/nfs/clnt49
destroying client /run/rpc_pipefs/nfs/clnt4a
destroying client /run/rpc_pipefs/nfs/clnt49
handling gssd upcall (/run/rpc_pipefs/nfs/clnt56)
handle_gssd_upcall: 'mech=krb5 uid=0 service=* enctypes=18,17,16,23,3,1,2 '
handling krb5 upcall (/run/rpc_pipefs/nfs/clnt56)
process_krb5_upcall: service is '*'
Full hostname for 'beale.cse.rl.ac.uk' is 'beale.cse.rl.ac.uk'
Full hostname for 'delves.cse.rl.ac.uk' is 'delves.cse.rl.ac.uk'
No key table entry found for DELVES$@CSE.RL.AC.UK while getting keytab entry for 'DELVES$@'
No key table entry found for <email address hidden> while getting keytab entry for 'root/delves.cse.rl.ac.uk@'
Success getting keytab entry for 'nfs/delves.cse.rl.ac.uk@'
Successfully obtained machine credentials for principal '<email address hidden>' stored in ccache 'FILE:/tmp/krb5ccmachine_CSE.RL.AC.UK'
INFO: Credentials in CC 'FILE:/tmp/krb5ccmachine_CSE.RL.AC.UK' are good until 1400526145
using FILE:/tmp/krb5ccmachine_CSE.RL.AC.UK as credentials cache for machine creds
using environment variable to select krb5 ccache FILE:/tmp/krb5ccmachine_CSE.RL.AC.UK
creating context using fsuid 0 (save_uid 0)
creating tcp client for server beale.cse.rl.ac.uk
DEBUG: port already set to 2049
creating context with server <email address hidden>
DEBUG: serialize_krb5_ctx: lucid version!
prepare_krb5_rfc4121_buffer: protocol 1
prepare_krb5_rfc4121_buffer: serializing key with enctype 18 and size 32
ERROR: GSS-API: error in gss_free_lucid_sec_context(): GSS_S_NO_CONTEXT (No context has been established) - Unknown error
WARN: failed to free lucid sec context
doing downcall lifetime_rec 35999
handling gssd upcall (/run/rpc_pipefs/nfs/clnt56)
handle_gssd_upcall: 'mech=krb5 uid=0 enctypes=18,17,16,23,3,1,2 '
handling krb5 upcall (/run/rpc_pipefs/nfs/clnt56)
process_krb5_upcall: service is '<null>'
Full hostname for 'beale.cse.rl.ac.uk' is 'beale.cse.rl.ac.uk'
Full hostname for 'delves.cse.rl.ac.uk' is 'delves.cse.rl.ac.uk'
No key table entry found for DELVES$@CSE.RL.AC.UK while getting keytab entry for 'DELVES$@'
No key table entry found for <email address hidden> while getting keytab entry for 'root/delves.cse.rl.ac.uk@'
Success getting keytab entry for 'nfs/delves.cse.rl.ac.uk@'
INFO: Credentials in CC 'FILE:/tmp/krb5ccmachine_CSE.RL.AC.UK' are good until 1400526145
INFO: Credentials in CC 'FILE:/tmp/krb5ccmachine_CSE.RL.AC.UK' are good until 1400526145
using FILE:/tmp/krb5ccmachine_CSE.RL.AC.UK as credentials cache for machine creds
using environment variable to select krb5 ccache FILE:/tmp/krb5ccmachine_CSE.RL.AC.UK
creating context using fsuid 0 (save_uid 0)
creating tcp client for server beale.cse.rl.ac.uk
DEBUG: port already set to 2049
creating context with server <email address hidden>
DEBUG: serialize_krb5_ctx: lucid version!
prepare_krb5_rfc4121_buffer: protocol 1
prepare_krb5_rfc4121_buffer: serializing key with enctype 18 and size 32
ERROR: GSS-API: error in gss...

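The "No key table entry found" lines above just reflect the order of principals gssd tries before it settles on the nfs/ service principal; the entries actually present in the keytab can be listed with klist (assuming the default keytab location):

sudo klist -ke /etc/krb5.keytab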

Revision history for this message
Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in nfs-utils (Ubuntu):
status: New → Confirmed
Changed in nfs-utils (Ubuntu):
importance: Undecided → High
Revision history for this message
Longina Przybyszewska (longina) wrote :

I hit the same problem on Ubuntu 14.04.
uname -a
Linux yoda 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

handling gssd upcall (/run/rpc_pipefs/nfs/clnt12)
handle_gssd_upcall: 'mech=krb5 uid=0 enctypes=18,17,16,23,3,1,2 '
handling krb5 upcall (/run/rpc_pipefs/nfs/clnt12)
process_krb5_upcall: service is '<null>'
Full hostname for 'eta.nat.c.sdu.dk' is 'eta.nat.c.sdu.dk'
Full hostname for 'yoda.nat.c.sdu.dk' is 'yoda.nat.c.sdu.dk'
Success getting keytab entry for 'YODA$@'
INFO: Credentials in CC 'FILE:/tmp/krb5ccmachine_NAT.C.SDU.DK' are good until 1404348760
INFO: Credentials in CC 'FILE:/tmp/krb5ccmachine_NAT.C.SDU.DK' are good until 1404348760
using FILE:/tmp/krb5ccmachine_NAT.C.SDU.DK as credentials cache for machine creds
using environment variable to select krb5 ccache FILE:/tmp/krb5ccmachine_NAT.C.SDU.DK
creating context using fsuid 0 (save_uid 0)
creating tcp client for server eta.nat.c.sdu.dk
DEBUG: port already set to 2049
creating context with server <email address hidden>
DEBUG: serialize_krb5_ctx: lucid version!
prepare_krb5_rfc4121_buffer: protocol 1
prepare_krb5_rfc4121_buffer: serializing key with enctype 18 and size 32
ERROR: GSS-API: error in gss_free_lucid_sec_context(): GSS_S_NO_CONTEXT (No context has been established) - Unknown error
WARN: failed to free lucid sec context
doing downcall lifetime_rec 36000
destroying client /run/rpc_pipefs/nfs/clnt13
Closing 'gssd' pipe for /run/rpc_pipefs/nfs/clnt12
destroying client /run/rpc_pipefs/nfs/clnt12
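
The machine credential cache named in the log can be inspected directly as well; the cache path below is taken from the lines above and will differ per realm:

sudo klist FILE:/tmp/krb5ccmachine_NAT.C.SDU.DK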

Revision history for this message
Elias Martenson (lokedhs) wrote :

This problem still exists in 14.10.

Revision history for this message
Elias Martenson (lokedhs) wrote :

I have also performed a full reinstall of a 14.10 system, confirming that the bug still exists. It would be interesting to know if anyone has ever been able to perform a krb5p mount on Ubuntu since 13.10.

Revision history for this message
koen_92 (koenvaningen) wrote :

I can also confirm this bug, and it is kernel-related: everything works fine on the default 12.04 kernel (3.2), but once you upgrade to the trusty-backport kernel (3.14), the problem occurs.
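
One way to confirm the kernel dependence is to list the installed kernels, boot the older one from the GRUB menu, and retry the mount; the commands below are generic:

dpkg -l 'linux-image-*' | grep '^ii'
uname -r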

Revision history for this message
Elias Martenson (lokedhs) wrote :

This problem is still blocking for me (I had to migrate most machines to other distributions, but my workstation is still on Ubuntu).

Linux tiger 4.2.0-27-generic #32-Ubuntu SMP Fri Jan 22 04:49:08 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

I find it quite remarkable that secure NFS has been completely broken for a few years now. Does no one actually use it?
