
Can't use mount_smbfs as root?

I have a launchd job that runs a shell script on a Snow Leopard server. The shell script backs up a share on another Windows server. Works great. Part of the script, of course, is mounting the share:


mount_smbfs -d 0500 -f 0400 //'domain;login:password'@server/share mountpoint


This works fine in Snow Leopard. The same code, when run as root on LION (as required for a system level launchd job) FAILS with an authentication error. The very same code works fine when run as a local admin user.


It seems root cannot use mount_smbfs on Lion systems? What am I missing here?

Mac mini, Mac OS X (10.7.2), Server OS

Posted on Nov 29, 2011 12:38 PM


Apr 18, 2012 6:34 PM in response to jaydisc

jaydisc wrote:


1. I can't seem to recreate being able to mount, even with root now. Not sure what I did before, but I'm unable to replicate.

Long story with that one. You should be able to mount as root, but you really need to be root on the client and the server or have some other user mapping established. That is how NFS was traditionally done. Newer protocols can run in userspace and can use the keychain. That means no passwords in config files - always a good thing.


2. The problem with running an rmdir script is that if it DID NOT unmount successfully, you are now deleting files from the share.

Just doing "rmdir" without the "-rf" should be safe. If you are really worried, you could always add a ".dummy" to ensure that even an empty directory stays around.
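A quick throwaway demo of why a plain "rmdir" is safe (the directory and file names here are made up purely for the demo):

```shell
# rmdir only removes *empty* directories, so even if an unmount silently
# failed, rmdir cannot cost you data the way "rm -rf" can.
demo=$(mktemp -d)
touch "$demo/.dummy"              # stand in for share contents still mounted
if rmdir "$demo" 2>/dev/null; then
    echo "removed"
else
    echo "refused: not empty"     # rmdir exits nonzero, data untouched
fi
rm -rf "$demo"                    # cleanup for the demo only
```

This prints "refused: not empty" because the directory still contains the dummy file.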


Now, you keep mentioning autofs, but that's for a permanent mount, isn't it? My need is for a transient mount. I want to mount a share, back it up, and UNMOUNT it. Do you feel autofs is appropriate for this? How about an example command if so.


No, it is for transient mounts too. Here is the documentation for it: http://images.apple.com/business/docs/Autofs.pdf


When you need to access the server, just access the directory and it will be mounted automatically. When you are done with the directory, the mount should automatically go away. That could be a problem if you have some specific cleanup you want to do after unmounting.
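For a transient SMB mount via autofs, a direct map along these lines is one possibility. The mount point, server, share, and credentials below are hypothetical, and in practice the password belongs in nsmb.conf rather than the map file:

```
# /etc/auto_master -- one added line enabling a direct map:
/-    auto_smb

# /etc/auto_smb -- hypothetical direct-map entry:
/mnt/exonet    -fstype=smbfs    ://user:pass@server/share
```

After editing the maps, "automount -vc" tells the automounter to reload them.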

Apr 18, 2012 11:07 PM in response to etresoft

No dice with that. I could get it working, but only outside of /Volumes, and even then, I kept getting spurious errors.


However, WHAT I DID get to work was mounting as root using mount_smbfs into /Volumes. The weird thing is that running my script directly as root WOULD NOT WORK, but running it via a launchd.plist, which I suspect most will use in this scenario anyway, DID work. Weird, I know. I've wasted enough time with it that I'm going to cut my losses and roll with that for now, and maybe another day dig a bit deeper into the how and why.

Apr 19, 2012 3:30 PM in response to etresoft

Not necessarily. You can declare the user for any launchd job, but in this case, I'm not declaring it, and thus, it is root. Therefore, I now firmly believe it is something environmental that makes the difference.


So, here is some output to hopefully validate what I'm talking about.


Firstly, I have a script named backup-exonet, as follows:



if test -d "$MOUNTDIR/$SHARE"; then
    echo "$SHARE is available."
else
    echo "$SHARE is not available. Mounting..."
    mount_needed=1
fi

if test "$mount_needed"; then
    mkdir "$MOUNTDIR/$SHARE"
    sleep 1
    if mount_smbfs -o nobrowse "//$SERVERUSER:$SERVERPASS@$SERVER/$SHARE" "$MOUNTDIR/$SHARE"; then
        echo "succeeded."
        syncbackup
        echo -n "Unmounting PC..."
        umount "$MOUNTDIR/$SHARE"
        echo "done."
    else
        echo "failed."
        rm -rf "$MOUNTDIR/$SHARE"
    fi
else
    syncbackup
fi


FYI, $MOUNTDIR = "/Volumes"


So, when I just run that as root, it fails:


bash-3.2# whoami

root

bash-3.2# /usr/local/bin/backup-exonet

ExonetBackup is not available. Mounting...

mount_smbfs: server rejected the connection: Authentication error

failed.


So, I create a launchd job as follows:


<?xml version="1.0" encoding="UTF-8"?>

<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">

<plist version="1.0">

<dict>

<key>Disabled</key>

<true/>

<key>Label</key>

<string>au.com.example.backup-exonet</string>

<key>ProgramArguments</key>

<array>

<string>/usr/local/bin/backup-exonet</string>

</array>

<key>RunAtLoad</key>

<true/>

<key>StartCalendarInterval</key>

<array>

<dict>

<key>Hour</key>

<integer>14</integer>

<key>Minute</key>

<integer>30</integer>

</dict>

<dict>

<key>Hour</key>

<integer>19</integer>

<key>Minute</key>

<integer>30</integer>

</dict>

</array>

</dict>

</plist>


Note it's set to RunAtLoad, so I tail the system log to watch what happens, and then load it:


bash-3.2# tail -f /var/log/system.log | grep backup-exonet &

[1] 26811

bash-3.2# launchctl load /Library/LaunchDaemons/au.com.example.backup-exonet.plist

bash-3.2# Apr 20 08:24:00 office au.com.example.backup-exonet[26815]: ExonetBackup is not available. Mounting...

Apr 20 08:24:01 office au.com.example.backup-exonet[26815]: succeeded.

Apr 20 08:24:01 office au.com.example.backup-exonet[26815]: building file list ...

Apr 20 08:24:01 office au.com.example.backup-exonet[26815]: done

Apr 20 08:24:02 office au.com.example.backup-exonet[26815]: sent 6064 bytes received 20 bytes 12168.00 bytes/sec

Apr 20 08:24:02 office au.com.example.backup-exonet[26815]: total size is 77918564190 speedup is 12807127.58

Apr 20 08:24:02 office au.com.example.backup-exonet[26815]: Unmounting PC...

Apr 20 08:24:02 office au.com.example.backup-exonet[26815]: done.


It works!


Happy to provide more info or do a different test if it helps anyone.

Apr 19, 2012 5:05 PM in response to jaydisc

If you think there is a difference in the environment, you can always include a call to "/usr/bin/env" to check. Speaking of which, you should always include the full path to any utilities in a script like this. Otherwise, things are liable to randomly fail depending on changes to the environment. Finally, if you read the man page for mount_smbfs, you will see that it is designed to run from a user's environment. Calling it from root and manually mounting in /Volumes is just asking for trouble.
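For example, a couple of lines near the top of the script will capture the environment each run so the interactive and launchd invocations can be diffed afterwards (the log path here is just an illustration):

```shell
# Dump the effective environment to a per-run log for later comparison.
# The log location is arbitrary; $$ is the script's PID.
LOG="/tmp/backup-env.$$.log"
/usr/bin/env | sort > "$LOG"
echo "environment written to $LOG"
```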

Apr 19, 2012 7:04 PM in response to etresoft

I added that command to the script, but I'm not sure which variable makes the difference.


Here's when run directly by root:


TERM=xterm-color

SHELL=/bin/bash

USER=root

SUDO_USER=localadmin

SUDO_UID=501

USERNAME=root

MAIL=/var/mail/localadmin

PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin

PWD=/Library/LaunchDaemons

LANG=en_AU.UTF-8

HOME=/Users/localadmin

SUDO_COMMAND=/bin/bash

SHLVL=2

LOGNAME=root

SUDO_GID=20

_=/usr/bin/env


And here's when run by launchd:


PATH=/usr/bin:/bin:/usr/sbin:/sbin

PWD=/

SHLVL=1

_=/usr/bin/env


Considerably less, so I suspect one of root's variables is causing the conflict. Any ideas which one?
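One way to narrow it down is env -i, which runs a command with only the variables listed, approximating launchd's minimal environment. The target here is /usr/bin/env itself so the result is visible; substituting the real script path would show whether the extra variables matter:

```shell
# env -i starts the command with *only* the listed variables, mimicking
# the stripped-down environment launchd provides.
env -i PATH=/usr/bin:/bin:/usr/sbin:/sbin SHLVL=1 /usr/bin/env
```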


As far as full paths, I don't use any 3rd party package managers or install anything on top of the vanilla install, so that's not really a concern for me. In environments where I might face that risk, I path accordingly.


I don't see anything in the man page for mount_smbfs that I would interpret as you have. Which specific text are you referring to? I do see one line that suggests I shouldn't call it directly and should use mount -t instead. I will try updating my script to confirm that still works. I believe I tried that originally but had issues with some of the smbfs arguments, like nobrowse.


Also, you seem to regularly allude to the idea that mounting as root and/or in /Volumes is "asking for trouble". Do you have any official or unofficial source you base this upon? Or even a detailed reason as to why?

Apr 19, 2012 7:21 PM in response to jaydisc

jaydisc wrote:


I don't see anything in the man page for mount_smbfs that i would interpret as you have. Which specific text are you referring to?



The part where it says:

-N      Do not ask for a password.  At run time, mount_smbfs reads the
        ~/Library/Preferences/nsmb.conf file for additional configuration
        parameters and a password.  If no password is found, mount_smbfs
        prompts for it.


Also, you seem to regularly allude to the idea that mounting as root and/or in /Volumes is "asking for trouble". Do you have any official or unofficial source you base this upon? Or even a detailed reason as to why?


I hack around with autofs quite a bit. I am currently waiting 10 minutes to see if OSXFUSE properly umounts my volumes. Any of those mount points defined in one of the autofs config files is controlled by a running, kernel-level process. You can't do normal things to those directories, even as root. If you define your own mount points, then the automounter won't interfere and software update will be less likely to blow away your hacks.


As for mounting as root, that is something that dates back to NFS days. That is how people used to use NFS. Security wasn't such a big deal then. Newer filesystems are designed to be controlled from a user environment. There is no expectation of having someone who is root on both the client and the server. Users are now expected to authenticate with the server under the server's terms.

Apr 19, 2012 7:39 PM in response to etresoft

I don't interpret the -N flag the same way, especially because that file can also be read in from /etc/nsmb.conf, which of course is anything BUT meant for a user's environment.


Unfortunately, I don't really understand the 1st paragraph of your explanation and/or how it implies that /Volumes is frowned upon or trouble. A software update isn't going to trample on my /usr/local/bin script. Instead I'd expect changes made in /etc/ (required for autofs operation) to have a MUCH higher risk of being trampled upon. Also, both NFS and AutoFS appear to me to be more generic UNIX methods for how to do things, and the UNIX way is not generally or consistently the OS X way (e.g. launchd), so I'm probably going to stick with what I interpret as the OS X way (/Volumes), but I do appreciate the suggestions.


And as to your 2nd paragraph, this is a backup script I'm implementing. Why would I want it to use a user's credentials? Perhaps you've assumed this is a user-level script or operation? Otherwise, I fail to understand your point there as well.

Apr 20, 2012 8:10 AM in response to jaydisc

jaydisc wrote:


I don't interpret the -N flag the same way, especially because that file can also be read in from /etc/nsmb.conf, which of course is anything BUT meant for a user's environment.


Unfortunately, I don't really understand the 1st paragraph of your explanation and/or how it implies that /Volumes is frowned upon or trouble. A software update isn't going to trample on my /usr/local/bin script. Instead I'd expect changes made in /etc/ (required for autofs operation) to have a MUCH higher risk of being trampled upon.

Yes. That is my point.


As for /Volumes, you don't own that. That belongs to the Finder. It isn't a permissions issue, it is a conflict issue. If you mess around in there both your own scripts and the system scripts could have problems. It is better to stick to an area where there are no conflicts. It really doesn't make any difference, so why insist on /Volumes?


If you weren't already aware, all of the SMB code was re-written for Lion. Any assumptions you have about how mount_smbfs might work may no longer be valid. I am quite sure that Apple never tested mount_smbfs in the manner that you are attempting to use it. Whether it works or not is something you will have to discover on your own.


Also, both NFS and AutoFS appear to me to be more generic UNIX methods for how to do things, and the UNIX way is not generally or consistently the OS X way (e.g. launchd), so I'm probably going to stick with what I interpret as the OS X way (/Volumes), but I do appreciate the suggestions.


The "pure" OS X way would be to create an alias of the mounted volume, with user credentials saved in the keychain, and mount it when the user accesses it. The only approved way to do anything with SMB on OS X is through Active Directory.


The "pure" UNIX way of using root and a dedicated mount point will work too.


A "hybrid" way of using root and one of the Finder's mount points is likely to cause trouble.


And as to your 2nd paragraph, this is a backup script I'm implementing. Why would I want it to use a user's credentials? Perhaps you've assumed this is a user-level script or operation? Otherwise, I fail to understand your point there as well.


I'm sorry. I don't know if it was with you or someone else, but I have had this exact same argument with someone before. You can try whatever in the world you want to do. I will suggest ways to make it work, but if you won't accept those suggestions, then you have to make it work yourself. Obviously you aren't having much success with that.


Using root and SMB like this is simply not going to work. Backing up to SMB is simply not going to work. There are proven ways to access network resources. There are proven ways to perform backups. If you chose to invent new ways, the burden and responsibility is on you to make it work. If your backup doesn't work the day that you need it, you will have no one to blame but yourself.

Apr 20, 2012 8:17 PM in response to etresoft

What I've been asking you is where you have come to the opinion that /Volumes belongs to the Finder, as I've never seen or heard this before. As for conflicts, the only one I can imagine is if the Finder attempts to mount a volume of the same name, and if that's the case, it just adds a number to the end, e.g. the 1st mount would be /Volumes/Data, the next would just be /Volumes/Data 1. The Finder will ensure it does not conflict. I've continuously asked you for specific examples of potential issues, but you seemingly haven't provided any yet. "...is likely to cause trouble" just isn't cutting it for me.


I'm aware that Samba is no longer part of Lion, which I think we've all realized is likely to be why this behavior has changed. How are you "quite sure" Apple never tested mount_smbfs?


As to my "success", you must not be reading what I've written, as I've now had great success, to my own issue and that of the original poster, which is that you CAN run mount_smbfs as root, it just has to be via launchd.


Also, your interpretation that I'm backing up to SMB also demonstrates to me that you either haven't been reading what I've written, don't understand it, or I have failed to communicate it properly. The script in question mounts a directory from a PC on my network. I then use rsync to bring that PC's relevant contents to a directory on the Mac, from which I then use "proven ways to perform backups".


Thanks again for your help, but without some meaningful examples, I'm going to take your advice and leave it. Cheers.


To the original poster, just use launchd, and it works! 😉

Apr 20, 2012 8:35 PM in response to jaydisc

jaydisc wrote:


What I've been asking you is where you have come to the opinion that /Volumes belongs to the Finder, as I've never seen or heard this before.


It seems pretty obvious to me.


As for conflicts, the only one I can imagine is if the Finder attempts to mount a volume of the same name, and if that's the case, it just adds a number to the end, e.g. the 1st mount would be /Volumes/Data, the next would just be /Volumes/Data 1. The Finder will ensure it does not conflict. I've continuously asked you for specific examples of potential issues, but you seemingly haven't provided any yet. "...is likely to cause trouble" just isn't cutting it for me.

The Finder will ensure there are no name conflicts, that's all. Tell me, have you ever seen any instance where there was a number appended to a volume name that didn't require a reboot to fix?


How are you "quite sure" Apple never tested mount_smbfs?

I said I'm quite sure Apple has never tested mount_smbfs in the manner that you are using it.


As to my "success", you must not be reading what I've written, as I've now had great success, to my own issue and that of the original poster, which is that you CAN run mount_smbfs as root, it just has to be via launchd.

But that is extremely odd. I have seen many instances where something failed via launchd, but never one where it only worked in launchd. Your solution seems fragile.


The script in question mounts a directory from a PC on my network. I then use rsync to bring that PC's relevant contents to a directory on the Mac, from which I then use "proven ways to perform backups".

While that isn't as bad as what I thought you were doing, it does inspire me to ask again - why do you need root for that? If you are using rsync, why are you bothering with network mounts? Just use rsync.

Apr 20, 2012 8:51 PM in response to etresoft

> It seems pretty obvious to me


> The Finder will ensure there are no name conflicts, that's all. Tell me, have you ever seen any instance where there was a number appended to a volume name that didn't require a reboot to fix?


It's been my experience that when such a naming conflict does appear, ejecting both volumes, and re-mounting them in the order desired fixes it. In my case here, I'm absolutely sure there will be no conflict. It's good to know that's your only perceived issue with it, as I can live with that.


> I said I'm quite sure Apple has never tested mount_smbfs in the manner that you are using it.


That perplexes me, because my interpretation is that it's just a command line version of Finder -> Go -> Connect to Server, but perfect for automation or scripting.




Apr 20, 2012 9:44 PM in response to etresoft

Rsync isn't on the PC, and installing it, or modifying the PC in any other way, isn't an option. All I've got are SMB credentials to the relevant PC app's backup directory, which I mirror to a directory on the Mac, and let the Mac's backup take care of it.


As to why I'm using the username and path combination I've chosen, here are the results of some tests using three different users and two paths


1. OD user with only privileges for the folder the PC is synced to. Launchd.plist is modified to add username key.


Test 1. Run the script in Terminal with /Volumes as the parent directory of the mount: FAIL (mount_smbfs: server connection failed: Broken pipe)

Test 2. Run the script in Terminal with /tmp as the parent directory of the mount: FAIL (mount_smbfs: server rejected the connection: Authentication error)

Test 3. Run the script using launchd with /Volumes as the parent directory of the mount: FAIL (mount_smbfs: server rejected the connection: Authentication error)

Test 4. Run the script using launchd with /tmp as the parent directory of the mount: FAIL (mount_smbfs: server rejected the connection: Authentication error)


2. Local administrator. Launchd.plist is modified to add username key.


Test 1. Run the script in Terminal with /Volumes as the parent directory of the mount: SUCCESS

Test 2. Run the script in Terminal with /tmp as the parent directory of the mount: SUCCESS

Test 3. Run the script using launchd with /Volumes as the parent directory of the mount: FAIL (mount_smbfs: server rejected the connection: Authentication error)

Test 4. Run the script using launchd with /tmp as the parent directory of the mount: FAIL (mount_smbfs: server rejected the connection: Authentication error)


3. Root. Launchd.plist is modified to REMOVE username key.


Test 1. Run the script in Terminal with /Volumes as the parent directory of the mount: FAIL (mount_smbfs: server connection failed: Broken pipe)

Test 2. Run the script in Terminal with /tmp as the parent directory of the mount: FAIL (mount_smbfs: server connection failed: Broken pipe)

Test 3. Run the script using launchd with /Volumes as the parent directory of the mount: SUCCESS

Test 4. Run the script using launchd with /tmp as the parent directory of the mount: SUCCESS


The command in the script is in the format:


/sbin/mount -t smbfs -o nobrowse "//$SERVERUSER:$SERVERPASS@$SERVER/$SHARE" "$MOUNTDIR/$SHARE"


So, the "authentication errors" seem erroneous to me as the credentials did not change across the tests. Only the $MOUNTDIR variable and which user was running the command.

Thus, as far as which user to run it as, the ONLY user that seems to work while using launchd, is root.


Now, while both /Volumes/x and /tmp/x work as a mount path, there is one major reason I've chosen to stick with /Volumes, and that's how the umount command works. If I umount the volume from /Volumes/x, umount cleans it all up and removes the mount folder, voila. If I use umount to unmount the volume from a different directory (/tmp/x in my tests), IT DOES NOT clean up (i.e. delete) the mount point, thus leaving behind an empty folder, meaning I have to remove that folder in the script. So, what happens if the unmount fails and I do that? Bang, I've deleted the PC backup directory's contents. What if I am unable to delete it and the rsync command runs again? Bang, the Mac backup directory's contents get synced with an empty directory. So, using /Volumes seems to me to be a MUCH wiser choice.
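For what it's worth, a cleanup order along these lines avoids both failure modes regardless of mount path: unmount first, then remove the mount point only with rmdir, which refuses to touch a non-empty directory. MOUNTDIR and SHARE follow the variables used in the script above; the defaults here are just illustrative:

```shell
# Unmount first; only on success remove the (now empty) mount point.
# rmdir cannot delete share contents even if something goes wrong.
MOUNTDIR="${MOUNTDIR:-/Volumes}"
SHARE="${SHARE:-ExonetBackup}"
if umount "$MOUNTDIR/$SHARE" 2>/dev/null; then
    rmdir "$MOUNTDIR/$SHARE" 2>/dev/null && echo "unmounted and cleaned up"
else
    echo "unmount failed; leaving $MOUNTDIR/$SHARE alone" >&2
fi
```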

Apr 21, 2012 1:50 PM in response to jaydisc

jaydisc wrote:


Rsync isn't on PC, and installing it, or modifying the PC in any other way isn't an option. All I've got are SMB credentials to the relevant PC app's backup directory, which I mirror to a directory on the Mac, and let the Mac's backup take care of it.


Could you go the other way and have the PC mount a volume on the Mac and copy its backup files when they change?


1. OD user with only privileges for the folder the PC is synced to. Launchd.plist is modified to add username key.


Test 1. Run the script in Terminal with /Volumes as the parent directory of the mount: FAIL (mount_smbfs: server connection failed: Broken pipe)

Test 2. Run the script in Terminal with /tmp as the parent directory of the mount: FAIL (mount_smbfs: server rejected the connection: Authentication error)


I'm not familiar with Open Directory so I can't do much other than speculate. However, I would speculate that:

1. You could define this mount in Open Directory and it would be available to the user.

2. It might work better if the mount point were somewhere in the user's home directory. Although there are unrestricted UNIX permissions on those directories, the more restrictive Open Directory permissions may not allow it.


As far as the other commands go, I can't get past the fact that you still haven't tried a normal mount yet. Have you tried doing it all as a normal user with Applescript maybe? That would not be an elegant solution, but it would be a useful debugging step. I don't think it is a good idea to implement a process using root and launchd that you already know is flaky.

Now, while both /Volumes/x and /tmp/x work as a mount path, there is one major reason I've chosen to stick with /Volumes and that's how the umount command works. If I umount the volume from /Volumes/x, the umount command cleans it all up, removes the mount folder, voila. If I use umount to unmount the volume from a different directory (/tmp/x in my tests), IT DOES NOT clean up (i.e. delete) the mount point, thus leaving behind an empty folder, meaning I have to remove that folder in the script.


Why is that? Could it be that /Volumes is a special place perhaps?


So, what happens if the unmount fails, and I do that? Bang, I've delete the PC backup directory's contents.


You can always do "rmdir" without a "-r" option.


What if I am unable to delete it and the rsync command runs again? Bang, the Mac backup directory's contents get synced with an empty directory. So, using /Volumes seems to me to be a MUCH wiser choice.


An even wiser choice is checking result codes and stopping your script on failure.
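A minimal sketch of that advice, using set -e so the shell aborts at the first failing command instead of carrying on and, say, rsyncing an empty directory (the path is a placeholder):

```shell
# With "set -e" the script stops at the first command that exits
# nonzero, so a failed mount can never cascade into a bad rsync.
set -e
mnt=$(mktemp -d)/share    # stand-in for "$MOUNTDIR/$SHARE"
mkdir -p "$mnt"
# mount, rsync, and umount steps would go here; any one of them
# failing stops the script immediately under set -e.
rmdir "$mnt"
echo "all steps succeeded"
```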
