Discussion topic for RFC 46 – New Auth Flow
Sorry to beat a dead horse, but I’m a command line guy who likes bash and cron and computers with no keyboard, mouse, or monitor. Headless systems.
Are there plans to allow apps to get their credentials from a file, with no user interaction?
I know this will just be for geeks, but anybody who wants to do IoT-type projects will need their devices to access SafeNet after a power cycle, without user interaction.
When I look at this, it appears to almost address my concerns, but the initial granting of permission is still confusing to me in a headless environment.
Authorisation flow overview
- When an App asks to access the user’s area, the Authenticator must prompt the user to grant the requested access. This may be a multi-step process if the App requests unusual permissions, but should otherwise be a one-click procedure.
- If the App has not accessed the user’s data before, the Authenticator must create a new random key pair for the App and store its sign key in the user’s Session Packet, which lets vaults know the key is valid to sign in the user’s name. It then creates a new container for the App at another random location.
- The user’s sign key and the App’s sign key are added to the permissions of that container with full access rights, and the credentials are returned to the App.
- The app can now access the network and those resources directly with the given credentials.
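Just to make the flow above concrete, here is a minimal sketch of the one-time grant in toy Rust. All the names (KeyPair, SessionPacket, Container, authorise_app) and the simplified types are illustrative stand-ins, not the actual safe_core or authenticator API:

```rust
use std::collections::HashMap;

// Toy stand-ins for the real network/authenticator types.
struct KeyPair {
    sign_pk: [u8; 32],
    _sign_sk: [u8; 32],
}

struct SessionPacket {
    // app id -> app sign key; lets vaults check a key may sign in the user's name
    app_sign_keys: HashMap<String, [u8; 32]>,
}

struct Container {
    _location: [u8; 32],
    permissions: Vec<([u8; 32], &'static str)>,
}

// Placeholder randomness; a real implementation would use a CSPRNG.
fn random_bytes() -> [u8; 32] {
    [0u8; 32]
}

// One possible shape of the one-time grant: create app keys, record the app's
// sign key in the user's session packet, create a container at a random
// location and give both the user and the app full access to it.
fn authorise_app(
    app_id: &str,
    session: &mut SessionPacket,
    user_sign_pk: [u8; 32],
) -> (KeyPair, Container) {
    let app_keys = KeyPair {
        sign_pk: random_bytes(),
        _sign_sk: random_bytes(),
    };
    session.app_sign_keys.insert(app_id.to_string(), app_keys.sign_pk);

    let container = Container {
        _location: random_bytes(),
        permissions: vec![(user_sign_pk, "full"), (app_keys.sign_pk, "full")],
    };
    // The container details and app credentials go back to the app, which can
    // then talk to the network directly without further user interaction.
    (app_keys, container)
}

fn main() {
    let mut session = SessionPacket { app_sign_keys: HashMap::new() };
    let (_keys, container) = authorise_app("example.iot.sensor", &mut session, random_bytes());
    println!("container created with {} permission entries", container.permissions.len());
}
```

The important point for the headless case is that everything after authorise_app needs no UI: the app holds its credentials and talks to the network directly.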
Wouldn’t it be good (if possible) to be able to set an expiration date for a token? By default it would be infinite, but the user could choose (individually for each app when authorizing it) to have it automatically revoked after 2 hrs, 12 hrs, 30 days, etc.
If the user is just trying out an app and may decide not to use it afterwards, it could be good for safety reasons to have that app’s access automatically revoked.
At the very least, we should be able to easily review and edit an understandable list of apps and the authorisations given to each.
Yes, one key concern I had for this new flow is that exactly this will be possible: an app needs to authenticate only once and can then store the authentication token however it likes. It is intended to allow exactly the IoT case where you grant access once but then don’t want any further interactions.
My current thinking is that we probably want to provide a “device manager”-kinda app in the mid-term, which acts similarly to how the browser does today, but registers devices (rather than web apps) against the authenticator. In my dream scenario, you’d put your IoT device somewhere on the network, start this device manager on your laptop (and hopefully later a mobile device), it discovers the new device on the network, registers it with the authenticator and hands the access information over to the device. This could equally work for devices in a data centre (aka servers).
As for “a headless authenticator”: while that would potentially be possible, considering the complexity of the UI you can’t really offer this kind of complex workflow with command-line arguments. And we do not want to store user credentials (ever!) or take them as command-line arguments (bash-history hacking!), so we would have to build the entire UI again in ncurses or something - which would be a lot of work, especially to maintain. We are a small team, and one important aspect of this authenticator is that it shares the exact same code base across all desktop and mobile platforms. Building and maintaining a second UI is a lot of work for very little gain.
That said, with this step we will also be moving all code that “actually” does something out of NodeJS and into a Rust binary for the authenticator, reducing JS to the browser side that renders the UI. Thus, it shouldn’t be too hard to provide another UI - even from the community.
With this change the app keys are now stored within the network itself and can only be revoked with the user’s credentials (aka by the authenticator). We briefly discussed whether we might want to expire keys, but agreeing on the current time in distributed systems is really hard, and as we haven’t implemented that, nor cron jobs, yet, it would be a lot of work - at the moment.
And while we discussed maybe offering that from the authenticator side, in that case you’d run into the problem that you can’t offer it reliably either: the authenticator could be stopped before the time expired (especially on mobile) or the connection could be lost. And we do not want to offer security-relevant features in such an unreliable way. If the user told us a key should only be valid for 12 h, we want to be damn sure it can’t be used after that. But we can’t really offer this now, so we don’t want to give the false impression that we might.
Thanks @ben, yes I suspected this could be a problem, and I agree it’s not good to have the authenticator expire keys.
Just in case you are not aware, TV set-top-box devices work like this within a DOCSIS network: they are registered and a certificate is installed in the device, which it then uses to communicate with the servers (CMTS).
Are there any conditional access mechanisms available or being considered other than using time? Such as revoking after N requests, or a general “Revoke all temporary grants to all apps (since login / ever)”, etc.
Thinking from the user side, I’m wondering what I might want, and some kind of granularity in revocation might be useful. As a user I came up with three broad categories of access scope to help me think this through:
- Everything Scope - this scope is always as high value/risk as the most valuable data in the account. Users can know this, and in theory could manage risk and security with different accounts: for example, keeping one for housekeeping, general administration and small change, and another account for Safecoin vault earnings or investigative journalism. I think that’s fine in theory, but IMO it is hard for most people to do, so it would be better if value/risk could be managed by granularity of access scope within the account, for as many users and use cases as possible. However, as I understand it, whatever we do will have limits and is not going to be very secure; the only very secure boundary is going to be account login, because apps can’t be trusted to store credentials as well as “in the user’s head”, or not to secretly steal credentials from each other. This means extra care will be needed with everything scope, unless I’m wrong? This has occurred in the past. By extra care, I mean ensuring the user understands, knows what alternatives are available, and finds them easy to make use of at the point this is realised (which might be long after the app is first installed or used).
- App Specific Silo Scope - here value equates to the most valuable data that the app can generate, access, or be given by the user. So value/risk is very variable, but often well defined according to the nature of a given app, and so relatively easy for a user to estimate and understand, even in advance. Examples: password manager vs invoice generator vs calendar vs contacts manager vs alarm clock, etc. This is not always the case though, see next…
- Task or Data Value Scope - covers the case where value/risk relates directly to particular data or the task associated with it. In terms of apps, think of a spreadsheet, word processor, etc., where the value of data produced by the app, or which you want the app to access, can’t be known in advance, or is simply anything associated with a particular task. For a given task or data silo it will also be impossible to know in advance which apps will need access to it, so here authorisation is at the point of use, and perhaps per use, not per app. Thinking in terms of tasks, we can have shopping (so a shopping list), writing a product review for a magazine article (so spreadsheets comparing products, a word processor for drafts, the article, etc.), or working on MaidSafe’s next acquisition, etc. For many things default security is adequate, for others not so much, but only for very high value/risk things will the majority of users bother to take even the simplest precautions (look at Hillary Clinton’s email, and others who should know better at the top of government using services like Yahoo! even after they’ve been exposed as insecure…again!). I don’t know if that makes task/data-silo access granularity pointless, or just means we need to make such a facility very intuitive and as effortless as possible. I think it is worth trying though. It is not done well yet only because it is very hard to achieve (because it requires humans to think, and do work!), not because it isn’t very important. It is at the root of much of the surveillance and invasion of privacy we know about, and is what makes it hard for anybody to regain control. So for example we could consider the ability to lock data according to which folder it is in, with permission to read and/or write required according to criteria I have yet to think about: for example, on access per session, or in order to read, or in order to write, per folder or even per file. Obviously, though, there is no point providing features if they aren’t going to get used, so I think we need to consider usability almost as much as technical feasibility and theoretical value (by which I mean security achieved). Even if we can’t do this now, it may be worth thinking about in case there are things we’d like to implement in advance that ensure it can be achieved later with the least effort and burden for users.
IMO usability is crucial in all of this. Everyone has to log in to their SAFE account, but after that we know hardly anyone will put an ounce more effort into managing their risk and security than we actually enforce, and anything we enforce which causes the least bit of inconvenience will push users towards less secure alternatives, which is bad.
In the end everyone wins by keeping as many users as possible more secure, so I think we should work hard to maximise usability, and within that do the best job we can to provide security.
Of course other mechanisms are being considered, but this RFC doesn’t contain anything about automatic revocation, only about authentication and user-triggered revocation. While we might add any of those in the future, we haven’t discussed that yet, nor have we gained enough experience (aka having apps) to know what we’d be doing there. Automatic revocation is out of scope for this one.
Not sure where to put this but rather than start a topic I’ll try here…
Friendly Account Names
Quite a lot of us will have multiple accounts and at the moment we have no way to:
- know which we’re logged into
- or refer to them without potentially exposing the Account Secret (e.g. by writing down a list of the accounts we have set up somewhere)
An App could do this separately, but if we provide for it within the standard account metadata, and it is accessible to all apps, I think that it would be much more useful - each App would be able to show which account is active for example.
Example account names might be:
Daily
Wallet
Very Secret Stuff
These would have no meaning in themselves - just labels that the user can set and edit, with the value stored in the metadata for the account itself. So it is not an account “username”; you can’t type in “Daily” to say you’re trying to log into that account.
When SAFE Beaker, or any App, authorises with SN and retrieves account metadata, it would be able to show this somewhere in the UI:
- “Connected to SAFE Network / Unauthorised”
When you authorise, this might become:
- “Connected to SAFE Network / Daily account”
This would require a way to “name” accounts, but this need not interfere with account creation at all; account creation could, for example, include an extra input field with a default value of “unnamed account”, which can be overwritten or left as is. SAFE Beaker or other apps could provide UI to edit the name of the currently authorised account.
I doubt any of this needs API changes, but providing for it from the start (e.g. in the Beaker account creation UI, plus an account renaming UI) and specifying the corresponding account metadata value in the API docs would allow apps to cater for users with multiple accounts in a coherent way.
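Purely to illustrate how small the feature is, here is a toy sketch: a single optional key in the account metadata, read and written by any app. The "friendly_name" key and the metadata map are made up for this example, not an existing SAFE API:

```rust
use std::collections::HashMap;

// Toy account metadata store; "friendly_name" is a hypothetical key.
struct AccountMetadata {
    values: HashMap<String, String>,
}

impl AccountMetadata {
    fn friendly_name(&self) -> &str {
        // Default shown when the user never set a label.
        self.values
            .get("friendly_name")
            .map(String::as_str)
            .unwrap_or("unnamed account")
    }

    fn set_friendly_name(&mut self, name: &str) {
        self.values.insert("friendly_name".to_string(), name.to_string());
    }
}

fn main() {
    let mut meta = AccountMetadata { values: HashMap::new() };
    println!("Connected to SAFE Network / {}", meta.friendly_name());
    meta.set_friendly_name("Daily");
    println!("Connected to SAFE Network / {} account", meta.friendly_name());
}
```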
I think an API modification is necessary to store the friendly name in the account. This is needed to retrieve the name when the user reconnects from another station. And even if the user reconnects from the same station, I would advise against storing this info on the local disk, to avoid leaving traces after visiting the network.
BUT,
currently safe_core generates the real locator and credentials from the user-entered secret and password in a two-step process involving an intermediate (password, keyword, pin) triplet:
// derive the intermediate triplet from the two user-entered secrets
let (password, keyword, pin) = utility::derive_secrets(acc_locator, acc_password);
// network id (location) of the account's session packet, built from keyword and pin
let acc_loc = Account::generate_network_id(&keyword, &pin)?;
// user credentials built from the derived password and pin
let user_cred = UserCred::new(password, pin);
I am not able to link this Rust code in safe_core to the JavaScript code in safe_launcher, but I suppose that the acc_locator variable corresponds to the user secret.
If this assumption is correct then I think there is a problem in this code, because the derive_secrets function generates a keyword and a pin that depend only on the secret. This means that two users cannot use the same secret (like Mark, for example). IMO this is clearly a bug, because anyone should be able to use the name Mark. The correction is very simple: avoid collisions on the session packet by adding the password as a complementary element in the real locator (the session packet is an MD keyed by the acc_loc variable).
If this bug is corrected (or my assumption is not correct, meaning that the acc_loc key already depends on both secret and password) then we wouldn’t need a third element for the friendly name, and the launcher could just display the first element of the credentials. This element should be renamed to something like “account name” instead of “account secret”, and anyone could use common names like Daily or Wallet without colliding with anyone else. The needed personal uniqueness and secrecy would be brought by the second element of the credentials, aptly named “account password”.
No API modification is needed with this solution, and an added advantage is that complexity requirements would need to be checked only on the second element of the credentials.
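To make the proposed correction concrete, here is a minimal toy sketch of the two locator-derivation variants, using std’s DefaultHasher purely as a stand-in for the real hashing/scrypt steps (the function names are made up for illustration, not safe_core’s):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy hash as a stand-in for the real key-derivation pipeline.
fn h(parts: &[&str]) -> u64 {
    let mut hasher = DefaultHasher::new();
    for p in parts {
        p.hash(&mut hasher);
    }
    hasher.finish()
}

// Current scheme (roughly): the network location of the session packet
// depends on the account secret only.
fn acc_loc_current(secret: &str) -> u64 {
    h(&[secret])
}

// Proposed scheme: mix the password into the locator derivation, so two
// users can both pick "Mark" as the first field without colliding, as long
// as their passwords differ.
fn acc_loc_proposed(secret: &str, password: &str) -> u64 {
    h(&[secret, password])
}

fn main() {
    // Same "account name", different passwords.
    assert_eq!(acc_loc_current("Mark"), acc_loc_current("Mark")); // collision
    assert_ne!(
        acc_loc_proposed("Mark", "pw-1"),
        acc_loc_proposed("Mark", "pw-2")
    );
    println!("no collision when the password is part of the locator derivation");
}
```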
Not a bug but a feature. The location depends only on the “secret”, which means the user must choose a unique secret. Otherwise, if the secret is something simple, like Mark, we decrease security and all responsibility rests solely on the password.
The javascript code is here.
You’re right that it’s not a bug, because MaidSafe coded it this way on purpose.
But initially, on 19 July, they coded a pin depending on both locator and password. See the commit comment:
User supplies 2 secrets: Account locator and Account password. Each is hashed. Keyword is the hash of locator and password is the hash of account password. Pin is derived from the combination of two hashes and acts a salt. These 3 derivatives then work as usual internally (going via Scrypt etc.).
And then they explicitly changed their mind on 26 July with the following commit comment:
Derive PIN exclusively from account locator - do not involve password in this process.
The problem is that I find the first version better:
- With the second version, the user must enter two password fields instead of the traditional name + password fields. This is complex for the user and doesn’t bring any added security; it’s only necessary to prevent the user from choosing common names, to avoid collisions. That is not a good reason for me.
- With the second version, some evolutions must be made to the API to implement what is asked by @happybeing. With the first version, the entered name could be the friendly name itself. With the second version, the user must enter a third field during account creation, which makes a total of 3 fields (2 passwords and a friendly name).
The second commit comment doesn’t explain why the pin was changed and I’m afraid these points were not considered.
Note: I call “password” a field whose value must meet complexity requirements on length and kinds of characters.
Don’t get me wrong, but isn’t this solely a UI feature of the authenticator? Do I understand correctly that you want distinctly separated accounts? Well, right now the authenticator only has single-account (per-session) support, but we are already pondering allowing multi-profile support, where you could store a name for some login credentials and hold them in a master account (maybe?), and the UI would ask you which account it should grant access to when an App asks to connect. This is already possible today, will be possible in the next version, and wouldn’t need any changes on the network itself.
This won’t be a focus of this version, nor is it on the scheduled plans I know of (and therefore, if we want to continue the conversation, I’ll split it out of the RFC convo). However, with the new authenticator, it should be possible for the community to fork it or provide a wrapper, which could allow you to do this already today.
The entire conversation with all reasoning and explanations can be found here. It is a complex issue and I’d rather not open the discussion about it again here. Especially as it has nothing to do with this RFC.
Don’t get me wrong, but isn’t this solely a UI feature of the authenticator?
Quite so (you know better than I do). I don’t mind how we get it, and your comments about multiple account handling go way beyond this. I’m just highlighting the issue, but as usual MaidSafe are way ahead of me.
Incoming: Updates on the public Containers, Container encryption
While working on the container encryption last week, we realised that the previous usage of Nonces would break our key-lookup system. Thus we created a different scheme, as outlined in this PR.
Secondly, the public container and many other details were not yet specified very well. This PR restructures some of the information, and defines the Container and a new LinksContainer convention. By using this and other existing conventions, the RFC could be made much clearer and focus on the differences of each part rather than explaining the entire mutable structure every time.
(quotes are from the last version linked by @ben)
rather than having a hierarchy of StructuredData pointing to “subdirectories”, we will flatten the structure into a single key-value-store mapping and emulate a file system like access on top of that.
If I understand correctly this means that there is no explicit notion of a directory, only an implicit one defined by the set of existing file names in a container. For example, a file named like A/B.txt defines an implicit directory A containing a B.txt file.
The problem I see is that an MD can hold only 100 entries, so a whole file system cannot have more than 100 files, and that’s not enough for me.
Where the key is a UTF-8-String encoded full path filename, mapping to the serialised version of either another Container in the network or a serialised file struct …
The Container should point to another container following the same convention as its parents - so at least NFS - or to a serialised file struct as described before.
These pointers to containers could be used to enlarge a file system, but then how is file name uniqueness enforced?
In a single container, two entries named A/B.txt cannot be created, but across several containers that’s impossible to control, as they are managed by different vaults.
The previous NFS, with each directory stored in an SD, didn’t have this problem.
Where did you read that? I didn’t know there was a 100-entry limit on MD. The only limit I know of is a total of 1 MB, and we want to lift that before ending alpha…
Secondly, we are already splitting the “root” up into multiple area containers, in order to be able to give specific access to just a subset of data. The containers spec already mentions the 7 defaults and explicitly leaves open that the user might create more top-level directories in the future.
Well, one key problem we had with the SD approach is that it required tree traversal if we wanted to change permissions. Though we explicitly allow containers to link to other containers, we’d still not traverse those and fix permissions; instead we allow that as a usage pattern to “link” a key into a separate area. Think of adding a link to your publicly shared website from the app directly into the _public container, so that you can let others know about it.
Regarding pure NFS, the only reason we even added it is the current 1 MB limit of MD, but if this becomes an often-required usage pattern, or we can’t lift that limit, we might change the implementation later on to provide for other patterns.
In the first iteration we will probably not make container pointers transparent; instead you’d have to explicitly ask for a key that exists, and if that key is a container, we return a container. Then you could do another lookup in there. So you’d have to decide whether you want A/b.txt, or b.txt within the container A by first looking up A and then the key b.txt within it. But you’d have to know where you want to split it. If you explicitly expected a file at that location, we might also tell you that there isn’t any file there (but a container).
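A rough sketch of what such a non-transparent lookup could look like from the app side, using toy types (Entry, Container, lookup and the addresses here are illustrative, not the actual API):

```rust
use std::collections::HashMap;

// Toy model of a MutableData-backed container: keys are path-segment strings,
// values are either a serialised file or a link to another container.
enum Entry {
    File(Vec<u8>),
    ContainerLink(String), // address of the linked container
}

type Container = HashMap<String, Entry>;

// Non-transparent lookup: the caller decides where to split the path and
// performs one lookup per container, following links explicitly.
fn lookup<'a>(
    containers: &'a HashMap<String, Container>,
    root: &str,
    keys: &[&str],
) -> Option<&'a Entry> {
    let mut current = containers.get(root)?;
    let (last, intermediate) = keys.split_last()?;
    for key in intermediate {
        match current.get(*key)? {
            Entry::ContainerLink(addr) => current = containers.get(addr)?,
            Entry::File(_) => return None, // expected a container, found a file
        }
    }
    current.get(*last)
}

fn main() {
    let mut a = Container::new();
    a.insert("b.txt".into(), Entry::File(b"hello".to_vec()));
    let mut root = Container::new();
    root.insert("A".into(), Entry::ContainerLink("container-a".into()));

    let mut network = HashMap::new();
    network.insert("root".to_string(), root);
    network.insert("container-a".to_string(), a);

    // First look up "A" in the root container, then "b.txt" within it.
    match lookup(&network, "root", &["A", "b.txt"]) {
        Some(Entry::File(bytes)) => println!("found file, {} bytes", bytes.len()),
        Some(Entry::ContainerLink(_)) => println!("that key is a container, not a file"),
        None => println!("not found"),
    }
}
```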
Whether we’d do transparent traversal in a later stage within the NFS emulation hasn’t been decided yet. And while I could share what I consider the better approach for this, there will be the appropriate RFC and time to discuss it then.
In the limits section of the RFC
The MutableData data type imposes the following limits:
- Maximum size for a serialised MutableData structure must be 1 MiB;
- Maximum entries per MutableData must be 100;
- Not more than 5 simultaneous mutation requests are allowed for a (MutableData data identifier + type tag) pair;
- Only 1 mutation request is allowed for a single key.
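For reference, the quoted limits expressed as constants. The identifier names are hypothetical (the real routing/vault code may use different ones, and the 1 MiB limit may be lifted later, as noted above):

```rust
// Hypothetical names for the quoted MutableData limits.
const MAX_MUTABLE_DATA_SIZE_BYTES: usize = 1024 * 1024; // 1 MiB serialised
const MAX_MUTABLE_DATA_ENTRIES: usize = 100;
const MAX_CONCURRENT_MUTATIONS_PER_MDATA: usize = 5; // per (data id + type tag)
const MAX_CONCURRENT_MUTATIONS_PER_KEY: usize = 1;

fn main() {
    println!(
        "MD limits: {} bytes, {} entries, {} mutations per MD, {} per key",
        MAX_MUTABLE_DATA_SIZE_BYTES,
        MAX_MUTABLE_DATA_ENTRIES,
        MAX_CONCURRENT_MUTATIONS_PER_MDATA,
        MAX_CONCURRENT_MUTATIONS_PER_KEY
    );
}
```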