I’m trying to get started writing applications for the SAFE Network in Rust.
I’ve tried to find a minimal standalone example or tutorial to get me started, but I haven’t been able to find anything. Pretty much all the information about using the APIs seems to be about creating a web browser application in one way or another.
Hi @lukas! Thanks for your interest in building Rust apps for SAFE.
The API package you need to use is safe_app (see the safe_app rustdoc).
Unfortunately there are no extensive tutorials on Rust API usage just yet, but the most complicated part is actually the initialisation and the entry point: you need to send a request to the Authenticator and obtain app keys before you can communicate with the SAFE Network (even with its mock version). After that, the API is similar to what you can find in NodeJS: it’s all about Mutable Data, Immutable Data, and so forth (just with a more idiomatic feeling).
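In the meantime, if you just want to experiment against the mock network, the testing helpers can stand in for the full Authenticator round-trip. A minimal sketch, assuming the testing feature is enabled and that create_app lives in safe_app::test_utils (the exact location may differ by version):

use safe_app::App;
// Assumed helper location, gated behind --features testing; may differ by version.
use safe_app::test_utils::create_app;

fn main() {
    // Registers a random app against the mock network, skipping the
    // Authenticator request/response dance described above.
    let app: App = create_app();
    // From here, app.send(...) works as it would for any registered app.
    drop(app);
}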
Give us some time and in the following post I or @marcin will show you a minimal Rust example that establishes a connection with the Network.
Is there any minimal example I could look at? One example I thought of is a simple function checking if a MD or ID exists on the network. I started on a little example, but I didn’t get any further than this:
use safe_app::{App, Client, XorName};
use futures::future::Future;

fn main() {
    let app = App::unregistered(|| (), None).unwrap();
    app.send(|client, _context| {
        let xor = XorName::from_hex(
            "1234567890123456789012345678901212345678901234567890123456789012",
        ).unwrap();
        client.get_mdata(xor, 4001).wait().unwrap();
        None
    }).unwrap();
}
This just hangs (with use-mock-routing). It’s the most minimal example I could think of that establishes a connection to the network.
No minimal ones at this time, unfortunately, but there’s a client_stress_test example which shows more-or-less complete functions.
As for your specific example code, there are a couple of things to note:
You need to make sure the future runs in the context of the core event loop. That means the closure should return client.get_mdata(...).into_box().into() instead of None (returning None means there is no future to execute). As written, your .wait() call simply blocks forever because it isn’t aware of the core event loop context.
You need to make sure the future has actually executed before your program terminates. Given the asynchronous nature of this, the app.send(...) call returns as soon as possible, and right after app.send the program shuts itself down, taking the core event loop with it. So to continue execution properly we need to wait for the future, and we usually use std::sync::mpsc::channel for that: the sender end is triggered once the future has run (tx.send(..)), and the receiver blocks until it gets that signal (rx.recv()), as in the full example below.
We actually have a helper function that encapsulates this pattern, but you might need to compile SCL with --features testing to use it. You can then simply get the result of the execution, e.g.: let res = run(|client| client.get_mdata(xor, 4001)).
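To make that concrete, here’s roughly how the whole example collapses with run. This is only a sketch: the exact path and signature of run may differ between versions (the version sketched here takes just the closure, as in the snippet above), and it’s gated behind --features testing:

use safe_app::{App, Client, XorName};
// Assumed helper location; may differ by version.
use safe_app::test_utils::run;

fn main() {
    let _app = App::unregistered(|| (), None).unwrap();
    let xor = XorName::from_hex(
        "1234567890123456789012345678901212345678901234567890123456789012",
    ).unwrap();
    // `run` drives the core event loop and blocks until the future resolves,
    // instead of waiting on an mpsc channel by hand.
    let res = run(|client| client.get_mdata(xor, 4001));
    println!("{:?}", res);
}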
Sorry this is all not well documented, and I can see the need for functions like run to be a part of the public API. That’s what we’re working on now, improving the Client Libs documentation as the first step.
Overall, your example would turn out to be something like this:
use futures::future::Future;
use safe_app::{App, Client, FutureExt, XorName};
use std::sync::mpsc;

fn main() {
    let app = App::unregistered(|| (), None).unwrap();
    let (tx, rx) = mpsc::channel();

    app.send(move |client, _context| {
        let xor = XorName::from_hex(
            "1234567890123456789012345678901212345678901234567890123456789012",
        ).unwrap();
        let tx2 = tx.clone();

        client
            .get_mdata(xor, 4001)
            .map(move |res| {
                println!("{:?}", res);
                tx.send(()).unwrap()
            })
            .map_err(move |e| {
                println!("ERROR: {:?}", e);
                tx2.send(()).unwrap()
            })
            .into_box()
            .into()
    }).unwrap();

    rx.recv().unwrap();
}
Thanks a lot, Nikita! I really appreciate your quick and extensive answer on this. I will learn more about Futures.
I think such self-standing examples are great. A collection of such examples would hopefully turn into much-used idioms. I’m excited to see how the client libraries can be improved upon! These examples help in designing the APIs too!
I have a question about this specific example. There seems to be a huge difference between running with --release and without. What might be the reason behind this?
With:
$ time cargo run --release --features "use-mock-routing"
Finished release [optimized] target(s) in 0.29s
Running `target/release/safe-cli`
ERROR: No such data - CoreError::RoutingClientError -> NoSuchData
real 0m0.936s
user 0m0.771s
sys 0m0.117s
Without:
$ time cargo run --features "use-mock-routing"
Finished dev [unoptimized + debuginfo] target(s) in 0.28s
Running `target/debug/safe-cli`
ERROR: No such data - CoreError::RoutingClientError -> NoSuchData
real 0m29.323s
user 0m29.130s
sys 0m0.123s
Agreed! As an aside, the examples were originally written a couple years ago, when the Client Libs API looked much different – we didn’t even have the safe_core/auth/app split we have today. We re-wrote the examples this past summer and were able to simplify them hugely, a testament to how much progress we have made in making the APIs simple and usable.
That’s to be expected. We briefly mention this in the SCL readme:
We run tests in release mode (indicated by the --release flag) in order to catch rare FFI bugs that may not manifest themselves in debug mode. Debug mode is also unoptimized and can take an inordinate amount of time to run tests.
For these reasons, we encourage always running in release mode. We’ll add this information to the Building from Source section of the readme as well.
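(As an aside: if you ever do need quicker iteration while keeping reasonable runtime speed, Cargo lets you raise the optimisation level of the dev profile in Cargo.toml, trading compile time for runtime performance:)

# In Cargo.toml: optimise debug builds as well
[profile.dev]
opt-level = 2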
It would also be nice to have a section about the examples, as they are very helpful but we don’t even mention them in the readme! I’ve added this to our documentation task list.
I’ve just looked it up and it indeed seems much different! Nice to see everything evolving in that way.
Thanks, that makes sense!
I have one more question, regarding the run duration of test_create_app_with_access with --release. I made the following test (just as a quick exercise to call an FFI function that has a callback):
#[test]
fn it_might_work() {
    use ffi_utils::FfiResult;
    use safe_app::App;
    use safe_app::test_utils::create_auth_req;
    use safe_core::btree_set;
    use safe_core::ipc::Permission;
    use std::collections::HashMap;

    let mut container_permissions = HashMap::new();
    container_permissions.insert("_public".to_string(), btree_set![Permission::Read]);
    let auth_req = create_auth_req(None, Some(container_permissions));
    let auth_req = auth_req.into_repr_c().unwrap();

    extern "C" fn call(
        _user_data: *mut std::os::raw::c_void,
        _error: *const FfiResult,
        _o_app: *mut App,
    ) {
    }

    unsafe {
        safe_app::ffi::test_utils::test_create_app_with_access(&auth_req, std::ptr::null_mut(), call);
    }
}
The result is this:
$ time cargo test --release --features "use-mock-routing testing"
Finished release [optimized] target(s) in 0.27s
Running target/release/deps/decorum-9da058a1f64426ec
running 1 test
test tests::it_might_work ... ok
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
real 0m6.357s
user 0m5.705s
sys 0m0.626s
I found this when I was running a test using loginForTest from safe_app_nodejs; it took more time than I expected. Tracing that function, I saw that it calls the test_create_app_with_access function, which is why I made the above test.
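(One caveat I realise about my test above: it never waits for the callback, so if test_create_app_with_access completes asynchronously the timing could be misleading. Here’s an untested sketch of how the test body could block on the callback instead, using the usual trick of smuggling a channel sender through user_data:)

use std::os::raw::c_void;
use std::sync::mpsc;

extern "C" fn call(user_data: *mut c_void, _error: *const FfiResult, _o_app: *mut App) {
    // Reclaim the sender passed through `user_data` and signal completion.
    let tx = unsafe { Box::from_raw(user_data as *mut mpsc::Sender<()>) };
    let _ = tx.send(());
}

let (tx, rx) = mpsc::channel::<()>();
let user_data = Box::into_raw(Box::new(tx)) as *mut c_void;
unsafe {
    safe_app::ffi::test_utils::test_create_app_with_access(&auth_req, user_data, call);
}
rx.recv().unwrap(); // block until the callback has fired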
Edit: Hmm, I just tried the loginForTest on Windows and I do not have the same issue (as the above was tried on Arch Linux). Perhaps this is a bug? Can someone confirm?
Hmm, I tried this on OSX and it did not take too long:
real 0m0.792s
user 0m0.474s
sys 0m0.225s
Unfortunately, we do not officially support Arch Linux. It isn’t feasible for us to provide support and troubleshooting for every Linux distribution out there.
If you can reproduce this on a supported platform (Windows, OSX, or Ubuntu x64) then please file an issue on GitHub so we can track it properly. We encourage you to do so even if it turns out not to be a bug – as long as it is a platform that we are able to provide support for.
I fully understand and even support that! I will see if there is anything I can find that would be reproducible on a supported platform. Perhaps someone could try on Ubuntu?
Update: I tried the example again today and it ran much faster, but there was still a huge delay (2s) versus Windows. This is the purest example I could distill that still produced the delay (found by tracing the calls made from within each function):
use rand::{self, Rng};
use safe_authenticator::Authenticator;

fn main() {
    let mut rng = rand::thread_rng();
    let locator: String = rng.gen_ascii_chars().take(10).collect();
    let password: String = rng.gen_ascii_chars().take(10).collect();
    let invitation: String = rng.gen_ascii_chars().take(10).collect();

    let _auth = Authenticator::create_acc(locator, password, invitation, || ()).unwrap();
}
I ran strace and found that the call taking the most time was futex().
Then I decided to install Ubuntu and run it there — no delay. Then I switched back to Arch Linux and it ran fine too! So I’m not sure what was going on.
Good to hear that it runs fine on Ubuntu. It could be a peculiarity of Arch or something odd in the way your instance is configured; there are a lot of different permutations in the Linux world.
My theory is that this was caused by secret derivation [e.g. here] not being optimised on that particular system for whatever reason. Secret derivation in debug mode takes 10x as long or more than in release mode; it’s a very resource-intensive process where optimisation makes a big difference, so that would be my guess as to the cause of the slowdown here.
Not something we’ll investigate for now, but we’re noting it down in case the issue crops up again.
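As a toy illustration of how big the debug/release gap can be for this kind of CPU-bound work (this is not SCL’s actual derivation code), try timing a key-stretching-style loop in both modes:

use std::time::Instant;

// Toy stand-in for key stretching: a long, CPU-bound mixing loop.
fn stretch(mut acc: u64, rounds: u32) -> u64 {
    for _ in 0..rounds {
        acc = acc.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
    }
    acc
}

fn main() {
    // Use a runtime input so the optimiser can't fold the loop away.
    let seed = std::env::args().count() as u64;
    let started = Instant::now();
    let out = stretch(seed, 200_000_000);
    println!("{} in {:?}", out, started.elapsed());
}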
@lukas, you’re trying to use the Rust APIs, right?
I’d suggest having a look at the tests within the APIs themselves to get a better idea. Note that the doctests are run with the mock-vault and fake-auth functionality; you can see which tests are intended for which feature flag here. (Of course you can use those APIs normally; the flags are only needed for the tests.)