As far as I understand, GET requests will be free, but this could create a huge vulnerability to DDoS attacks by Mirai botnets and the like.
A simple GET request will trigger a flurry of requests between the nodes in order to fetch the data, reward the farmer, etc.
For example, the requester might spend 40,000 CPU cycles to generate and send the request, while the network spends 400,000 or even 4,000,000 cycles to serve it.
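Taking those (purely illustrative, back-of-the-envelope) numbers at face value, the implied amplification factor would be roughly:

$$\frac{400\,000}{40\,000} = 10\times \quad\text{up to}\quad \frac{4\,000\,000}{40\,000} = 100\times$$

i.e. every CPU cycle the attacker spends could cost the network somewhere between 10 and 100 cycles.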
It probably won't bring the network down, but it seems like quite a lot of leverage. Are the designers of the SAFE Network OK with that “problem”?
I suggest you search the main forum for DDOS and also caching. This attack has been discussed there along with some of the measures proposed for mitigating it.
Yes, this is a topic for the community forum (https://safenetforum.org/), and as @happybeing says, there is plenty of information and discussion there.
This forum is designed for developers to discuss app and core development issues, while the community forum covers a wide range of topics: attacks, technical questions, design ideas and issues. The community forum is a wealth of information, speculation and more.
I hope you understand.
happybeing, rob, thank you for your replies. I totally understand that you are probably fed up with the same questions popping up again and again, but there is a very good reason for that: there is simply not enough documentation about the SAFE Network that can be used as an authoritative reference. It is even worse: there are several places where you can find outdated or conflicting info.
I am trying to develop a Java API for the SAFE Network and I am very frustrated that there is no documentation (and that this has lasted more than a year).
Frankly, with such basic question(s) I am trying to establish whether this project is mature enough to be worth investing time and effort in.
OK, the basic DDoS protection is caching, which will be enabled (or already is). It means that during a DDoS attack the requested chunks get cached closer and closer to the attacking machines.
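To make the idea concrete, here is a minimal sketch of the kind of opportunistic LRU chunk cache a relay or hop node could keep. The class name, key type and capacity are made up for illustration; the real node code is written in Rust and may work quite differently.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Toy illustration of opportunistic chunk caching at a relay/hop node.
 * Chunks are immutable and content-addressed, so a cached copy served by
 * any node on the route is as good as one fetched from the data holders.
 * Names, key type and capacity are hypothetical.
 */
public class ChunkCache {
    private final int capacity;
    private final Map<String, byte[]> cache;

    public ChunkCache(int capacity) {
        this.capacity = capacity;
        // accessOrder = true gives LRU eviction semantics
        this.cache = new LinkedHashMap<>(capacity, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
                return size() > ChunkCache.this.capacity;
            }
        };
    }

    /** Returns the cached chunk, or null if this node has to forward the GET. */
    public synchronized byte[] get(String chunkAddress) {
        return cache.get(chunkAddress);
    }

    /** Called when a chunk passes through this node on its way back to the requester. */
    public synchronized void put(String chunkAddress, byte[] chunk) {
        cache.put(chunkAddress, chunk);
    }
}
```

Because chunks are immutable and content-addressed, repeated GETs for the same chunk stop propagating after the first hop or two: the attacker mostly ends up warming caches near itself rather than loading the nodes that actually hold the data.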
Also, relay nodes will only allocate a portion of their own bandwidth to each client machine they are relaying for.
So an attacking node may have a few relay nodes servicing it, and each of them limits the bandwidth it allows that machine. If an attacking computer is after just one chunk, then only one relay node is in use: the first node contacted caches the requested chunk, and the bandwidth limit also prevents the relay node of the node supplying the data from being clogged up.
If the attacking node is requesting hundreds or thousands of different chunks, then a few (maybe 3) relay nodes are relaying to the first-hop nodes, so the total bandwidth that attacking node is allowed is the sum of those 3 relay nodes' limits. And because there are so many different chunks, each of those relay nodes connects to maybe 50 first-hop nodes along the routing paths to the chunks, meaning around 150 first-hop nodes end up caching the hundreds or thousands of chunks being requested. And because the relay nodes are bandwidth-limiting the attacking node, each of those first-hop nodes caching the chunks will not be heavily loaded.
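As a rough sketch of that per-client bandwidth limit, here is a token-bucket style limiter a relay could keep for each connected client. The rate, burst size and class name are hypothetical, purely to illustrate the idea; the actual mechanism in the node implementation may be quite different.

```java
/**
 * Toy token-bucket limiter: each connected client gets its own bucket,
 * so a single aggressive client cannot consume the relay's full bandwidth.
 * Rates and names are hypothetical.
 */
public class ClientBandwidthLimiter {
    private final long bytesPerSecond;   // refill rate allocated to this client
    private final long burstBytes;       // maximum bucket size
    private double availableBytes;
    private long lastRefillNanos;

    public ClientBandwidthLimiter(long bytesPerSecond, long burstBytes) {
        this.bytesPerSecond = bytesPerSecond;
        this.burstBytes = burstBytes;
        this.availableBytes = burstBytes;
        this.lastRefillNanos = System.nanoTime();
    }

    /** Returns true if the relay should forward this request/response now. */
    public synchronized boolean tryConsume(long requestBytes) {
        refill();
        if (availableBytes >= requestBytes) {
            availableBytes -= requestBytes;
            return true;   // within this client's share of the relay's bandwidth
        }
        return false;      // over the limit: delay or drop, the client is throttled
    }

    private void refill() {
        long now = System.nanoTime();
        double elapsedSeconds = (now - lastRefillNanos) / 1_000_000_000.0;
        availableBytes = Math.min(burstBytes, availableBytes + elapsedSeconds * bytesPerSecond);
        lastRefillNanos = now;
    }
}
```

With, say, three relay nodes each enforcing such an allowance, the attacker's total throughput is capped at the sum of those three limits no matter how many different chunks it requests.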
Thus the DDoS is very much diluted, with the attack spread out across the network, which simply absorbs it. You can barely call it a DDoS at that stage. Obviously this has to be tested to see the actual effect, but I think you can see that to pull off a real “multiplying” DDoS attack you would need a very large number of attacking nodes (relative to the size of the network) to really start loading it down. And unlike a typical DDoS attack, this only slows things down rather than denying service to any particular piece of data or site.
Hey @ogrebgr! Glad to see your interest in Java APIs for the SAFE network. There is an ongoing safe_app_java project. We’re just resolving some Android JNI issues and we also have some of the API documentation in place. You could have a look at what’s happening on the project board.
Hi @lionel.faber,
Thank you for the link to the board. I was looking at the sources' last-modification dates and got somewhat discouraged seeing that the last change was 9 months ago…
Ah! You’ll find the recent work at this fork of the project. We’ve done some refactoring and we’re sorting out some of the issues before sending it into the main repo.