The responses will usually come from a group (client manager or data manager) and, if we implement the Disjoint Groups RFC, will be routed from group to group, accumulating at every hop. The group containing the client and the relay node will be the last hop, and it is the only group fully known to the client and the relay node, so it has to accumulate and sign the message.
(It has to do that in either case, of course, whether there is a relay node or the client is connected directly.)
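To make the accumulate-and-sign step concrete, here is a rough sketch of what the last group could do (the types, names and quorum size are made up for illustration, not taken from the RFC): collect signatures on a response from its own members and only release it towards the relay node / client once a quorum has signed.

```rust
use std::collections::{HashMap, HashSet};

const QUORUM: usize = 5; // hypothetical quorum for an illustrative 8-member group

type NodeName = [u8; 32];
type Signature = Vec<u8>;

struct ResponseAccumulator {
    group_members: HashSet<NodeName>,
    // response hash -> signatures collected so far from our own group
    pending: HashMap<[u8; 32], HashMap<NodeName, Signature>>,
}

impl ResponseAccumulator {
    /// Record one member's signature on a response; once a quorum of
    /// distinct group members have signed it, return the full set so the
    /// group-signed message can be forwarded to the relay node / client.
    fn add_signature(
        &mut self,
        response_hash: [u8; 32],
        signer: NodeName,
        sig: Signature,
    ) -> Option<Vec<(NodeName, Signature)>> {
        if !self.group_members.contains(&signer) {
            return None; // only our own group's members count towards the quorum
        }
        let sigs = self.pending.entry(response_hash).or_insert_with(HashMap::new);
        sigs.insert(signer, sig);
        if sigs.len() >= QUORUM {
            Some(sigs.iter().map(|(name, s)| (*name, s.clone())).collect())
        } else {
            None
        }
    }
}
```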
I think this is the confusion: the members of the last group will pass incoming relay messages to the relay node in the group for the client. They will just pass these through and not accumulate or refresh them to the group, unless it's a PutResponse. The client will accumulate, or at least there is an argument that perhaps it should.
Sorry, I think I'm still unclear about the scenario here: if the group receives a response from a group (client manager or data manager) for the client, it has to accumulate and sign it, doesn't it?
If it receives a message from a single node for the client (which currently cannot happen), it can just pass it on, of course. But it could also do that if the client were connected directly.
I am not so sure it does, actually. If our group passes the client a message signed by another group, what would the issues be? Are you thinking of a malicious node in our group faking group messages?
Somehow the client will need to verify that the signatures are from a legitimate group in the network. My view was that we’ll achieve that by keeping the client up-to-date regarding the members of the group it is connected to (e.g. by sending it all link blocks), so it could verify any message signed by that group.
If the message is signed by a different group, how does the client know that those signatures belong to that group’s members or to any valid nodes at all?
For example, yes: what if a node sent a message with signatures from keypairs that it generated by itself, and claimed that those were the keys of a different group?
That’s similar to the original reason for moving to group-to-group hops: Signatures are only useful if you know the signatory, and somehow we need to define that web of trust in a way that the client receives a message signed by nodes that it knows.
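To pin down what "verify" would mean for the client, here is a minimal sketch of the check being described, assuming the client is kept up to date with its connected group's public keys (e.g. via link blocks). The quorum constant and the `verify` closure are stand-ins, not the RFC's API; the real check would use the routing crate's actual key and signature types.

```rust
use std::collections::HashSet;

const QUORUM: usize = 5; // hypothetical

type PublicKey = [u8; 32];
type Signature = Vec<u8>;

/// Accept a message only if at least QUORUM of the attached signatures
/// verify against public keys the client already knows belong to the group
/// it is connected to. `verify` stands in for the real signature check
/// (e.g. ed25519 verification).
fn group_message_valid<F>(
    known_members: &HashSet<PublicKey>,
    msg: &[u8],
    sigs: &[(PublicKey, Signature)],
    verify: F,
) -> bool
where
    F: Fn(&PublicKey, &[u8], &Signature) -> bool,
{
    let valid_signers = sigs
        .iter()
        .filter(|(key, sig)| known_members.contains(key) && verify(key, msg, sig))
        .map(|(key, _)| *key)
        .collect::<HashSet<_>>(); // count distinct known signers, not raw signatures
    valid_signers.len() >= QUORUM
}
```

The point of the `known_members` set is exactly the threat raised above: signatures from self-generated keypairs simply never make it past the membership check.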
Yes, this may be a "thing" that becomes more feasible when we get into the link blocks and locked chains etc.
Agreed, this is related to the first part. So we need to consider what sending a link block means, and whether it is better than a last-group signatory ring?
Possibly an optimisation, I suppose. So then we go back to your point that, for now, the last group does have to sign the message, which will be wasteful.
Gets should be fine, I imagine, though Put and Post need to be looked at more deeply. I think they may also be OK, but we need a sequence diagram for each.
Just some small thoughts:
1. From the user-experience perspective, I think something predictable will be preferred.
When the proof of work is based on counting the PutResponses relayed, the waiting duration will be difficult to predict. We could probably have a time limit as well (say, 1 hour or 10 PutResponses, whichever is reached first).
2. Relaying messages won't create much work for the joining node, which means an attacker could launch many nodes at limited cost. A cheaper-for-the-network-but-costlier-for-the-client approach could be: each group member regularly sends a hash puzzle to the client (see the sketch after this list). The puzzles don't even need to be synced among group members. If a member thinks the joining node can't solve its puzzle properly, it can remove it from its waiting list, and if enough members remove the joining node, the connection can be rejected.
3. The main purpose of the RFC is to make an attacker pay a cost (money and time) to connect. Maybe we can achieve this in two stages: stage 1, pay-for-join and time-to-connect; stage 2, pay-for-join and the hash puzzle triggered by put_request only.
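For point 1, the time-or-count limit amounts to a simple check like `elapsed >= time_limit || relayed_put_responses >= target`. For point 2, here is a toy version of the hash-puzzle idea (assuming the `sha2` crate; the difficulty value is invented for illustration): each member hands the client a random challenge, the client brute-forces a nonce whose hash has enough leading zero bits, and checking the answer is a single hash for the member.

```rust
use sha2::{Digest, Sha256};

const DIFFICULTY_BITS: u32 = 20; // illustrative; tune so solving takes the client a while

fn puzzle_hash(challenge: &[u8], nonce: u64) -> Vec<u8> {
    let mut data = challenge.to_vec();
    data.extend_from_slice(&nonce.to_le_bytes());
    Sha256::digest(&data).to_vec()
}

fn leading_zero_bits(hash: &[u8]) -> u32 {
    let mut bits = 0;
    for byte in hash {
        if *byte == 0 {
            bits += 8;
        } else {
            bits += byte.leading_zeros();
            break;
        }
    }
    bits
}

/// Client side: brute-force a nonce that satisfies the member's challenge.
fn solve(challenge: &[u8]) -> u64 {
    (0u64..)
        .find(|nonce| leading_zero_bits(&puzzle_hash(challenge, *nonce)) >= DIFFICULTY_BITS)
        .expect("a valid nonce exists")
}

/// Group-member side: verifying a claimed nonce is one hash, so it costs the
/// network almost nothing; puzzles need not be synced between members.
fn check(challenge: &[u8], nonce: u64) -> bool {
    leading_zero_bits(&puzzle_hash(challenge, nonce)) >= DIFFICULTY_BITS
}
```

The difficulty would need tuning (each extra bit roughly doubles the client's work), which is where the predictability concern from point 1 comes back in.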
I agree, but on the other hand the safecoin minting for the group will be dynamic, which is good, as the network will get more nodes when it needs them. So dynamic behaviour can be good in these cases.
I think the cost is what needs analysis here. For instance, in testnets we have seen relaying for clients really hurting nodes; relaying for a client can be a lot of work. Quantifying that more clearly will help the RFC, I think.
A POW that helps the network would be a lot better here. The idea is sound, but I feel any POW should be beneficial and not a waste of energy.
Yes, dynamic is good. That's why I suggested a time limit AS WELL.
Our current client's accessing node is a full node and handles many things in addition to just relaying messages.
I guess the relay node in this RFC is about relaying only? As that node is not trusted at the beginning, it is presumably not supposed to be used as a full accessing node for a client?
POW that helps the network is beneficial, but we should also consider and balance the traffic and complexity it may incur.
Yes, this is true; however, a client will connect to multiple of these (all the relays in a group) and we trust none of them. That is OK and better than now (where we trust one), as we can measure the responses coming back and confirm each relay is working and able to do the work (deliver data etc.).
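A rough sketch of what "measure responses back" could look like on the client side (the names and pruning threshold are made up, not from the RFC): keep a per-relay tally of requests sent versus responses delivered, and stop using relays that fall behind.

```rust
use std::collections::HashMap;

type RelayName = [u8; 32];

#[derive(Default)]
struct RelayStats {
    sent: u64,
    delivered: u64,
}

struct RelayTable {
    relays: HashMap<RelayName, RelayStats>,
}

impl RelayTable {
    fn record_sent(&mut self, relay: RelayName) {
        self.relays.entry(relay).or_default().sent += 1;
    }

    fn record_delivered(&mut self, relay: RelayName) {
        self.relays.entry(relay).or_default().delivered += 1;
    }

    /// Drop relays whose delivery ratio falls below 50% after a small warm-up.
    fn prune(&mut self) {
        self.relays
            .retain(|_, s| s.sent < 20 || s.delivered * 2 >= s.sent);
    }
}
```

Because the client holds connections to all the relays in the group, losing one poorly performing relay this way doesn't cut it off from the network.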
I saw this RFC was rejected. Curious to see the new one. Simple idea here:
Connect to the network as a Relay_Node. Make the group you connect to happy by doing a good job.
Connect to the network as a Vault at the same time. This group (different from your relay_group) will ask the group you relay for whether you're doing a good job. If you are, you might store data and farm coins; otherwise you get rejected.
This way every node (Vault) on the network is forced to work in two completely different groups, acting both as a relay_node and as a full Vault. A timer or a PUT on the relay side may be requested.
This way the load of forwarding data is balanced over all nodes, not just the relay_nodes.
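To illustrate the cross-group check being proposed (the message names are invented here, not from any RFC): the Vault's group asks the relay group about the candidate and admits or rejects it based on the group-signed answer.

```rust
type NodeName = [u8; 32];

/// Hypothetical messages between the two groups a candidate works in.
enum CrossGroupMessage {
    /// Vault group -> relay group: "is this candidate relaying properly?"
    RelayConductQuery { candidate: NodeName },
    /// Relay group -> vault group: the accumulated, group-signed verdict.
    RelayConductReport { candidate: NodeName, satisfactory: bool },
}

/// Vault-group side: admit the candidate only on a satisfactory report.
fn admit_candidate(reply: &CrossGroupMessage) -> bool {
    match reply {
        CrossGroupMessage::RelayConductReport { satisfactory, .. } => *satisfactory,
        _ => false, // anything other than a report is not an admission
    }
}
```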
This is not far off the couple of options we are drawing up. There was an all-day hangout on Friday with the routing team, and several simpler options are on the table. This is close to them, and we will pick it up again on Monday in another hangout. Thanks for the input.
It looks like a mix of:
- Test vault locally (self-test)
- Group tests vault
- Vault is relocated
We are considering the data holding as a separate component as well, as you point out here. It's getting fleshed out now to make sure we have the simplest-to-implement approach (the most natural solution). Looking good though; there are plenty of options, so it's all about picking the best one to go forward with.