Solving Low PoW Value Problem


#1

Whisper uses PoW (Proof-of-Work) as its main mechanism to protect the network from DDoS. Simply put, the higher the PoW, the more work (CPU-bound calculations) needs to be done. More: https://github.com/ethereum/go-ethereum/wiki/Whisper-Overview#proof-of-work

The cost of computing PoW can be regarded as the price you pay for the resources allocated when you want the network to store your message for a specific time (TTL). In terms of resources, it does not matter whether the network stores X equal messages for Y seconds, […]. So, the required PoW should be proportional to both message size and TTL.
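The proportionality to size and TTL can be sketched as follows. This is a minimal illustration, not the Whisper implementation: `requiredWork` is a hypothetical helper, and the "work units" are relative (in Whisper, an envelope's PoW is roughly 2^(leading zero bits of its hash) divided by size·TTL, per the overview linked above, which makes the hashing effort for a fixed PoW target scale with size·TTL).

```go
package main

import "fmt"

// requiredWork sketches the idea that the work a sender must perform
// to meet a minimum PoW should scale with both message size and TTL.
// The function and its units are illustrative assumptions, not the
// actual Whisper API.
func requiredWork(minPoW float64, sizeBytes, ttlSeconds int) float64 {
	// Doubling either the size or the TTL doubles the resources the
	// network commits, so it should double the work demanded.
	return minPoW * float64(sizeBytes) * float64(ttlSeconds)
}

func main() {
	small := requiredWork(0.002, 256, 50)
	big := requiredWork(0.002, 512, 100)
	fmt.Println(big / small) // 4x the work for 2x size and 2x TTL
}
```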

However, because calculating PoW on a mobile device leads to increased CPU usage and battery drain, we lowered it to a very small number, 0.002. This is good for UX, but it makes our network an easy target.

So, the question is: how can we fix that?

Use Dynamic PoW

Each node, based on its available space and load, should set its own PoW value. The current Whisper mechanism already allows doing that, and a node’s peers receive the changed PoW value through the protocol.

At the same time, a Whisper node should not advertise a PoW value that would prevent a message received from one peer from being relayed to at least one other peer.

Example: Node A has 3 peers: Peer 1, Peer 2, and Peer 3 with PoW values 0.2, 0.15, and 0.5 respectively. It should select a PoW equal to or larger than 0.2 because otherwise, if it receives a message from Peer 2, it won’t be able to pass it on to any of its other peers.
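The rule behind this example can be sketched as: for every peer that might be the sender, the cheapest of the remaining peers must still accept the message, and the node advertises the worst such case. A minimal sketch, where `minSafePoW` is a hypothetical helper (not part of the Whisper API):

```go
package main

import (
	"fmt"
	"math"
)

// minSafePoW returns the lowest PoW a relaying node can advertise while
// still being able to forward any message it accepts: for each possible
// sender, the message must satisfy the cheapest of the *other* peers.
// Equivalently, this is the second-smallest PoW among the peers.
func minSafePoW(peerPoWs []float64) float64 {
	safe := 0.0
	for sender := range peerPoWs {
		// Cheapest forwarding target when this peer is the sender.
		cheapest := math.Inf(1)
		for i, pow := range peerPoWs {
			if i != sender && pow < cheapest {
				cheapest = pow
			}
		}
		if cheapest > safe {
			safe = cheapest
		}
	}
	// With fewer than two peers this returns +Inf: no relaying is possible.
	return safe
}

func main() {
	// Peer 1, Peer 2, Peer 3 from the example above.
	fmt.Println(minSafePoW([]float64{0.2, 0.15, 0.5})) // 0.2
}
```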

Questions: is this strategy viable at all? Does it always balance PoW values properly without resulting in a deadlock? Looking at minimum-cost flow problem solutions might be useful (https://en.wikipedia.org/wiki/Minimum-cost_flow_problem).

A mobile node observes its peers and selects the lowest PoW possible, because it does not broadcast received messages.

A drawback of this approach is that the PoW can get pretty high, which means battery drain on a mobile device when sending messages. At the same time, it should never block communication completely, and such a situation should only be temporary.

Renting Computing Power

Another idea is to let mobile nodes rent computing power. This requires developing a cryptoeconomic model that allows mobile nodes to pay other nodes to calculate PoW for their messages.

This, of course, can be implemented in conjunction with the dynamic PoW.

Global Node Reputation Instead Of PoW

Instead of calculating PoW, we could build the network on a totally different mechanism, for example, peer reputation. In this model, a global trust value is calculated for each node based on its previous activity. Malicious nodes are identified and blocked.

An algorithm called EigenTrust (https://nlp.stanford.edu/pubs/eigentrust.pdf) describes how such a reputation model can be created.

While it may seem like a much better alternative for mobile devices, let’s not forget that maintaining and updating a global reputation matrix can also be computationally intensive and consume a lot of traffic. It might also be problematic for new users. Also, in my opinion, switching to such a system effectively creates a completely different protocol.

Looking forward to feedback and other solutions!


For the sake of completeness, @dmitrys is working on a rate-limiting solution that will make it possible to detect which peers and chats are responsible for the highest traffic. However, it is considered another protection layer, sitting behind PoW and bloom filters.


#2

I’d be inclined to use solution 3 or something else as opposed to mucking around with PoW. I’m not convinced PoW is a great spam prevention mechanism from an attacker economics POV, but I could be wrong.

https://www.scuttlebutt.nz/stories/using-trust-in-open-networks.html has some interesting further reading on EigenTrust and other mechanisms. Excerpt:

- Policy-based trust. Using policies to establish trust, focused on managing and exchanging credentials and enforcing access policies. Work in policy-based trust generally assumes that trust is established simply by obtaining a sufficient amount of credentials.
- Reputation-based trust. Using reputation to establish trust, where past interactions or performance for an entity are combined to assess its future behavior. Research in reputation-based trust uses the history of an entity’s actions/behavior to compute trust, and may use referral-based trust (information from others) in the absence of (or in addition to) first-hand knowledge.

SecureScuttlebutt uses social “following” relations as a base trust-policy. Users choose which feeds to follow, and therefore opt into every messaging partner. This is how SSB controls spam.

The Algorithmic Game Theory book (http://algo.cs.uni-frankfurt.de/lehre/agt/material/Algorithmic_Game_Theory.pdf if the link works) also has some relevant material, specifically chapter 23 (Incentives in Peer-to-Peer Systems) and chapter 27 (Manipulation-Resistant Reputation Systems).