Longer-form thoughts on DoS / spam prevention

So this morning the Status app (in the #status channel) has been seeing some spam attacks:


Each one of these super-long messages was sent by a separate account, and so to get rid of the spam currently you have to block each one of these accounts individually.

This is a challenging attack to deal with in Status, because account creation is so cheap (you can generate as many accounts as you want just by generating public/private key pairs) and there’s no collective moderation of any kind, centralized or decentralized. Indeed, it shows how “literally anyone can post anything” is not a stable equilibrium in an online context (unlike in the real world, where how much you can yell is limited by the reach of your own voice), and so some form of limitation at the very least to prevent spam is necessary. But I am very confident that this can be done in a decentralized and freedom-preserving way.

I posted a few thoughts on twitter but wanted to expand in longer form here.

Classifying solutions

I see a few classes of solutions to this problem. Each solution could be used locally by individual chats or individual users; in general I oppose global solutions that get forced on everyone. Local approaches that can adjust to different circumstances are better.

  1. Chats with whitelisted participation (anyone can read, but only a moderator, or perhaps for advanced users a smart contract on ethereum, selects who has the right to send messages)
  2. Economic rate-limiting: to post messages you have to either have >=100 SNT tokens locked up, or have an ENS name (which requires either SNT tokens locked up or paying ETH, and at the very least an on-chain ethereum transaction that pays fees)
  3. Non-economic rate limiting: to post messages you have to prove your identity by eg. pointing to a brightid
  4. Shared block lists: you have the ability to subscribe to lists of accounts blocked by other users, and automatically not see their messages either
  5. Chats with cryptoeconomic collective moderation: basically ideas from https://ethresear.ch/t/prediction-markets-for-content-curation-daos/1312, where anyone can flag a post by putting up stake, and (possibly after multiple rounds of escalation) a DAO decides whether or not flagged posts deserve to be invisible and based on the decision either rewards or penalizes the flagger.

Note that (2), (3) and (4) do not need to be implemented system-wide or even chat-wide; it should be a user’s individual choice to click a button that says eg. “only read messages from users who have an ENS name in this chat”. Shared block lists are also user-driven by definition.

And in general, (2), (3) and (4) are my favorite options for this reason; if the goal is to preserve decentralization, the more local the decision-making the better. They are also the easiest to implement. That said, in some circumstances, chats with explicit moderation are a desirable option as well.

The different solutions can be used in combination; shared block lists are useless if an attacker can just keep creating a totally new account every second, but if creating accounts is expensive then sharing block lists multiplies the cost an attacker needs to pay to force each user to click the “block” button a given number of times.
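As an illustration of how economic rate-limiting (2) and shared block lists (4) compose on the client side, here is a minimal sketch; the message shape, the stake lookup, and all function names are invented for illustration, not Status's actual API:

```python
# Sketch of a purely local message filter combining shared block lists
# with a stake threshold. All data structures here are hypothetical.

def build_blocklist(my_blocks, subscribed_lists):
    """Union of my own blocked accounts and the block lists I subscribe to."""
    blocked = set(my_blocks)
    for peer_list in subscribed_lists:
        blocked |= set(peer_list)
    return blocked

def visible_messages(messages, blocked, min_stake=0, stakes=None):
    """Drop messages from blocked senders or senders below a stake threshold."""
    stakes = stakes or {}
    return [m for m in messages
            if m["sender"] not in blocked
            and stakes.get(m["sender"], 0) >= min_stake]

# Example: two users share block lists; with min_stake set, a fresh
# spam account additionally needs >=100 SNT locked up to be seen at all.
msgs = [{"sender": "alice", "text": "hi"},
        {"sender": "spammer1", "text": "spam"},
        {"sender": "spammer2", "text": "spam"}]
blocked = build_blocklist({"spammer1"}, [{"spammer2"}])
print(visible_messages(msgs, blocked))  # only alice's message survives
```

This shows why the combination multiplies the attacker's cost: each new account must both acquire stake and get blocked once per block-list cluster rather than once per user.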

Using ZK tech for privacy

Additionally, (2) and (3) can be done in a privacy-preserving way by combining ZK technologies: you can zero-knowledge-prove that you have some account where >=100 SNT tokens are locked up, without revealing which one. ZKP technologies tend to break in the soundness direction rather than the privacy direction (ie. if the ZKP scheme turns out to be broken, it’s very unlikely that you lose privacy, what would usually happen is that whoever breaks the scheme will just be able to spam more), so it’s a fairly safe path to take.

ZK tech is in 2020 finally getting to the point of being mature enough to do this; proofs on a phone are possible to generate in ~2 seconds, and this could be done in <0.5 seconds if we’re willing to sacrifice on parameters and/or use newer and faster-but-more-risky hash functions inside the ZKP (IMO a correct tradeoff). And I think Status actually has a big opportunity here, in showing the world how real problems like spam that all messengers have can be dealt with, without sacrificing ideals of open access, pseudonymity and privacy.

Conventional solutions probably won’t work

I dislike the idea of going down the “conventional” path of built-in content-based spam filtering. A big reason is that content-based spam filters need to be closed-source, or if they are open source it’s easy for attackers to see how to bypass them. But even if closed-source is deemed acceptable, GPT2-generated spam can probably still bypass the filters quite easily.

Additionally, “the conventional path” doesn’t really align well with the design goal (or at least, a design goal I think is valuable) of pushing decision-making out to the edges as much as possible, whereas the rate-limiting based solutions do, and are more long-term defensible.


Thanks for the input, I agree overall with your suggestions.

There are some proposals we have been discussing, and now that the demand has appeared, they will probably be implemented soon.

  1. This is planned for “Community Rooms”, which could be administrated by the creator of the room (or a governance mechanism set as the owner). See this topic Organization channels ;
  2. & 5. There is a suggestion I wrote when the first spam appeared (which wasn’t disrupting the experience, just someone saying bad words in chat) that would allow clients to set a minimum amount of SNT required for displaying messages. The idea is that chat can still be free, but in some cases (such as when there are too many messages) users could reduce the volume of messages by raising the minimum SNT in a room. See this topic Visibility Stake for Public Chat Room Governance ;
  3. This is a good idea to also have; it’s very straightforward to implement and requires minimal changes;
  4. I was thinking of something similar, which would also allow the block list to be propagated through a “viscous democracy”. See this post Friend-to-Friend Content Discovery & Community Feeds

All of this should certainly be anonymized through ZK, as we don’t want to leak the wallet address behind each chat identity.


Glad to see that the existing discussions are already taking these paths!

For (1) I would love to see “by a person or group” optionally implemented via “by an ethereum contract”; it seems like it would be a nice way to allow the community itself to experiment with lots of different approaches in the future. Though of course tx fees are a challenge, rollups are not here yet etc etc.


I’ve found a few older posts related to fighting spam on the ethresear.ch forum,
some are more Ethereum specific but this one seems to be applicable to Status:

There is also a recent paper reviewing decentralized “proof-of-personhood” approaches: https://arxiv.org/abs/2008.05300

Who Watches the Watchmen? A Review of Subjective Approaches for Sybil-resistance in Proof of Personhood Protocols
Divya Siddarth, Sergey Ivliev, Santiago Siri, Paula Berman
Most current self-sovereign identity systems may be categorized as strictly objective, consisting of cryptographically signed statements issued by trusted third party attestors. This failure to provide an input for subjectivity accounts for a central challenge: the inability to address the question of “Who verifies the verifier?”. Instead, these protocols outsource their legitimacy to mechanisms beyond their internal structure, relying on traditional centralized institutions such as national ID issuers and KYC providers to verify the claims they hold. This reliance has been employed to safeguard applications from a vulnerability previously thought to be impossible to address in distributed systems: the Sybil attack problem, which describes the abuse of an online system by creating many illegitimate virtual personas. Inspired by the progress in cryptocurrencies and blockchain technology, there has recently been a surge in networked protocols that make use of subjective inputs such as voting, vouching, and interpreting, to arrive at a decentralized and sybil-resistant consensus for identity. In this article, we will outline the approaches of these new and natively digital sources of authentication – their attributes, methodologies, strengths, and weaknesses – and sketch out possible directions for future developments.



I like ideas (2) and (5). Gotta hit the spammers where it hurts, and that’s usually their wallet.

A combination of both might be best:

  1. Users stake SNT in order to participate in public chats
  2. Other users may report questionable messages
  3. Users can opt in to participate in moderation, if they do, they’ll get sent flagged messages in order to vote on them
  4. In order to keep moderators honest, they’ll also have to stake SNT for each vote
  5. If a quorum is reached, the offender is penalized by forfeiting some of their stake
  6. Each voting moderator in support of the decision gets their stake back + a share of the penalties
  7. Moderators who voted against the decision also forfeit their stake to those in favor
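A minimal sketch of the payout accounting in steps 4-7, assuming an invented 2/3 quorum rule and a flat per-vote stake (none of this is an existing Status mechanism):

```python
# Toy settlement for the staked-moderation flow above (steps 4-7).
# The 2/3 quorum, flat stake, and equal reward split are assumptions.

def settle_votes(votes, stake, offender_stake, quorum=0.66):
    """votes maps moderator -> True (remove message) / False (keep).
    Returns (offender_penalty, payout per moderator)."""
    yes = [m for m, v in votes.items() if v]
    no = [m for m, v in votes.items() if not v]
    if len(yes) / len(votes) < quorum:
        # Quorum not reached: no penalty, every moderator gets their stake back.
        return 0, {m: stake for m in votes}
    # Pot = offender's forfeited stake plus the stakes of dissenting
    # moderators, shared equally among those who voted with the decision.
    pot = offender_stake + stake * len(no)
    payouts = {m: stake + pot / len(yes) for m in yes}
    payouts.update({m: 0 for m in no})
    return offender_stake, payouts

penalty, payouts = settle_votes({"a": True, "b": True, "c": False},
                                stake=10, offender_stake=30)
# penalty == 30; "a" and "b" each receive 30.0; "c" forfeits their 10.
```

Whether the offender forfeits their whole stake or only part of it (step 5 says “some”) is a parameter choice; here it is passed in explicitly.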

This would achieve the following benefits:

  1. Economically disincentivizes spam
  2. Decentralized and democratic voting system avoids censorship by a single authority
  3. Honest users can earn their stake back by participating in moderation

The question would be whether to cap the moderator earnings at the size of the initial stake. On one hand, allowing users to earn money for helping reduce spam would ensure enough people keep doing it even after their initial deposit has been returned. On the other hand, it might also incentivize someone to build a large bot farm and use AI techniques to automatically flag spam. Of course, this would only be of concern if they manage to control a majority of the votes, because at that point they could start arbitrarily censoring the chat.

On the other hand, even if moderation is entirely run by people, a majority could potentially be achieved by means of external communication. However, as long as this system is implemented on a per-channel basis, it would likely not be a big problem, because if one channel’s conversation becomes controlled by a single entity, users could simply migrate to a different channel.


Regarding ZK-SNARKs for economic rate-limiting

  1. The user burns SNT into a smart contract together with a commitment, a hash of secret data (chatid + chatid private key), which is recorded in a sparse merkle tree
  2. The user generates a proof that, inside the merkle tree stored in the smart contract, there is a deposit tied to their chatid, without revealing the commitment itself (so it’s not traceable back to the wallet address that made the deposit)
  3. Anyone can verify that a chatid has a deposit by checking the merkle root, the chatid, and the proof.
  • This is analogous to tornado.cash, but the nullifier is replaced by the chatid.
  • The proofs are validated off chain by the users.
  • It’s not possible to withdraw after a deposit (unlike tornado.cash); allowing withdrawals would be hard because the proofs would have to be tied to a chatid.
  • The secret data doesn’t need to use the private key itself; it can be derived from the private key via a derivation path, or it can be chatid + random number, but using a random number requires additional storage. Using the private key or a derivative of it only requires the user to know their own private key.
  • The deposits for all users have to be the same value: 1 SNT, 10 SNT, 100 SNT, 1000 SNT, 10000 SNT, etc.
  • I estimate the cost of a deposit at around 1,050,000 gas ≈ 0.105 ETH at 100 gwei
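To make the deposit-and-prove flow above concrete, here is a toy Python sketch. The hash, the plain (non-sparse) Merkle tree, and the in-the-clear membership check are all stand-ins: a real implementation uses a SNARK-friendly hash and performs the membership check inside a ZK circuit, so the leaf position (and hence the depositing wallet) stays hidden.

```python
import hashlib

def h(*parts):
    """Toy hash combiner; a real circuit would use a SNARK-friendly hash."""
    return hashlib.sha256(b"|".join(p.encode() for p in parts)).hexdigest()

# Step 1: the deposit commitment written into the contract's tree.
# The real scheme hashes (chatid, chatid private key); "secret" stands in.
def commitment(chatid, secret):
    return h(chatid, secret)

def merkle_root_and_path(leaves, index):
    """Root of a simple Merkle tree plus the sibling path for one leaf.
    A sparse merkle tree, as the post suggests, works the same way."""
    path, layer = [], leaves[:]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])          # duplicate odd tail
        path.append((layer[index ^ 1], index % 2 == 0))
        layer = [h(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
        index //= 2
    return layer[0], path

# Steps 2-3: check that a chatid's commitment is in the tree. In the real
# scheme this check runs inside a ZK proof; here it is in the clear.
def verify(root, chatid, secret, path):
    node = commitment(chatid, secret)
    for sibling, node_is_left in path:
        node = h(node, sibling) if node_is_left else h(sibling, node)
    return node == root

leaves = [commitment(f"chat{i}", f"secret{i}") for i in range(4)]
root, path = merkle_root_and_path(leaves, 2)
assert verify(root, "chat2", "secret2", path)    # valid deposit
assert not verify(root, "chat2", "wrong", path)  # wrong secret fails
```

The ZK circuit proves exactly the `verify` relation, with the secret and the path as private inputs and only the root and chatid public.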

For economic rate-limiting, if we are not using ZK-SNARKs and are instead making public deposits tied to a chatid, then we might not even need a deposit at all; instead, we could use the wallet address directly as the userid and its balance for rate-limiting:

  • The user’s wallet signs a message authorizing a chatid to message on its behalf.
  • Other users can block this wallet address from messaging.
  • Visibility is measured by how much SNT an account has in its balance.
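A toy version of this non-ZK variant. HMAC stands in for the wallet's real ECDSA signature (HMAC is symmetric, so this toy can only verify against the signer's own key; a real client verifies against the public wallet address), and the balances are invented:

```python
import hashlib
import hmac

# Toy sketch: a wallet "signs" an authorization for a chatid, and clients
# rank visibility by the wallet's SNT balance. HMAC is a stand-in for a
# real ECDSA wallet signature; all names and numbers are illustrative.

def authorize(wallet_key: bytes, chatid: str) -> bytes:
    """Wallet signs: 'this chatid may message on my behalf'."""
    return hmac.new(wallet_key, chatid.encode(), hashlib.sha256).digest()

def is_authorized(wallet_key: bytes, chatid: str, sig: bytes) -> bool:
    return hmac.compare_digest(authorize(wallet_key, chatid), sig)

def visibility(wallet_balances, wallet, blocked):
    """Visibility score = SNT balance; blocked wallets score 0."""
    return 0 if wallet in blocked else wallet_balances.get(wallet, 0)

key = b"wallet-secret"
sig = authorize(key, "chat-1")
assert is_authorized(key, "chat-1", sig)        # authorization holds
assert not is_authorized(key, "chat-2", sig)    # not transferable
```

The tradeoff versus the ZK scheme is explicit: no deposit is needed, but the wallet address, its balance, and its chatids are all publicly linked.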

Many cool proposals have been made since last week’s incident. I want to compile a subjective and incomplete summary here of how the pieces could fit together:

Preventing spam while being censorship-resistant, decentralized, and privacy-preserving? An open, ecosystem-driven market for content moderation.


  • Spam ⊂ content moderation; Spam is just one form of “unappreciated” content. Harmful content comes in many shapes and colors.
  • Content quality is subjective: Some people appreciate nudity less than others.


Spam deletion/content moderation is technically equivalent to censorship. Together with decentralization and privacy, censorship resistance is a core value of Status that makes it hard for the Status org to tackle the problem without falling short of these values.
I’m tempted to write: “In an ideal decentralized, anonymous and censorship-resistant network, content moderation is impossible.” Oh no, I just did.

Content moderation cases can become controversial; why and how the consensus “emerged” that we don’t want specific messages in a channel should be considered. Imo the right technology does not impose arbitrary rules upon its users but instead remains neutral.
Up until recently, centralized platforms just ignored the problem due to the nasty inherent political implications content moderation might bring with it. At some point, an ignored issue catches up, which explains the recent turnarounds of e.g., Twitter and Facebook.

Status is set up inherently differently and is able to offer means of self-moderation instead of falling back on old-school content moderation. Standard solutions deployed by centralized platform providers are just not good enough.

This post aims to explore a market-driven, decentralized, and privacy-preserving content moderation framework.

what does a content moderation extension (cmdex) look like?

Roughly, I can think of two types of cmdex: local extensions that live on the user’s device enforcing user preferences, and extensions that talk to waku nodes via e.g. webhooks, such that:

a) traffic is filtered before it hits mobile devices

b) an organization using Status can run a node, where the org’s contributors can connect only to the node that enforces their org’s content moderation.
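The two shapes could share one policy interface; a rough sketch with invented types and names (nothing here is the actual waku or Status API):

```python
# Sketch of the two cmdex shapes: a local, on-device policy combinator
# and a node-side filter applied before messages reach clients.
from typing import Callable, Dict, List

Policy = Callable[[Dict], bool]   # True = keep the message

def local_cmdex(user_policies: List[Policy]) -> Policy:
    """(a)-adjacent: runs on the user's device; all policies must pass."""
    return lambda msg: all(p(msg) for p in user_policies)

def node_cmdex(org_policy: Policy, inbox: List[Dict]) -> List[Dict]:
    """(b): runs on an org's waku node, filtering before delivery."""
    return [m for m in inbox if org_policy(m)]

no_links: Policy = lambda m: "http" not in m["text"]
filtered = node_cmdex(no_links, [{"text": "hello"}, {"text": "http://spam"}])
```

The same `Policy` can be installed either place; the difference is only who runs it and how much traffic ever reaches the device.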

Establishing consensus among waku nodes about content moderation for specific topics might be non-trivial due to waku’s permissionless nature. However, particular nodes could advertise specific content moderation policies to their users in the future.

cmdex vs. settings

I could think of two options on how Status users choose what content not to be exposed to; settings and extensions. Roughly, I describe settings as core features that the user should be able to configure without getting lost in a configuration jungle; they should be optional, where switching all options off corresponds to “free speech” Status.
On the other hand, extensions are more complicated features that might rely on third parties and might only be useful to a subset of Status users. Another helpful heuristic for separating the two categories: anything that might compromise Status’ principles should be left to the market.

What follows is a subjectively commented list of content moderation ideas, sorted along the settings/extensions axis:


extensions

a.k.a. users are free to trade SNT (+ potentially some privacy) to third parties for a more pleasant user experience.

identity stake (brightid, iden3 extensions)

Identity stake was one example motivating this post; I don’t think Status should choose whether users require OAuth, Keybase, or something else to join a channel/contact each other.

governance based (kleros extension)

Similar ideas are applied to keep gaming communities sane at scale, but Status shouldn’t be the only judge.

automated content moderation (perspectiveapi extension)

Whether or not users are willing to train a google model with their channel messages to auto-moderate the “tone” in their dapp chat or channel should be a conscious decision by a channel admin.

Content rule filters, Message metadata-based filters

This will be a nice, market-driven cat-and-mouse game between spammers and rule-makers in which Status can participate as a seller.

Local machine learning

I think this is a promising direction to look into, especially to train the app on subjective user preferences.

Differentially private cross-network training: https://github.com/pytorch/opacus, https://github.com/Microsoft/EdgeML


settings

a.k.a. part of the Status code base, so privacy-preserving/zero-knowledge is a hard requirement

economic rate limiting

Economic rate-limiting will separate Status into two different UXs; an “internet of value Status” and the “internet of trolls Status.” It comes at a cost in accessibility, but will make for an exciting experience to switch between the two.

toggle tribute to talk

Tribute to talk will make for a great night/do not disturb mode.

Read-only community rooms/organization channels

Telegram does that; imo it’s a web1.0, unidirectional, shout to the masses, or write a diary experience.

contact applications/requests, allow messages from contacts only

These seem very useful.

Settings come with a not-so-easy privacy/performance tradeoff: if settings are implemented with, for instance, signed content preferences generated locally but executed in waku/Status-go, waku nodes will learn about user preferences and users’ social graphs. One could zk-prove that one has an ENS name without revealing it, but I don’t see a performant way to implement privacy-preserving verification without running a node. Hopefully, there is one. Also, with remote filtering, users have a privacy incentive to run their own waku nodes.

On the other hand, if too much traffic reaches mobile devices and is filtered there, performance and battery life will suffer.

market-driven content moderation

An extension marketplace is where developers, users, and channel maintainers match to buy and sell content moderation extensions that filter out content that they dislike in their respective channels.

Some perks of the above concerning content moderation:

  • given liquidity, a market is more efficient than we are; I do look forward to a prolonged argument about the amount of nudity we consider spam, but we should probably not have it.

  • Status is not the content moderator to rule them all; consensus among <100 contributors is hardly a representative sample. As far as I’m aware, there’s neither a devoutly religious person nor a porn actor among Status contributors. We would probably have a hard time deciding what’s right for our users and what’s not.

  • Spam is dynamic, so are markets; Spam detection is short-lived, content moderation evolves. Scammers/spammers consistently invest time to circumvent spam detection. A market includes the whole ecosystem into the job to keep the network sane.

  • Users that own what they run are who we build Status for. Why should we, out of a reflex, now start deviating from that design philosophy? A content moderation market is a place where we can point users to, as they should govern what they do to their devices and what their devices do to them.
