So this morning the Status app (in the #status channel) has been seeing some spam attacks:
Each one of these super-long messages was sent by a separate account, so currently the only way to get rid of the spam is to block each of these accounts individually.
This is a challenging attack to deal with in Status, because account creation is so cheap (you can generate as many accounts as you want just by generating public/private key pairs) and there’s no collective moderation of any kind, centralized or decentralized. Indeed, it shows how “literally anyone can post anything” is not a stable equilibrium in an online context (unlike in the real world, where how much you can yell is limited by the reach of your own voice), and so some form of limitation at the very least to prevent spam is necessary. But I am very confident that this can be done in a decentralized and freedom-preserving way.
I posted a few thoughts on twitter but wanted to expand in longer form here.
I see a few classes of solutions to this problem. Each solution could be used locally by individual chats or individual users; in general I oppose global solutions that get forced on everyone — local approaches that can adjust to different circumstances are better.
1. Chats with whitelisted participation: anyone can read, but only a moderator (or, for advanced users, perhaps a smart contract on ethereum) selects who has the right to send messages.
2. Economic rate-limiting: to post messages you have to either have >=100 SNT tokens locked up, or have an ENS name (which requires either SNT tokens locked up or paying ETH, and at the very least an on-chain ethereum transaction that pays fees).
3. Non-economic rate-limiting: to post messages you have to prove your identity by eg. pointing to a BrightID.
4. Shared block lists: you have the ability to subscribe to lists of accounts blocked by other users, and automatically not see their messages either.
5. Chats with cryptoeconomic collective moderation: basically the ideas from https://ethresear.ch/t/prediction-markets-for-content-curation-daos/1312, where anyone can flag a post by putting up a stake, and (possibly after multiple rounds of escalation) a DAO decides whether or not flagged posts deserve to be invisible, rewarding or penalizing the flagger based on the decision.
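As an illustration of how user-driven filtering along the lines of shared block lists might work, here is a minimal client-side sketch (all names and addresses are hypothetical, not Status's actual implementation): the client merges every block list the user subscribes to with the user's own blocks, and hides messages from any sender in the merged set.

```python
# Minimal sketch of client-side shared block lists (all names hypothetical).
# The client takes the union of the user's own blocks and all subscribed
# lists, and filters out messages from any blocked sender.

class BlockListClient:
    def __init__(self):
        self.own_blocks = set()   # accounts this user blocked directly
        self.subscriptions = []   # block lists shared by other users

    def subscribe(self, shared_list):
        self.subscriptions.append(shared_list)

    def blocked(self):
        merged = set(self.own_blocks)
        for lst in self.subscriptions:
            merged |= set(lst)
        return merged

    def visible_messages(self, messages):
        # messages: list of (sender, text) pairs
        hidden = self.blocked()
        return [(s, t) for (s, t) in messages if s not in hidden]

client = BlockListClient()
client.own_blocks.add("0xspam1")
client.subscribe({"0xspam2", "0xspam3"})   # a friend's shared block list
msgs = [("0xalice", "hi"), ("0xspam2", "BUY NOW"), ("0xspam1", "junk")]
print(client.visible_messages(msgs))       # → [('0xalice', 'hi')]
```

Note that everything here happens on the user's own device; nothing is imposed chat-wide or system-wide.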
Note that (2), (3) and (4) do not need to be implemented system-wide or even chat-wide; it should be a user’s individual choice to click a button that says eg. “only read messages from users who have an ENS name in this chat”. Shared block lists are also user-driven by definition.
And in general, (2), (3) and (4) are my favorite options for this reason; if the goal is to preserve decentralization, the more local the decision-making the better. They are also the easiest to implement. That said, in some circumstances, chats with explicit moderation are a desirable option as well.
The different solutions can be used in combination; shared block lists are useless if an attacker can just keep creating a totally new account every second, but if creating accounts is expensive then sharing block lists multiplies the cost an attacker needs to pay to force each user to click the “block” button a given number of times.
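The cost-multiplication point above can be made concrete with a back-of-envelope model (all numbers are illustrative assumptions, not real Status costs): if each account costs C to create and M users share one block list, then any single user's click blocks an account for all M users, so forcing each user to click "block" k times requires k·M accounts instead of k.

```python
# Back-of-envelope model (hypothetical numbers) of how shared block lists
# multiply an attacker's cost. Assume each new account costs C (e.g. the
# locked-SNT or ENS cost from option 2), and the attacker wants to force
# each of M users to click "block" k times.

C = 5.0   # cost per account, in dollars (illustrative assumption)
M = 100   # users sharing one block list
k = 10    # "block" clicks the attacker wants to force on each user

# Without sharing: one spamming account forces every user to click once,
# so k accounts suffice to force k clicks per user.
cost_without_sharing = k * C

# With sharing: any single user's click blocks that account for all M
# users, so the k*M total clicks require k*M distinct accounts.
cost_with_sharing = k * M * C

print(cost_without_sharing)  # → 50.0
print(cost_with_sharing)     # → 5000.0
```

Under this model, sharing a block list among M users multiplies the attacker's cost by a factor of M — but only if account creation is expensive in the first place.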
Using ZK tech for privacy
Additionally, (2) and (3) can be done in a privacy-preserving way using ZK technology: you can zero-knowledge-prove that you control some account with >=100 SNT tokens locked up, without revealing which one. ZKP schemes tend to break in the soundness direction rather than the privacy direction (ie. if the ZKP scheme turns out to be broken, it's very unlikely that you lose privacy; what would usually happen is that whoever breaks the scheme can just spam more), so it's a fairly safe path to take.
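To make the statement being proven concrete, here is a toy sketch (this is not itself zero-knowledge, and all names are hypothetical): the eligible accounts are committed to in a Merkle tree whose root is public, and the ZKP would prove knowledge of a leaf and a valid Merkle path to that root without revealing which leaf. The code below shows only the underlying membership check that such a proof establishes.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Build a Merkle root over a list of leaf hashes."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate last node if the level is odd
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, index):
    """Collect the sibling hashes needed to verify leaf `index`."""
    path, level, i = [], leaves[:], index
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        # (sibling hash, 1 if the current node is the right child)
        path.append((level[i ^ 1], i % 2))
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return path

def verify(leaf, path, root):
    node = leaf
    for sibling, is_right in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

# Hypothetical set of accounts that each have >=100 SNT locked up:
eligible = [h(acct.encode()) for acct in ["0xaaa", "0xbbb", "0xccc", "0xddd"]]
root = merkle_root(eligible)   # public commitment everyone can compute

# The prover knows a leaf and its path; a real ZKP would prove this check
# passes WITHOUT revealing `leaf` or `path` -- only `root` is public.
path = merkle_path(eligible, 2)
print(verify(eligible[2], path, root))   # → True
```

In the ZK version, the leaf and path become private witness inputs to the proof circuit, and verifiers learn only that *some* eligible account signed off on the message.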
In 2020, ZK tech is finally getting mature enough to do this: proofs can be generated on a phone in ~2 seconds, and in <0.5 seconds if we're willing to sacrifice on parameters and/or use newer, faster-but-riskier hash functions inside the ZKP (IMO a correct tradeoff). And I think Status actually has a big opportunity here: to show the world how a real problem like spam, which all messengers face, can be dealt with without sacrificing the ideals of open access, pseudonymity and privacy.
Conventional solutions probably won’t work
I dislike the idea of going down the "conventional" path of built-in content-based spam filtering. A big reason is that content-based spam filters need to be closed-source, because if they are open source it's easy for attackers to see how to bypass them. But even if closed source is deemed acceptable, GPT-2-generated spam can probably bypass the filters quite easily anyway.
Additionally, "the conventional path" doesn't align well with the design goal (or at least, a design goal I think is valuable) of pushing decision-making out to the edges as much as possible; the rate-limiting-based solutions do align with that goal, and are more defensible in the long term.