How to deal with spam in public chats?

Currently, many of the public chats in Status are bombarded with spam. But why is this a problem for Status, and not many other chat applications? The foremost solutions to spam generally lead to contradictions with our principles. Specifically, solutions to spam in Status should be...


This is a companion discussion topic for the original entry at https://our.status.im/how-to-fix-spam-in-public-chats/

Remove the hardcoded list of public chats to incentivise people to create their own, instead of everyone gathering in a dozen chats that can be spammed. It’s much harder to spam when the space of possible targets is massive and you don’t know where everyone is.


From my experience with Sphinx Chat, I see no spam at all. To create your own chat with your own rules, you need to run a node: your node, your rules. Most nodes charge per message, and joining the chat itself has a price. So, how about “your chat, your rules”? Let the person or people who want to build the group implement the rules themselves. This preserves the principle of freedom of association.

I’ve sent a PR to remove the list of public chats, but you can still create your own public chats. This will help distribute users and make spamming more difficult since there won’t be a centralized area where users get together.

There is already a groups and communities feature, and the owners of those can set their own rules and kick/ban anyone they want from that specific group/community. As for public chats, removing the hardcoded list is much more effective than removing public chats altogether.


Filtering based on identity would never work, since creating new chat keys is trivial and cheap, so every message a spammer generates can carry its own unique public chat key.
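To illustrate how cheap identity creation makes per-identity filtering useless, here is a minimal sketch. It is not the actual Status key scheme (which uses secp256k1 keypairs); it just stands in a fresh random 32-byte secret for each message, with a hash as the public identity, to show that a spammer can mint thousands of distinct identities in well under a second:

```python
import hashlib
import secrets

def new_chat_key() -> str:
    # Hypothetical stand-in for a fresh chat identity: a random
    # 32-byte private key, with its SHA-256 hash as the public ID.
    priv = secrets.token_bytes(32)
    return hashlib.sha256(priv).hexdigest()

# A spammer attaching a brand-new identity to every message:
keys = {new_chat_key() for _ in range(10_000)}
print(len(keys))  # 10000 distinct identities, none of them blockable in advance
```

Any blocklist built on identities seen so far never matches the next message, which is why the discussion turns to moderation and trust instead.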

As far as I can tell there is only one solution that works:
Communities/groups moderated by owners and trusted members.

The only other option is some simulacrum of that chain of trust, built from the interactions a given user/identity has within the network and whether those interactions are positive or negative (e.g. the emoji reactions used), but that would require a massive and complex monitoring apparatus, so it’s not really a solution we want.