Switching between chats on desktop (Mac) and iOS has felt delayed compared with other messengers I'm used to. On desktop, clicking to another room or conversation takes nearly 1 second from click to updated view.
On iOS it's also nearly 1 second before the conversation is visible. I noticed that in something like WhatsApp the conversation is already rendered while the animation slides in, whereas on Status the window is blank during the animation. Practically, one can't really read much while it's sliding in, but having the conversation already rendered in the incoming window does make it seem faster. I also think the slide-in animation on Status is slower than on messengers like WhatsApp.
I think it was a smart move by WhatsApp to make the slide-in animation slightly faster than the iOS default Messages app, because it makes WhatsApp feel faster than the default Messages app.
This is fairly shocking. I’m having much better load times on iOS, though if I haven’t synced in some time it’s much slower. But when I open a conversation, for instance, it takes less than 1 second.
I do have issues with chats syncing out of order, which is disorienting when switching devices.
the 9+ icons are essentially meaningless to me
The issue with 9+ icons for unread messages is on the front-end Core UI backlog and meant to be tackled in Q2 as part of the mark all as read feature, which would allow users to pick up where they left off in a public channel and then jump to bottom. The problematic behaviour for me there is that the 9+ badges do not clear properly once a conversation has been viewed. Is that also what you’re referring to @jarradhope?
so the device I run these examples on is a Huawei M5 8-inch tablet (yes, I know tablets aren't supported, it's just an Android phone with a larger screen, calm yo self)
It has a HiSilicon Kirin 960 processor, 4GB of memory and a Mali-G71 MP8 GPU, state of the art in 2016. I'm not sure how useful the old/new distinction is considering our UI complexity (the processor should be fine for decryption), and every other app performs wonderfully on it.
I have ~120 chats, 10 DApps, and a 100 Mbps download speed.
I think we will start with the cold startup and mostly focus on Android, looking at the numbers.
Also, this slowness of syncing looks very suspicious, but it might be mitigated by switching protocol parsing to status-go (that needs to be confirmed, though).
And I guess we will start by replicating the conditions: making a test account with ~120 chats on a device with specs similar to the Huawei M5. I think we have a bot that can generate a decent history for these chatrooms.
As you can see in the picture, we have one JS thread, and when a user presses a button, we:
register re-frame event in a queue
run all events in a queue
calculate ALL active subscription functions (currently ~100!!)
calculate all active render functions
parse Hiccup and convert it to JS
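To make the cost concrete, here's a tiny sketch in plain JS of the loop described above (names like appDb and processQueue are made up for illustration; this is not the real re-frame implementation):

```javascript
// Simplified, hypothetical analogue of the single-JS-thread event loop.
let appDb = { counter: 0 };
const eventQueue = [];
const subscriptions = {
  counter: (db) => db.counter,
};

function dispatch(handler) {
  eventQueue.push(handler); // step 1: register the event in a queue
}

function processQueue(render) {
  // step 2: run all queued events against app-db
  while (eventQueue.length) {
    appDb = eventQueue.shift()(appDb);
  }
  // step 3: recompute ALL active subscriptions, related to the event or not
  const subs = {};
  for (const [name, f] of Object.entries(subscriptions)) {
    subs[name] = f(appDb);
  }
  // steps 4-5: run render functions and produce the view
  return render(subs);
}

dispatch((db) => ({ ...db, counter: db.counter + 1 }));
const view = processQueue((subs) => `count=${subs.counter}`);
// view === "count=1"
```

The point of the sketch: every dispatched event walks the whole subscription map, which is why ~100 active subscriptions per event hurts.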
Low-hanging fruit
profile the performance of all subs
build a more efficient subscription tree
find out why we calculate render functions and subscriptions for screens that aren't visible (i believe it's because of react-navigation)
inspect events; optimize or split them
move events and subs to a separate JS thread; because of the re-frame architecture it should be doable, so the main thread would only handle user-action events
find out how Realm impacts performance, because we need to convert message data many times and save it encrypted in Realm; i believe it would be much more efficient if we could store messages with status-go
Currently, we react to any change in app-db, and there may be subscriptions with expensive calculations, so we need to improve our subscription graph:
the first node must be a map with only the keys that impact subscriptions; next, one subscription for each key in this map; and then the subscriptions related to those keys
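A hedged plain-JS sketch of what such a layered graph buys us (illustrative names): the first-level "subscription" only extracts a key, the derived one is memoized on that input, so a change to an unrelated app-db key doesn't rerun the expensive computation:

```javascript
// Memoize a one-argument function on reference equality of its input,
// similar in spirit to how layered re-frame subscriptions avoid recomputation.
function memoize1(fn) {
  let lastArg, lastResult, called = false;
  return (arg) => {
    if (!called || arg !== lastArg) {
      called = true;
      lastArg = arg;
      lastResult = fn(arg);
    }
    return lastResult;
  };
}

let computations = 0;
const messagesSub = (db) => db.messages;            // first level: key extraction only
const sortedMessagesSub = memoize1((messages) => {  // second level: derived, expensive
  computations += 1;
  return [...messages].sort((a, b) => a.ts - b.ts);
});

let db = { messages: [{ ts: 2 }, { ts: 1 }], typedText: "" };
const sorted = sortedMessagesSub(messagesSub(db));
db = { ...db, typedText: "h" };      // an unrelated key changes...
sortedMessagesSub(messagesSub(db));  // ...and the sort is NOT rerun
// computations === 1
```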
re-frame has a solution for events: it suggests splitting calculations into small functions, and we used this before, but for some reason we decided to use our own solution with fx/merge, where we merge all the small event functions into one big one. This may lead to performance issues: for example, if you've received a lot of messages, you need to filter them, sort them, save them in Realm, etc., all in one function. Maybe we could find a compromise here and split things up, at least for events like messages.
Currently, we calculate all screens even if they are not displayed; maybe we could find a way to keep them in React but not calculate their subscriptions and render functions.
PS: the 5 steps above happen for each event; for example, when you type text into the input field, we calculate all subscriptions and view renderers for every character.
I am not sure about this; subscriptions that take other subscriptions as input should only recompute if those inputs have changed, afaik
that would be a nice effort. Similar to events, I would be in favor of putting all subscriptions to first-level keys of app-db in one file and using them as inputs for other, more complex subscriptions.
Such an effort would also include diagnosing poor design of app-db, i.e. top-level keys that should rather be nested to reduce the amount of recomputation.
I disagree with splitting events, more on this in response to your other paragraph.
while I agree with this, the simplest and most efficient solution might actually end up being the current effort to move chat protocols to status-go. If status-go stores messages and returns them per chat and in order, with a comprehensive API, we could avoid the most expensive computations on the status-react side.
already commented above, I think we are thinking roughly about the same thing
strongly disagree with that statement. fx/merge is about transactional events, which cancel out if an error occurs instead of partially changing app state as would happen with a chain of small events; they also avoid unnecessary recomputation of subscriptions.
the fx/merge events actually often do little work and are mostly about returning a map of effects, which then do the actual work; for instance, saving to Realm is done outside of the event.
Keep in mind that there is overhead to dispatching events: events dispatched within another event only run on the next tick, while those dispatched at the same time all run during the same tick. After events are computed, there is a check against app-db. fx/merge also saves a lot of time by merging changes to Realm.
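A rough plain-JS sketch of the transactional point (illustrative names, not the actual fx/merge API): the small step functions are merged into one handler, and app state is only committed if every step succeeds:

```javascript
// Merge several small step functions into one event handler,
// analogous in spirit to fx/merge composing fx/defns.
function mergeSteps(...steps) {
  return (db) => steps.reduce((acc, step) => step(acc), db);
}

const receiveMessage = mergeSteps(
  (db) => ({ ...db, messages: [...db.messages, { id: 3 }] }),
  (db) => ({ ...db, unread: db.unread + 1 }),
);

let appDb = { messages: [], unread: 0 };
try {
  // commit only the fully merged result in one go
  appDb = receiveMessage(appDb);
} catch (e) {
  // if any step throws, appDb is left untouched: no partial update,
  // unlike a chain of separately dispatched events
}
// appDb.unread === 1, appDb.messages.length === 1
```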
Now, sorting and filtering the messages, especially in big chats, is a big deal and not really related to fx/merge: it will take 99% of the computation time regardless of how many fx/defns are merged within the event. So the actual effort should rather go into solving that problem, either by offloading the work to another thread (which has its own complications), using status-go for it, or putting more effort into alternative data structures such as the ordered persistent hashmap, which could be leveraged for more caching and less sorting.
(parenthesis re-frisk really doesn’t like custom datastructures and my repl is often wrecked by millions of lines of exceptions such as
I think relying more on local state, especially for input fields, would already save us a lot of subscriptions and events; we can pass a validation function to those fields that require validation, it doesn't have to be done through an event
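A minimal sketch of that idea in plain JS (hypothetical names): keystrokes stay in local state, validation runs locally, and only submit dispatches a global event:

```javascript
// Keep per-keystroke state local to the component; only submit
// goes through the global event/subscription machinery.
function makeInput(onSubmit, validate) {
  let text = ""; // local state, not app-db
  return {
    onChangeText: (t) => {
      if (validate(t)) text = t; // validation without dispatching an event
    },
    submit: () => onSubmit(text), // one global event per message, not per char
  };
}

const dispatched = [];
const input = makeInput(
  (msg) => dispatched.push(msg),   // stands in for a re-frame dispatch
  (t) => t.length <= 4096,         // hypothetical max-length validation
);
input.onChangeText("h");
input.onChangeText("hi");
input.submit();
// dispatched === ["hi"]: one event instead of one per keystroke
```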
Overall I would very much be in favor of having only one subscription per screen, composed of subscriptions designed to minimize recomputation
yes, i didn't write it exactly as i meant, but anyway, i tested it with 10x, and ~60 to ~100 subscriptions are run per event when many screens are active; because of react-navigation, when you leave a screen its subscriptions remain active
that's true, but maybe we could find a way to run them in a separate tick, just as a low-hanging temporary solution, for example splitting them and using dispatch-later
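Something like this (plain-JS sketch, illustrative): split the batch into chunks, where each chunk would then be dispatched on a later tick (e.g. with re-frame's dispatch-later) so touch events can be handled in between:

```javascript
// Split a big batch of work items into fixed-size chunks; each chunk
// would be dispatched on its own tick instead of blocking the JS
// thread with one huge event.
function chunked(items, chunkSize) {
  const chunks = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    chunks.push(items.slice(i, i + chunkSize));
  }
  return chunks;
}

const batches = chunked([1, 2, 3, 4, 5], 2);
// batches === [[1, 2], [3, 4], [5]]; each batch handled in its own tick
```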
i’ll try to find a time once again to look at that
I’m wondering if this is more an issue in how react-native is being used?
As someone who has mostly used react-dom, I know this can be an issue: rendering cells for all available data leads to poor performance compared with rendering only the cells that come into view. There are solutions for this, like http://www.reactvirtualized.com/
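The core idea behind such virtualization libraries can be sketched in a few lines (simplified; real implementations also handle variable row heights, overscan, etc.):

```javascript
// Given the scroll position and fixed row height, compute which rows
// intersect the viewport; only those get rendered.
function visibleRange(scrollTop, viewportHeight, rowHeight, rowCount) {
  const first = Math.floor(scrollTop / rowHeight);
  const last = Math.min(
    rowCount - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) - 1
  );
  return [first, last];
}

const [first, last] = visibleRange(0, 600, 60, 1000);
// [0, 9]: only 10 of 1000 rows are actually rendered
```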
Have you tried optimizing react’s rendering before migrating away?
We should really get some data on the performance impact of subscriptions vs input params in child components
Here is a comparison I would like to try to verify some assumptions about subscriptions costs:
one huge chat subscription that includes the list of all messages down to author-related information, with all the data flowing down to child components (almost what we have now; our current implementation is more hybrid, but it would do the trick for now)
one minimalist chat subscription with a vector of message ids, each message component subscribing to a parametrized subscription that gets the data for that message; the author component itself uses its own parametrized subscription.
Difficulty: finding the proper way to measure perf, rendering vs subscription computations
With the first solution we might save a lot of subscriptions, but the initial subscription might be expensive, and changes in its inputs might trigger lots of recomputation. Parent components would rerender when a child changes, but this might actually be negligible.
With the second solution there will be a lot of subscriptions, but they might not be as expensive and would rarely be recalculated. Also, the subscriptions would only be computed when the messages are actually rendered, so it might save lots of computation in big chats, especially when scrolled up, where the first solution recomputes everything.
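A plain-JS sketch of the second model (illustrative names): the chat subscription returns only ids, and per-message data is looked up lazily, only for the messages actually on screen:

```javascript
const appDb = {
  messageIds: ["m1", "m2", "m3"],
  messages: { m1: { text: "hi" }, m2: { text: "yo" }, m3: { text: "sup" } },
};

let lookups = 0;
// minimalist chat subscription: a vector of ids, cheap and stable
const chatSub = (db) => db.messageIds;
// parametrized per-message subscription, computed only when a message renders
const messageSub = (db, id) => {
  lookups += 1;
  return db.messages[id];
};

// only the messages actually rendered trigger the parametrized sub
const visibleIds = chatSub(appDb).slice(0, 2);
const rendered = visibleIds.map((id) => messageSub(appDb, id).text);
// rendered === ["hi", "yo"], lookups === 2; off-screen m3 is never computed
```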
having lots of subscriptions takes space but can save lots of time especially if they are rarely recomputed
The chat screen is definitely the place to experiment, as it is where the most expensive operations occur. While I would expect the one-big-subscription model to be best for other screens, I think in the chat's case the small parametrized subscriptions might be better.
This is what happens when the user presses the wallet tab. The times may be incorrect because it's a develop build and the tracing tools add overhead. Also, this trace is for an empty account; under different conditions the times might differ; for example, if the account has a lot of chats, messages, tokens, etc., the subscription calculations may take much more time.
This is really helpful! Given the pull-to-refresh, I would assume this relates to requesting the fiat value. @andmironov can you look into a loading state for the wallet fiat amounts, or for the wallet as a whole? That way we can separate delay in rendering from delay in navigation.
I’m pretty sure FlatList in RN renders only the items it needs to render, and I feel like the chat view is the least of our problems right now. At least I didn’t notice that when I was profiling.
The problem I saw is the clogged JS thread, mostly when a lot of messages arrive. In a native app you would just offload this asynchronously to a background thread and deal with the results, but there isn't an option like that for pure JS implementations.
Even if you make every JS call super efficient, there still will be a threshold at which the JS thread will be clogged. But it needs to be free, because it also dispatches all the events from touches, etc.
So, one of the ideas for fixing that part is to move all message processing to status-go and only return the necessary pre-processed window of messages (maybe 100 at a time) that we show. This way we keep all the message processing, persisting, etc. in the background.
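A sketch of what such an API shape could look like (plain JS for illustration; in practice the filtering/sorting would happen natively inside status-go, and the function names here are made up):

```javascript
// Return one pre-processed window of messages for a chat, plus a cursor
// for fetching the next window, so the JS thread never touches the full set.
function getMessageWindow(allMessages, chatId, cursor, limit = 100) {
  const msgs = allMessages
    .filter((m) => m.chatId === chatId) // per chat...
    .sort((a, b) => b.ts - a.ts);       // ...and in order, done natively
  const window = msgs.slice(cursor, cursor + limit);
  return { window, nextCursor: cursor + window.length };
}

const all = [
  { chatId: "c1", ts: 1 },
  { chatId: "c2", ts: 2 },
  { chatId: "c1", ts: 3 },
];
const page = getMessageWindow(all, "c1", 0, 1);
// page.window holds only the newest c1 message; page.nextCursor === 1
```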
There’s already an initiative on that by @adamb, and I’m planning to take it over when he is on vacation.
Btw, @flexsurfer, @yenda I want to discuss that with you
Just had a call with @anna, @igor and @andrey to discuss how Core UI should take performance into account as we build new screens and features.
Two main things came out of that:
Per Igor's post, moving some things to status-go will potentially affect new feature implementation. The Core UI team can build accordingly under @andrey's leadership.
Determining performance metrics will help the test team enforce relevant acceptance criteria.
A bottom-up approach to this is to define a set of critical flows, break them into steps (e.g. request made, awaiting response, response received), measure how long they take under various conditions, and repeat after improvements. We can then set benchmarks and compare performance across builds. Anna will take the lead on defining critical flows and metrics with Igor.
You could use web workers in JS for background threads. https://github.com/devfd/react-native-workers is an example library I just found, but there may be something better. Before going down the multi-process path, I would confirm that the main thread is indeed the issue and that its calls are being made correctly (IO-bound calls should not be blocking), as multi-proc comes with its own complexity. WatermelonDB (https://github.com/Nozbe/WatermelonDB) is an example of using workers in JS, specifically for local DB reads and writes, to take pressure off the main thread.
I think we tried this library a while ago and it had some problems, though that was about a year ago, so maybe it is better now.
Also, I'm really not sure this library actually switches threads on iOS; I only briefly looked at the source code, and I couldn't find anything that creates an additional JS thread or switches to it.
I did profile the app, and the JS thread was indeed clogged, mostly because we had to process each message coming from a mailserver in JS. Theorizing about performance bottlenecks is a pretty unfruitful endeavour.
I think we have the option to move some of our code down a level and go multithreaded where we need it, since we already have quite a lot of the logic implemented there. If we were just a "pure" RN app, it would have made sense to look for an RN library for that. But we already have status-go and an API that we expose, so I think we should use that.