Application Performance and UX

thanks Eric for your answer

Yes, I didn't phrase it exactly as I meant, but in any case I tested it with 10x: somewhere between ~60 and ~100 subscriptions are run for a single event when many screens are active, because with react-navigation, subscriptions remain active for a screen after you leave it.

That's true, but maybe we could find a way to run them in a separate tick as a low-hanging temporary solution, for example by splitting them up and dispatching later.
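
A minimal sketch of that "split and dispatch later" idea in plain JS (the helper names are mine, not the re-frame API): instead of running all pending handlers synchronously, run a slice per tick and reschedule the rest.

```javascript
// Run a queue of handler functions in slices, yielding to the event
// loop between slices so touches and rendering can be processed.
function runInSlices(handlers, sliceSize = 10) {
  const queue = handlers.slice(); // copy so callers keep their array
  function runSlice() {
    // Run at most `sliceSize` handlers synchronously...
    for (let i = 0; i < sliceSize && queue.length > 0; i++) {
      queue.shift()();
    }
    // ...then give the UI a tick before continuing with the rest.
    if (queue.length > 0) setTimeout(runSlice, 0);
  }
  runSlice();
}
```

Usage would be something like runInSlices(subs.map(s => () => compute(s))), trading total latency for responsiveness.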

I'll try to find time to look at that again.

that would be great!

1 Like

We went through the excruciatingly difficult decision to ditch React Native and move to Swift and Kotlin code bases.

I cannot tell you how much I despise React Native. You cannot even construct a performant table view with 100 cells.

I wish you all the best as you try to speed up Status.

5 Likes

Are they actually computed, or just active? Also, just like events, some subs are fairly quick to compute. For instance, if a sub just patches together a bunch of other subs, it's only a few assoc calls.

I’m wondering if this is more an issue in how react-native is being used?

As someone who has mostly used react-dom, I know this can be an issue: rendering cells for all available data leads to poor performance compared to rendering only the cells that come into view. There are solutions for this, like http://www.reactvirtualized.com/
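
For reference, the core computation behind such virtualized lists can be sketched in a few lines (a hedged illustration, not react-virtualized's actual code; it assumes fixed row heights):

```javascript
// Core idea behind list virtualization: given a scroll offset, render
// only the rows that intersect the viewport (plus a small overscan),
// instead of all rows. Fixed row height keeps the math O(1).
function visibleRange(scrollTop, viewportHeight, rowHeight, rowCount, overscan = 2) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(
    rowCount - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return { first, last };
}
```

With 1000 rows and a 600px viewport, only ~15 rows get mounted at any time, no matter how much data there is.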

Have you tried optimizing react’s rendering before migrating away?

We should really get some data on the performance impact of subscriptions vs input params in child components

Here is a comparison I would like to try, to verify some assumptions about subscription costs:

  • one huge chat subscription that includes the list of all messages, down to author-related information, with all the data flowing down to child components (almost what we have now; our current implementation is more hybrid, but it would do the trick for now)
  • one minimalist chat subscription with a vector of message ids, each message component subscribing to a parametrized subscription that gets the data for that message; the author component likewise uses its own parametrized subscription.
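
To make the comparison concrete, here is a toy sketch of the two shapes in plain JS, with a memoized derive function standing in for re-frame subscriptions (all names are illustrative):

```javascript
// Toy stand-in for a memoized subscription: recompute only when the
// watched input changes (compared by reference).
function derive(getInput, compute) {
  let lastInput, lastOutput, primed = false;
  return () => {
    const input = getInput();
    if (!primed || input !== lastInput) {
      lastInput = input;
      lastOutput = compute(input);
      primed = true;
    }
    return lastOutput;
  };
}

const db = { messages: { m1: { text: 'hi' }, m2: { text: 'yo' } } };

// Shape 1: one huge subscription over every message; any change to
// db.messages recomputes the whole list.
const allMessages = derive(
  () => db.messages,
  (msgs) => Object.values(msgs)
);

// Shape 2: a minimal id-vector subscription plus one parametrized
// subscription per message; a change to one message only invalidates
// that message's own subscription.
const messageIds = derive(() => db.messages, (msgs) => Object.keys(msgs));
const messageSub = (id) => derive(() => db.messages[id], (m) => m);
```

The real re-frame machinery is of course more involved, but the trade-off to measure is the same: one big recomputation vs. many small, rarely-invalidated ones.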

Difficulty: finding the proper way to measure performance (rendering vs. subscription computations)

Assumptions:

  • In the first solution we might save a lot of subscriptions, but the initial subscription might be expensive, and changes in its inputs might trigger lots of recomputation. Parent components would re-render when a child changes, but this might actually be negligible.
  • In the second solution there will be a lot of subscriptions, but they might not be so expensive and rarely recalculated. The subscriptions would also only be computed when the messages are actually rendered, which might save lots of computation in big chats, especially when scrolling up, where the first solution recomputes everything.
  • Having lots of subscriptions takes space, but can save lots of time, especially if they are rarely recomputed.

The chat screen is definitely the place to experiment, as it is where the most expensive operations occur. While I would expect the one-big-subscription model to be best for other screens, I think in the chat's case the small parametrized subscriptions might be better.

1 Like

This is what happens when the user presses the wallet tab. The times may be inaccurate because it's a develop build and the tracing tools add overhead. Also, this trace is for an empty account; under different conditions the times might differ. For example, if an account has a lot of chats, messages, tokens, etc., subscription calculation may take much longer.

:event :navigate-to :wallet-stack 65.0 ms
:event/handler :wallet-stack 1.0 ms
:event/do-fx 44.0 ms
:sub/run :…ad-messages-number 0.0 ms
:sub/run :…/pending-requests 0.0 ms
:sub/run :get :accounts/login 0.0 ms
:sub/run :…/settings 1.0 ms
:sub/run :get :…/advanced? 0.0 ms
:sub/run :get :chats/loading? 0.0 ms
:sub/run :…settings/currency 1.0 ms
:sub/run :search/filter 0.0 ms
:sub/run :wallet 0.0 ms
:sub/run :…subs/contacts 0.0 ms
:sub/run :…chat.subs/chats 1.0 ms
:sub/run :account/account 0.0 ms
:sub/run :chain-sync-state 0.0 ms
:sub/run :latest-block-number 0.0 ms
:sub/run :…-network-uses-rpc? 0.0 ms
:sub/run :wallet/all-tokens 0.0 ms
:sub/run :…twork-initialized? 0.0 ms
:sub/run :prices 0.0 ms
:sub/run :peers-count 0.0 ms
:sub/run :dimensions/window 1.0 ms
:sub/run :mailserver/state 0.0 ms
:sub/run :…/request-error? 0.0 ms
:sub/run :get :view-id 0.0 ms
:sub/run :get :network/type 0.0 ms
:sub/run :get :…/editing? 0.0 ms
:sub/run :bottom-sheet 0.0 ms
:sub/run :get :account/account 0.0 ms
:sub/run :get :…/profile 0.0 ms
:sub/run :network-status 0.0 ms
:sub/run :get :…/profile 0.0 ms
:sub/run :sync-state 0.0 ms
:event :…ui/pull-to-refresh 72.0 ms
:event/handler 71.0 ms
:event/do-fx 0.0 ms
:event :update-wallet 20.0 ms
:event/handler 6.0 ms
:event/do-fx 7.0 ms
:event :…/on-will-focus :wallet 7.0 ms
:event/handler :wallet 1.0 ms
:event/do-fx 0.0 ms
:sub/run :…/pending-requests 0.0 ms
:sub/run :get :accounts/login 1.0 ms
:sub/run :…/settings 0.0 ms
:sub/run :…ad-messages-number 0.0 ms
:sub/run :get :…/advanced? 0.0 ms
:sub/run :get :chats/loading? 0.0 ms
:sub/run :…settings/currency 0.0 ms
:sub/run :search/filter 1.0 ms
:sub/run :wallet 0.0 ms
:sub/run :…subs/contacts 0.0 ms
:sub/run :…chat.subs/chats 0.0 ms
:sub/run :account/account 0.0 ms
:sub/run :chain-sync-state 0.0 ms
:sub/run :latest-block-number 0.0 ms
:sub/run :…-network-uses-rpc? 0.0 ms
:sub/run :wallet/all-tokens 0.0 ms
:sub/run :…twork-initialized? 0.0 ms
:sub/run :prices 0.0 ms
:sub/run :peers-count 0.0 ms
:sub/run :dimensions/window 1.0 ms
:sub/run :mailserver/state 0.0 ms
:sub/run :…/request-error? 0.0 ms
:sub/run :get :view-id 0.0 ms
:sub/run :get :network/type 0.0 ms
:sub/run :get :…/editing? 1.0 ms
:sub/run :bottom-sheet 0.0 ms
:sub/run :get :account/account 0.0 ms
:sub/run :get :…/profile 0.0 ms
:sub/run :network-status 0.0 ms
:sub/run :get :…/profile 1.0 ms
:sub/run :sync-state 0.0 ms
:event :…ate-prices-success {[…], […]} 3.0 ms
:event/handler {:SNT {:…171.85}}} 0.0 ms
:event/do-fx 1.0 ms
:sub/run :…/pending-requests 1.0 ms
:sub/run :get :accounts/login 0.0 ms
:sub/run :…/settings 0.0 ms
:sub/run :…ad-messages-number 0.0 ms
:sub/run :get :…/advanced? 0.0 ms
:sub/run :get :chats/loading? 0.0 ms
:sub/run :…settings/currency 0.0 ms
:sub/run :search/filter 0.0 ms
:sub/run :wallet 0.0 ms
:sub/run :…subs/contacts 0.0 ms
:sub/run :…chat.subs/chats 0.0 ms
:sub/run :account/account 0.0 ms
:sub/run :chain-sync-state 0.0 ms
:sub/run :latest-block-number 0.0 ms
:sub/run :…-network-uses-rpc? 0.0 ms
:sub/run :wallet/all-tokens 0.0 ms
:sub/run :…twork-initialized? 0.0 ms
:sub/run :prices 0.0 ms
:sub/run :peers-count 1.0 ms
:sub/run :dimensions/window 0.0 ms
:sub/run :mailserver/state 0.0 ms
:sub/run :…/request-error? 0.0 ms
:sub/run :get :view-id 0.0 ms
:sub/run :get :network/type 0.0 ms
:sub/run :get :…/editing? 0.0 ms
:sub/run :bottom-sheet 0.0 ms
:sub/run :get :account/account 0.0 ms
:sub/run :get :…/profile 0.0 ms
:sub/run :network-status 0.0 ms
:sub/run :get :…/profile 0.0 ms
:sub/run :sync-state 0.0 ms
:sub/run :portfolio-value 1.0 ms
2 Likes

This is really helpful! Given the pull-to-refresh, I would assume this relates to requesting fiat value. @andmironov, can you look into a loading state for the wallet fiat amounts, or for the wallet as a whole? That way we can separate delay in rendering from delay in navigation.

2 Likes

I’m pretty sure FlatList in RN renders only the items it needs to render, and I feel like the chat view is the least of our problems right now. At least I didn’t notice that when I was profiling.

The problem I saw is a clogged JS thread, mostly when a lot of messages arrive. In the native case you would just offload this asynchronously to a background thread and deal with the results, but there isn't an option like that for a pure JS implementation.

Even if you make every JS call super efficient, there will still be a threshold at which the JS thread gets clogged. But it needs to stay free, because it also dispatches all the events from touches, etc.

So, one idea for fixing that part is to move all message processing to status-go and return only the necessary pre-processed window of messages (maybe 100 at a time) that we actually show. This way we keep all the message processing, persisting, etc. in the background.

There’s already an initiative on that by @adamb, and I’m planning to take it over when he is on vacation.

Btw, @flexsurfer, @yenda I want to discuss that with you :smiley:

4 Likes

Just had a call with @anna, @igor, and @andrey to discuss how Core UI should take performance into account as we build new screens and features.

Two main things came out of that:

  1. Per Igor’s post, moving some things to status-go will potentially affect new feature implementation. The Core UI team can build accordingly under @andrey’s leadership.
  2. Determining performance metrics will help the test team enforce relevant acceptance criteria.
    A bottom-up approach to this is to define a set of critical flows, break them into steps (e.g. request made, awaiting response, response received), measure how long they take under various conditions, and repeat after improvements. We can then set benchmarks and compare performance across builds. Anna will take the lead on defining critical flows and metrics with Igor.

Unedited call notes

4 Likes

You could use web workers in JS for background threads. https://github.com/devfd/react-native-workers is an example library I just found, but there may be something better. Before going down the multi-process path I would confirm that the main thread is indeed the issue and that its calls are being made correctly (IO-bound calls should not be blocking), as multi-proc comes with its own complexity. WatermelonDB (https://github.com/Nozbe/WatermelonDB) is an example of using workers in JS, specifically for local DB reads and writes, to take pressure off the main thread.

3 Likes

I think we tried this library a while ago and it had some problems, though that was about a year ago, so maybe it is better now.

Also, I'm really not sure this library actually switches threads on iOS; I only briefly looked at the source code, and I couldn't find anything that creates an additional JS thread or switches to one.

I did profile the app, and the JS thread was indeed clogged, mostly because we had to process each message coming from a mailserver in JS. Theorizing about performance bottlenecks is a pretty unfruitful endeavour.

I think we have the option to move some of our code down a level and go multithreaded where we need to, since we already have quite a lot of the logic implemented there. If we were just a "pure" RN app, it would have made sense to look for an RN library for that. But we already have status-go and an API that we expose, so I think we should use that.

2 Likes

I'm curious to know what goes into processing a message, and how long, on average, that single operation takes.

1 Like

@igor I think there is still one problem with messages: even though a lot of work was done to support batching them, it appears there is currently still one event dispatched down the line for each message (the :message/add event in the ::chat-received-message/add-fx side effect). So if someone could look into that, it might already account for a big part of the clogging when fetching from the mailserver.

edit: I checked and they are received all at once so I was wrong

1 Like

Prepared a PR with subscription optimizations, plus some optimization of navigation between the main tabs; the main wallet screen is optimized as well: https://github.com/status-im/status-react/pull/8050

3 Likes

Thank you @andrey, that is awesome work that we should have done a long time ago. Kudos!

I have two threads open to discuss implementation efforts to improve performance:

  • the first is about the communication protocol: I describe the steps to take before moving the whole thing to status-go (doing so without taking these steps first sounds like a very bad idea to me); @igor is planning to look at the first step next week. Performances: before moving communication protocol to status-go
  • the second is about improving wallet performance and should take about a week; my plan is to look at the react side next week, and the go side is basically the same as for the communication protocol. [Proposal] Fix the transactions list
2 Likes

One more picture from me, sorry
Here you can see why the wallet screen is slow: each time we open it, we (1) request all token balances from the chain and (2) request prices from an HTTP API.

(1) can be optimized by moving it to status-go
(2) maybe we could cache the prices and only request them at some interval

1 Like

@andrey this is already being discussed here [Proposal] Fix the transactions list

2 Likes

How Discord achieves native iOS performance with React Native

1 Like

Upgrade to RN 0.60.5 and enable Hermes JS engine

The Hermes JS engine improves React Native performance by decreasing memory utilization, reducing download size, and decreasing the time it takes for the app to become usable, or "time to interactive" (TTI).

Use identicon / gfycat from status-go

Results: We confirmed the performance improvement with the test team: roughly a 45% improvement.

Summary

Moves gfycat and identicon generation to status-go. Names and pictures have changed, as we use a different algorithm to generate them (a linear feedback shift register instead of the Mersenne Twister).

Message and contact info is pre-generated on message arrival.
The rest of the calls are blocking (visiting a profile, for example), but no performance degradation is present (they were effectively blocking before as well).

2 Likes

Receive messages perf improvements

Results: When fetching 100 test messages from the offline inbox, the app may freeze for ~1-1.2 s, which is almost three times better than develop!

Summary

A few performance fixes:

  1. Pass a string payload instead of a hex-encoded string, to avoid unnecessary conversion
  2. Don't run js->clj on messages, as that's fairly expensive and we can get away without it
  3. Don't use pr-str / read-string; convert to JSON instead

Findings

Looking at the performance of the app when receiving messages through the profiler, we can see that the bottleneck is (as we already knew) the JS thread, which is hogged the whole time, preventing any user input from being handled and therefore "freezing" the app.

Looking a bit deeper into it, we can split the workload into two sections (events):

1. The actual receiving of the raw messages from status-go (the messages.new event):

status-react/src/status_im/signals/core.cljs

Line 61 in 792df32

“messages.new” (transport.message/receive-messages cofx event-js)

2. The processing of the actual text messages:

status-react/src/status_im/events.cljs

Line 638 in 474ff00

(chat.message/receive-many cofx messages)))

Both events should be handled in under 16 ms to keep the UI responsive.
Investigation showed that neither of them is close to that.

With regard to the first event, the bottlenecks identified were:

  1. Conversion from js->clj: this was by far the most expensive operation, and it has been addressed by keeping the payload as #js.
  2. Calling hexToUtf8: this was not necessary, as we can just provide the string version from status-go at no cost.
  3. Transit parsing, to a minor extent (not changed)
  4. Spec validation is expensive (not changed)

In the second event (currently heavier than the first), most of the time is spent running the following code (roughly in order of cost):

status-react/src/status_im/chat/models/message.cljs

Line 276 in 474ff00

messages-fx-fns (map add-received-message deduped-messages)

status-react/src/status_im/chat/models/message.cljs

Line 277 in 474ff00

groups-fx-fns (map #(update-group-messages chat->message %) chat-ids)]

status-react/src/status_im/chat/models/message.cljs

Line 264 in 474ff00

chats-fx-fns (map (fn [chat-id]

status-react/src/status_im/chat/models/message.cljs

Line 282 in 474ff00

[(chat-model/update-dock-badge-label)])

How to address

First, as already discussed, we should move most of the code to status-go; that would be the natural first step. Before we do that, though, it would be useful to simplify the message-handling code and verify that if we ditch some performance optimizations (say we don't use message groups anymore, but get an already-sorted list from status-go), we still come out ahead overall.

Another thing is that we currently naively save messages in the db (including recalculating groups, etc.) even if the user is not in that chat (or the message is not in the current pagination window), which is wasteful.

As discussed with @yenda, there's also a fair amount of regex parsing for each message received (more problematic the longer the message is). My suggestion would be to parse messages on the sender side and send metadata with each message. This might incur a higher bandwidth cost, but it has some benefits:

  1. Better perf. on receiving a message
  2. Separate the way a user inputs a message from how it is rendered, allowing us to add styling/features more easily

For example:
A message that is composed as:

Hey @nobody **this** is a www.link.com or check this #channel

Would be sent across the wire as something like this (copying the Twitter API, which does something similar; this version is verbose, but for some fields the bandwidth cost can be the same, e.g. bold is just two integers in place of the asterisks):

{:text "Hey @nobody this is a www.link.com or check this #channel"
 :entities {:mentions [[x,y,pk]], :links [[x,y]], :channels [[x,y,channel]], :bold [[x,y]]}}

Changing the input from markdown to HTML, for example, will not change the associated metadata, and there's a slightly better upgrade path: as the markings are stripped, a client that can't render bold won't see the **; the text will just be shown plain.
But I guess that's something for later.
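
As an illustration of how a client could consume such entity metadata (a sketch with hypothetical names, assuming non-overlapping ranges; extra tuple elements like the mention's pk are simply ignored here):

```javascript
// Render a message from plain text plus entity metadata. Each entity
// range is [start, end) into the text; we cut the string into styled
// spans. The output is a plain array of {text, styles} objects
// standing in for real components.
function renderWithEntities(text, entities) {
  // Flatten {bold: [[0,3]], mentions: [[4,11]]} into one sorted list.
  const marks = [];
  for (const [kind, ranges] of Object.entries(entities)) {
    for (const [start, end] of ranges) marks.push({ kind, start, end });
  }
  marks.sort((a, b) => a.start - b.start);

  const spans = [];
  let pos = 0;
  for (const { kind, start, end } of marks) {
    if (start > pos) spans.push({ text: text.slice(pos, start), styles: [] });
    spans.push({ text: text.slice(start, end), styles: [kind] });
    pos = end;
  }
  if (pos < text.length) spans.push({ text: text.slice(pos), styles: [] });
  return spans;
}
```

The receiver never re-parses markdown; it just slices the text by the ranges it was given.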

Another thing we should do is split the workload into manageable chunks.
Currently we process however many messages are received in one go (200, 300, 500, etc.).
This gives the shortest total time to process the messages, but it's detrimental to the user experience, as it definitely hogs the JS thread and results in the app freezing.
We should probably split the messages into batches that can each be processed in under 16 ms and dispatch after each chunk, as described in https://github.com/day8/re-frame/blob/69a3fcd411369fff65404e7d6a924ce1968e2dba/docs/Solve-the-CPU-hog-problem.md
I have done that, but the results were mixed. On a more powerful device I saw a slowdown (but no real freeze) after pulling 700 messages in one go, which was promising compared to 10 seconds of freezing. However, it took much longer to process the messages overall (roughly a minute or so), though there's room for improvement (see the points above). From inspection I noticed that the initial event mostly stayed under 16 ms, while the second event would often go above it (~100 ms). Different batch sizes were tried; 1 is the one that gave the smoothest experience.

On a slower device, though, the UI completely choked, as events took much longer than 16 ms, so this is something we should revisit.
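
One possible refinement over a fixed batch size is a time-budgeted chunk: keep processing until the frame budget is spent, then yield. A hedged sketch in plain JS (the names are mine, not the app's code):

```javascript
// Process messages in time-budgeted chunks: handle items until the
// per-frame budget (well under 16 ms) is spent, then yield to the
// event loop and reschedule. This adapts the effective chunk size to
// the device instead of hardcoding it.
function processBudgeted(items, handleOne, budgetMs = 12, now = Date.now) {
  const queue = items.slice();
  function chunk() {
    const start = now();
    while (queue.length > 0 && now() - start < budgetMs) {
      handleOne(queue.shift());
    }
    if (queue.length > 0) setTimeout(chunk, 0);
  }
  chunk();
}
```

A fast device drains the queue in a few large chunks; a slow one naturally falls back toward one message per tick.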

TLDR

We should move stuff to status-go, and at the same time simplify and remove computations that can be avoided.

Parse messages in status-go

Summary

Enables parsing of messages in status-go. Currently only a few message types are supported in status-protocol-go; for now we only enable Message types. status-react will conditionally use the parsed version if present. Eventually this can be moved to a separate signal or a different structure, but for the time being it is best to validate with the minimum amount of changes.

The next step would be to handle validation and processing of the field in status-go, so we can skip saving the message from status-react.

This commit should improve the performance of receiving messages in a chat, although I haven't had time to validate that.

Currently the JSON coming from status-protocol-go is a bit of a mess (underscore fields mixed with kebab-case), but I will go about making it nicer (maybe swap to protobuf?) once the parsing is fully in.

Use whisper-timestamp instead of timestamp & perf improvement

Results: receiving message history in a public chat with 100 messages; I also tried a public chat with 1000 (in the past that was a killing amount for a Galaxy Note 4), and there were no lags when receiving messages. NO LAGS! Less than 0.1 seconds on a slow device.

Can navigate between screens fine, regardless of the number of messages being fetched from the offline inbox.

Summary

==== Ordering of messages ====

Change the ordering of messages from a mixture of timestamp/clock-value to clock-value only.

Datemarks are no longer used for sorting, which means that the order of messages is now always causally related (this was not the case before, as we were breaking this property by sorting by datemark). Datemark calculation is unreliable, however (a reply to a message might have a timestamp lower than that of the message it replies to), so for timestamp calculation we naively group messages, ignoring "out-of-order timestamp" messages, although there's much to improve.
It fixes an issue whereby if a user changed their device time, their messages would be displayed in the past. It is still possible to craft a message with a lower clock value and order it in the past (there's no way to fully prevent this, though there are ways to mitigate it, outside the scope of this PR).

We keep timestamp as a field as we don’t want to completely rely on whisper for this.

==== Performance of receiving messages ====

The app would freeze when pulling messages from a mailserver (100 or so). This is due to the JS thread being hogged by CPU-bound work, coupled with the fact that we always tried to process all the messages in one go.

This strategy can't scale: given a batch big enough (200, 300, 1000), the UI will freeze.

Instead, each message is now processed separately, and we leave a gap between messages for the UI to respond to user input (otherwise the app freezes again). Pulling messages will take longer overall, but the app will be usable while it happens (albeit possibly slowed down).
Other strategies are possible (calculate off-db and do one big swap, avoiding many re-renders, etc.), but this is the strategy recommended by the re-frame author (Solving the CPU Hog problem), so it sounds like a safe baseline.

The underlying data structure for holding messages was also changed: we now use an immutable red-black tree (the same idea as Clojure's sorted map), but from a JS library, as it is twice as fast as the Clojure sorted map.

We also no longer re-sort all the messages each time we receive one (O(n log n)); instead we insert each message in order (O(log n)).
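
The re-sort vs. insert-in-order difference can be sketched with a binary search for the insertion point (the actual implementation uses a red-black tree; a plain array makes the O(log n) search easy to show, even though the splice itself is O(n)):

```javascript
// Find the insertion index for `clock` in an array of messages sorted
// by ascending clock value, using binary search: O(log n) comparisons
// per insert, versus re-sorting the whole list (O(n log n)) on every
// arrival.
function insertionIndex(messages, clock) {
  let lo = 0, hi = messages.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (messages[mid].clock < clock) lo = mid + 1;
    else hi = mid;
  }
  return lo;
}

function insertByClock(messages, message) {
  messages.splice(insertionIndex(messages, message.clock), 0, message);
  return messages;
}
```

In a balanced tree the insertion itself is also O(log n), which is what makes the tree worth the extra dependency.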

Other data structures considered but discarded:

  1. Plain vector: performance of prepending and inserting in the middle
    (both O(n)) was not great, as vectors are not really suited for these operations.
  2. Linked list: appealing, as append/prepend is O(1), while insertion is
    O(n). This is probably acceptable, as messages tend to come in order,
    either from the db (so adding N messages is O(n)) or from the network
    (mostly prepends, or close to the head), though the mailserver would not
    follow this path. An implementation of a linked list was built; it performed
    roughly the same as a Clojure sorted map (although with faster append/prepend),
    but was not worth the complexity of maintaining our own implementation.
  3. Clojure sorted map: probably the most versatile; performance was
    acceptable, but nowhere near the JavaScript implementation we decided on.
  4. Priority map: much slower than a sorted map (twice as slow).
  5. Mutable sorted map (a JS implementation, bintrees): not explored very much,
    but from a quick benchmark, performance was much worse than the Clojure
    immutable sorted map.

Given that each message is now processed separately, saving the chat/messages is also debounced, to avoid spamming status-go with network requests. This is a temporary measure until that's done directly in status-go, without ping-ponging with status-react.
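
The debouncing can be sketched like this in plain JS (illustrative names, not the actual implementation):

```javascript
// Debounce persistence: coalesce many per-message "save" calls into a
// single call made after writes go quiet for `waitMs`, so status-go
// is not hit with one network request per message.
function debounce(fn, waitMs) {
  let timer = null;
  return function debounced(...args) {
    if (timer !== null) clearTimeout(timer); // a new call resets the clock
    timer = setTimeout(() => {
      timer = null;
      fn(...args);
    }, waitMs);
  };
}

// e.g. const saveChat = debounce(persistChatToStatusGo, 300);
// (persistChatToStatusGo is a hypothetical name for the real save call)
```

Processing 100 messages back-to-back then results in a single save once the burst ends, rather than 100.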

The next step, performance-wise, is to move things to status-go (parsing of transit, validation, which is heavy), at which point we can reconsider performance and how to handle messages.

This also fixes an issue with the last message in a chat: we were using the last message in the chat list, which might not necessarily be the last message the chat has seen, in case messages were not loaded and a more recent message is in the database (say you fetch historical messages for 1-to-1 chat A, you have no messages loaded for 1-to-1 chat B, and you receive a historical message for chat B: it gets set as the last message).

Also, use cljs-bean instead of js->clj for type conversion.

There are still a few things to do (update desktop dependencies), and I am still going through some regression tests, as the impact of the PR is quite large (sorry for the large PR; I can split it if you'd like me to, but without a holistic approach it's difficult to check whether it was worth it). It's ready for review.

3 Likes