Modularisation, DataFlow and State Handling

Peter Szucs · Published in Source Diving · 8 min read · Jun 19, 2018

Introduction

I’m Peter and I work at Cookpad as a Senior Android Developer. For a long time now I have wanted to put down my thoughts on what a good application architecture looks like. Then Google I/O 2018 happened, and there is a whole bunch of new things that might just cross out the things I wanted to share. I should probably watch everything related on YouTube, read all the articles and then finally rethink every bit of this idea and whether it is still awesome.

Well, you know what? I do believe that whatever Google came up with is still a good idea, although unfortunately not even close to mine. My ex-colleague and I mainly built on Paco’s and Jake Wharton’s RxState ideas, added some more things of our own, and here we are.

Also, have you got the feeling that there are many new things to take in when you write an Android application? In recent years Google, who is quite someone on the playground, started to be opinionated about things they previously didn’t really care about. You could use whatever architecture you wanted (or just generally do whatever you wanted). Then suddenly they have suggestions about all the things.

These things are there so we can kick off a project in a super fast and easy way by following the Google guidelines. I believe we are going in a good direction. Finally, developer groups don’t have to argue for weeks about which M-V-Star architecture should be used and whether they want to do X or Y. Obviously, I’m not saying every project and person has to blindly follow Google, but in my experience it helps and quickens the decision-making process.

On the other hand, Google didn’t reinvent the wheel; they picked up some technologies they believed could serve most Android use cases. Many of these technologies are not even close to new: they have been used in Android apps (and by the community) for many years. Just to mention one example: RxLifecycle.

The Content

I’d like to write about two things in particular:

  • Modular Architecture — Horizontal and Vertical slices
  • DataFlow and State

I’ve also used dependency injection (in this case Dagger) to connect the modules and provide dependencies.

Modular Architecture

We can take an app and divide it into modules for several reasons. I don’t really want to go into details on the ‘why’ side, but I do believe that dividing up a monolithic app can have several benefits, among others:

  • The developers working on the app have fewer conflicts
  • Smaller teams can take ownership of the feature modules
  • And of course, better build speeds.

There are talks on YouTube, even from Google, and many articles around this topic: modularised apps can gain significantly better build times, especially if we use some kind of annotation processing.

Another benefit is encapsulation and dependency visibility.

For example, if a developer in a UI-related module wants to access Retrofit, it just won’t happen, as the module does not have that dependency. And yes, you can just add the dependency, but it is still a step that has to be taken, and it already warns you that maybe you are on the wrong path.

Every app can be divided in two ways:

  • extract purely technical modules, like networking
  • and feature modules that use the technical ones.

I like to call these horizontal and vertical modules, although I’d like to emphasise that the dependency arrows always go in one direction, which is vertical. So a feature module can depend on technical modules, and a technical module can depend on other technical ones or on the base, but the base, for example, cannot depend on a feature.
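To make the direction rule concrete, here is a rough sketch of how it could be expressed in Gradle; the module names (:user, :network, :config, :base) are just placeholders matching the examples in this post:

// :user/build.gradle.kts, a feature (vertical) module
dependencies {
    implementation(project(":network"))  // feature -> technical: allowed
    implementation(project(":base"))     // feature -> base: allowed
    // no other module ever declares project(":user") as a dependency
}

// :network/build.gradle.kts, a technical (horizontal) module
dependencies {
    implementation(project(":config"))   // technical -> technical: allowed
    implementation(project(":base"))     // technical -> base: allowed
    // :network never depends on a feature module
}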

The above picture shows a possible modular architecture. The base module is just a possibility, for things like holding dependencies.gradle or anything you’d rather not separate into its own module. Technical modules only contain common things; for example, the network module provides Retrofit and depends on config (URLs, headers), logging and analytics/experiments.
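As a minimal sketch of what could live in :network, assuming Dagger, OkHttp, Gson and RxJava 2, with the base URL and logging inlined instead of really coming from :config and :logging:

import dagger.Module
import dagger.Provides
import okhttp3.OkHttpClient
import okhttp3.logging.HttpLoggingInterceptor
import retrofit2.Retrofit
import retrofit2.adapter.rxjava2.RxJava2CallAdapterFactory
import retrofit2.converter.gson.GsonConverterFactory
import javax.inject.Singleton

@Module
class NetworkModule {

    @Provides @Singleton
    fun provideOkHttpClient(): OkHttpClient =
        OkHttpClient.Builder()
            // logging/headers would normally be injected from the :logging and :config modules
            .addInterceptor(HttpLoggingInterceptor())
            .build()

    @Provides @Singleton
    fun provideRetrofit(client: OkHttpClient): Retrofit =
        Retrofit.Builder()
            .baseUrl("https://api.example.com/")  // would come from :config in reality
            .client(client)
            .addConverterFactory(GsonConverterFactory.create())
            .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
            .build()
}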

So the :network module’s only responsibility is to provide a Retrofit object for higher modules. A feature-specific ApiClient, for example one getting user data, would be placed into its own feature module, :user, as that is its responsibility. It would use Retrofit from the :network module.
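A hypothetical example of such a feature-level client; UserApi, UserDto and UserApiClient are made-up names for illustration:

import io.reactivex.Single
import retrofit2.Retrofit
import retrofit2.http.GET
import javax.inject.Inject

// Made-up DTO and endpoint, purely for illustration.
data class UserDto(val id: String, val name: String)

interface UserApi {
    @GET("v1/me")
    fun getUser(): Single<UserDto>
}

// Lives in the :user feature module and only consumes the Retrofit instance
// that the :network module provides through dependency injection.
class UserApiClient @Inject constructor(retrofit: Retrofit) {

    private val api = retrofit.create(UserApi::class.java)

    fun fetchUser(): Single<UserDto> = api.getUser()
}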

The feature modules should be closed, which means I can say from the outside:

SomeModuleProvider.install(context).inject(this)

And with this single line I build up the feature’s whole scoped dependency tree. SomeModuleProvider (which provides the feature module’s component) then creates new instances or fetches singleton instances from lower layers. All the underlying pieces, like ViewModels or even Presenters, get prepared and get their dependencies injected from lower-level modules, which basically means less parameter passing: more implicit, fewer explicit function parameters.
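A rough sketch of what such a provider could look like; UserComponent, DaggerUserComponent, MyApp and appComponent are assumed names, not the actual code behind the article:

import android.content.Context

object UserModuleProvider {

    private var component: UserComponent? = null

    // Builds the feature-scoped component once and reuses it while the feature lives.
    fun install(context: Context): UserComponent =
        component ?: DaggerUserComponent.builder()
            .appComponent((context.applicationContext as MyApp).appComponent)
            .build()
            .also { component = it }

    // Called when the feature goes away, so its scope does not outlive it.
    fun destroy() {
        component = null
    }
}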

The above line, where we get our component and inject things, happens at the entry point of the feature, and it also defines the scope of the provided dependencies. So let’s consider a feature module with an activity or fragment as its entry point. With the mentioned technique we inject the ViewModel, or to be more precise its factory, and then we retain the ViewModel. The factory injects the ViewModel’s dependencies into the ViewModel. In the example below, the factory is always ‘recreated’ when there is an orientation change, unless the install(…) call on the provider provides it as a singleton.

So, as the sketch below shows, we create the component and retain the ViewModel, which gets all its dependencies from the injected factory so they can be provided…
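Here is a minimal sketch of such an entry point, using the hypothetical UserModuleProvider from above and a made-up UserViewModel (pre-AndroidX imports, as they were at the time of writing; the layout id is a placeholder too):

import android.arch.lifecycle.ViewModelProvider
import android.arch.lifecycle.ViewModelProviders
import android.os.Bundle
import android.support.v7.app.AppCompatActivity
import javax.inject.Inject

class UserActivity : AppCompatActivity() {

    // Re-injected on every creation (e.g. after rotation), unless the provider
    // scopes it as a singleton.
    @Inject lateinit var viewModelFactory: ViewModelProvider.Factory

    private lateinit var viewModel: UserViewModel

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // One line builds the feature's scoped dependency graph and injects this entry point.
        UserModuleProvider.install(this).inject(this)
        // The ViewModel itself is retained across configuration changes.
        viewModel = ViewModelProviders.of(this, viewModelFactory).get(UserViewModel::class.java)

        setContentView(R.layout.activity_user)
        bindEvents()      // event wiring, sketched in the DataFlow section below
        observeModel()    // state rendering, sketched in the DataFlow section below
    }
}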

DataFlow and State

From now on we will talk about the DataFlow between the different vertical layers:

View → ViewModel → Model || Model → ViewModel → View

The data flow is quite well defined: you obtain data from the network with Volley (khm, Retrofit, khm), save it to Room, the repository decides whether to read the database or query new data from the API, it updates your ViewModel’s LiveData, and finally the changes are reflected in your view with DataBinding. Or maybe without DataBinding. But it’s still the same: we update the view.
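A simplified repository sketch for this flow, reusing the hypothetical UserApiClient and UserDto from earlier and assuming a Room DAO shaped like the one below:

import io.reactivex.Maybe
import io.reactivex.Single
import javax.inject.Inject

// Placeholder Room DAO shape, just enough for the sketch.
interface UserDao {
    fun findUser(): Maybe<UserDto>
    fun insert(user: UserDto)
}

class UserRepository @Inject constructor(
    private val apiClient: UserApiClient,
    private val dao: UserDao
) {
    // Serve the cached row if Room has one, otherwise hit the API and cache the result.
    fun getUser(): Single<UserDto> =
        dao.findUser()
            .switchIfEmpty(refreshUser().toMaybe())
            .toSingle()

    private fun refreshUser(): Single<UserDto> =
        apiClient.fetchUser()
            .doOnSuccess(dao::insert)
}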

From the View’s point of view, we just map all the clicks and UI events to data and pass them to the ViewModel, which then decides what to do with the different UI actions. Usually it makes some request and updates the View based on the Model changes.

One thing is usually common: this whole system is reactive. Instead of asking the different underlying structures and dependencies for data, the dependencies push data to their consumers. You may say “c’mon, the data from the API is pulled, not pushed”. And yeah, it’s true, but the whole flow remains reactive, as it is triggered by a user event, which is “pushed”.

To sum up every reactive scenario: an actor initiates a model change, that model change initiates a request that updates the model, and the change is reflected on the UI.

Our ViewModel is there for two reasons:

  • to connect the dots, the pipes, or to use a more programmatic term, the streams. It decides which input initiates which action, and which action produces which output on the UI.
  • and it holds the current model of the UI in the stream.

Below is the View side, as seen in the picture above: mapping UI events to types that the ViewModel then handles, plus some simple render functions that handle the basic UI states: Success, Error and Loading.
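The following is a sketch of that wiring, written as member functions of the hypothetical UserActivity from earlier; the UiEvent, UiModel and RetrofitError types are sketched in the ViewModel part below, and the view references and string resources are placeholders:

// Member functions of the hypothetical UserActivity; retryButton, progressBar,
// userName and the R.string resources are placeholder view references.

private fun bindEvents() {
    // Every click becomes a typed event pushed into the ViewModel's subject.
    retryButton.setOnClickListener { viewModel.uiEvents.onNext(UiEvent.RetryClick) }
    // Kick off the initial load as soon as the screen appears.
    viewModel.uiEvents.onNext(UiEvent.ScreenLoad)
}

private fun observeModel() {
    // LiveData is lifecycle aware, so the observer is removed automatically.
    viewModel.uiModel.observe(this, Observer { model ->
        when (model) {
            is UiModel.Loading -> renderLoading()
            is UiModel.Success -> renderSuccess(model.user)
            is UiModel.Error -> renderError(model.error)
            null -> Unit
        }
    })
}

private fun renderLoading() {
    progressBar.visibility = View.VISIBLE
}

private fun renderSuccess(user: UserDto) {
    progressBar.visibility = View.GONE
    userName.text = user.name
}

private fun renderError(error: RetrofitError) {
    progressBar.visibility = View.GONE
    val message = when (error) {
        is RetrofitError.Network -> getString(R.string.error_network)
        else -> getString(R.string.error_generic)
    }
    Toast.makeText(this, message, Toast.LENGTH_SHORT).show()
}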

As you can see, there are no unsubscribe() / dispose() lines, which is an additional benefit: we don’t need to unsubscribe from the subject (which receives the UI events), nor from the LiveData that we observe for model changes. The uiEvents subject never gets any reference to the Context, so it will be cleaned up together with the ViewModel, and no leaks can happen as long as no Context is passed to it. LiveData, on the other hand, is lifecycle aware, so it deals with disposing of itself automatically.

The next layer is the ViewModel itself: the place where we “store” the state of the UI and connect actions with use cases.

From this point we have two options:

  • to use UiEvent → Action, Result → UiModel transformers and always act on a single state with the help of RxJava operators (forming a single stream of events, handling actions with different transformers based on their types, combining the results again, modifying the state, and finally rendering it on the UI),
  • or not to use transformers and make it a little bit “simpler”.

So here is option 2, where I didn’t use any transformers.
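What follows is a condensed sketch of what such a ViewModel can look like, reusing the hypothetical UserRepository and defining illustrative UiEvent/UiModel types; the toRetrofitError() helper is sketched a bit further down:

import android.arch.lifecycle.LiveData
import android.arch.lifecycle.MutableLiveData
import android.arch.lifecycle.ViewModel
import io.reactivex.android.schedulers.AndroidSchedulers
import io.reactivex.schedulers.Schedulers
import io.reactivex.subjects.PublishSubject
import javax.inject.Inject

// Illustrative UI event and UI model types for the sketch.
sealed class UiEvent {
    object ScreenLoad : UiEvent()
    object RetryClick : UiEvent()
}

sealed class UiModel {
    object Loading : UiModel()
    data class Success(val user: UserDto) : UiModel()
    data class Error(val error: RetrofitError) : UiModel()
}

class UserViewModel @Inject constructor(
    private val repository: UserRepository
) : ViewModel() {

    // 1st stream: every UI event lands here for as long as the UI exists.
    val uiEvents: PublishSubject<UiEvent> = PublishSubject.create()

    // 2nd stream: the current state of the UI, kept in LiveData.
    private val model = MutableLiveData<UiModel>()
    val uiModel: LiveData<UiModel> = model

    init {
        // No dispose() needed: the subject holds no Context and dies with the ViewModel.
        uiEvents.subscribe { event ->
            when (event) {
                UiEvent.ScreenLoad, UiEvent.RetryClick -> loadUser()
            }
        }
    }

    private fun loadUser() {
        // Always start with Loading, then end up in either Success or Error.
        model.postValue(UiModel.Loading)
        repository.getUser()
            .subscribeOn(Schedulers.io())
            .observeOn(AndroidSchedulers.mainThread())
            .subscribe(
                { user -> model.value = UiModel.Success(user) },
                { throwable -> model.value = UiModel.Error(throwable.toRetrofitError()) }
            )
    }
}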

As you can see, there are two streams: uiEvents (the first stream), which gets all the input events from the UI; as long as the UI exists, it will keep catching those events. Based on the event types, it calls some repository functions (use cases) that return a response and then update the model (the second stream) with one of the possible outcomes we already talked about: Success, Error or Loading.

I also transform the errors coming from my API into so-called RetrofitErrors, and based on their type I can show different error messages to the user.
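One possible shape for that mapping, purely as an illustration:

import retrofit2.HttpException
import java.io.IOException

// A possible shape for the error mapping; real code would likely be richer.
sealed class RetrofitError {
    object Network : RetrofitError()                       // connectivity problems, timeouts
    data class Http(val code: Int) : RetrofitError()       // non-2xx responses
    data class Unexpected(val cause: Throwable) : RetrofitError()
}

fun Throwable.toRetrofitError(): RetrofitError = when (this) {
    is IOException -> RetrofitError.Network
    is HttpException -> RetrofitError.Http(code())
    else -> RetrofitError.Unexpected(this)
}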

There is some duplication as well that could easily be avoided, but what I wanted to show here is that I always start with a Loading result, followed by either a Success or an Error.

One of the most important things is that this way I keep my state in the stream, which is a LiveData.

One benefit of this setup (just like using a BehaviourSubject) is that it will always return the last state I was in; on orientation change this is very useful, as it just loads the last available state.

It is also highly testable, as each piece can be tested in isolation by providing a mocked repo or view, and it is very easy to debug, as we always have the current state in the stream.
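For instance, a unit test against the sketched ViewModel could look roughly like this, assuming Mockito (with mockito-inline, so the final repository class can be mocked) and the InstantTaskExecutorRule from the architecture components testing artifact:

import android.arch.core.executor.testing.InstantTaskExecutorRule
import io.reactivex.Single
import io.reactivex.android.plugins.RxAndroidPlugins
import io.reactivex.plugins.RxJavaPlugins
import io.reactivex.schedulers.Schedulers
import org.junit.Assert.assertTrue
import org.junit.Before
import org.junit.Rule
import org.junit.Test
import org.mockito.Mockito.`when`
import org.mockito.Mockito.mock

class UserViewModelTest {

    // Makes LiveData deliver values synchronously on the test thread.
    @get:Rule
    val instantTaskRule = InstantTaskExecutorRule()

    @Before
    fun setUp() {
        // Replace the io and Android main-thread schedulers so the chain runs synchronously.
        RxJavaPlugins.setIoSchedulerHandler { Schedulers.trampoline() }
        RxAndroidPlugins.setInitMainThreadSchedulerHandler { Schedulers.trampoline() }
    }

    @Test
    fun `screen load ends in Success when the repository returns a user`() {
        // Assumes mockito-inline (or an open/interface repository) so the class can be mocked.
        val repository = mock(UserRepository::class.java)
        `when`(repository.getUser()).thenReturn(Single.just(UserDto("1", "Peter")))

        val viewModel = UserViewModel(repository)
        viewModel.uiEvents.onNext(UiEvent.ScreenLoad)

        assertTrue(viewModel.uiModel.value is UiModel.Success)
    }
}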

As for the first option, where we use transformers, I’d like to cover it in a completely separate blogpost in the future with a complete code example, as this article has already got pretty long and that would make it even longer.

I hope you liked this article and that it was understandable. Thanks for reading.
