What difficulties/challenges do you face in unit testing (or overall testing)

IMO try to think about why you're writing the tests.

Suppose you took apart a jet engine, tested every part individually, and then put the engine back together. If those were the only tests, how confident would you feel that the plane will fly? Personally, I'd be 0% confident. Sure, such tests rule out a certain class of bugs, but there's still plenty of opportunity for failure that only integration and user-acceptance tests will catch.

So first of all, don't sink too much time into an activity that, on its own, gives you roughly 0% confidence that your software works as a whole; you need other good reasons to write these tests at all.

One possible reason is that you have a "unit" that you want others to feel confident using and reusing, e.g. a repository or an in-memory cache. Here, your "users" are "other programmers" or even your future self. You can't vouch for the entire application, but you can at least say that your unit works as advertised under all conditions, even unhappy paths.

Another possible reason is that you want a coarse-grained filter of fast-running tests that will catch big issues before more expensive test stages run (e.g. automated user acceptance tests). Be careful here, though. Unit tests may be cheaper to write and maintain than automated UATs, but they're still an additional expense. You probably can't feel confident releasing your software without running UATs on all of your high-priority test cases (whether tested manually or automated via something like Espresso), so unit tests can't replace the more expensive UATs; they'll always be an additional cost. For this reason to hold up, then, the unit tests really need to pay for themselves by stopping you from running UATs on software that is already known to be broken.

The reason I'm belaboring this is that I've seen a lot of devs write crappy, tiny little unit tests around every granule of software, mocking absolutely everything around it, just to hit some coverage threshold, and that is such a useless activity. All cost and almost no benefit.

Let me tell you something: when you isolate a class with test doubles, you're essentially making a lot of assumptions about how those collaborators behave. Those assumptions in turn affect how your SUT behaves, and they could mean that your tests are totally useless under real-life conditions.

I like to test in layers. Example:

First, I've got an in-memory cache that is used in production but can also serve as a database fake in tests, so I have a full set of tests against it.
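To make that concrete, here's a minimal sketch of the kind of cache I mean (the name and API are invented for illustration, not lifted from my actual code):

```kotlin
// A minimal in-memory cache. In production it backs fast lookups; in tests it
// stands in for the real database.
class InMemoryCache<K : Any, V : Any> {
    private val map = mutableMapOf<K, V>()

    @Synchronized fun put(key: K, value: V) { map[key] = value }
    @Synchronized fun get(key: K): V? = map[key]
    @Synchronized fun clear() = map.clear()
}
```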

I'm using the Repository pattern and I want other devs (including my future self) to have confidence in my Repositories, so I have a set of tests against each Repository, each using stubbed network responses and the in-memory cache as a fake database. Everything else is real, including the mappers that translate between database entities / network DTOs and domain models. One test per requirement (including unhappy paths).
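As a rough sketch of what one of those Repository tests might look like (UserApi, UserDto, User, and UserRepository are hypothetical stand-ins, not my real types), building on the cache above:

```kotlin
import kotlinx.coroutines.runBlocking
import org.junit.Assert.assertEquals
import org.junit.Test

// Hypothetical types, invented for this sketch.
data class UserDto(val id: String, val name: String)
data class User(val id: String, val name: String)

interface UserApi {
    suspend fun fetchUser(id: String): UserDto
}

// The real Repository: fetches over the network, runs the real mapper, caches.
class UserRepository(
    private val api: UserApi,
    private val cache: InMemoryCache<String, User>
) {
    suspend fun getUser(id: String): User =
        cache.get(id) ?: api.fetchUser(id)
            .let { User(id = it.id, name = it.name) } // real DTO -> domain mapping
            .also { cache.put(id, it) }
}

// A reusable stubbed network response: no mocks, just a canned answer.
val stubApi = object : UserApi {
    override suspend fun fetchUser(id: String) = UserDto(id = id, name = "Ada")
}

class UserRepositoryTest {
    private val fakeDb = InMemoryCache<String, User>()
    private val repository = UserRepository(api = stubApi, cache = fakeDb)

    @Test
    fun `fetching a user runs the real mapper and populates the cache`() = runBlocking {
        val user = repository.getUser("42")

        assertEquals("Ada", user.name)       // the real mapper produced the domain model
        assertEquals(user, fakeDb.get("42")) // the fake database was written to
    }
}
```

Notice everything in that test is real except the network and the database.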

So then I move on to the UI. I like MVVM and I like making the View (i.e. Activity, Fragment, or Composable UI) as thin and dumb as possible. So, all presentation logic is in the ViewModel, which exposes just one property, a stream of ViewState (could be an Rx Observable or coroutine Flow). The ViewState exactly describes the appearance of the View, so that all the View has to do is bind to it. The VM also has a set of functions in its interface that represent user actions, like emailFieldUpdated and submitButtonClicked.
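Sketched out, such a VM might look like this (LoginViewModel and its state are invented names; in a real app it would extend androidx.lifecycle.ViewModel and launch in viewModelScope, but injecting a plain CoroutineScope keeps the sketch JVM-only):

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.launch

// The ViewState exactly describes the View; binding to it is all the View does.
data class LoginViewState(
    val email: String = "",
    val submitEnabled: Boolean = false,
    val greeting: String? = null
)

class LoginViewModel(
    private val repository: UserRepository,
    private val scope: CoroutineScope // viewModelScope in a real app
) {
    private val _state = MutableStateFlow(LoginViewState())
    val state: StateFlow<LoginViewState> = _state // the single exposed property

    fun emailFieldUpdated(email: String) {
        _state.value = _state.value.copy(
            email = email,
            submitEnabled = "@" in email // trivial validation, for the sketch only
        )
    }

    fun submitButtonClicked() {
        scope.launch {
            // Using the email as the lookup key purely to keep the sketch small.
            val user = repository.getUser(_state.value.email)
            _state.value = _state.value.copy(greeting = "Hello, ${user.name}")
        }
    }
}
```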

In terms of testing this, I'd probably NOT mock any Repositories the VM uses; instead, use real Repositories, albeit once again with a fake database and stubbed server responses. If you're clever, you can reuse the server stubs from the Repository tests.
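Continuing the sketch above, the VM test wires in a real UserRepository and reuses the same stubApi from the Repository test; nothing is mocked (this assumes the kotlinx-coroutines-test library, where runTest and advanceUntilIdle come from):

```kotlin
import kotlinx.coroutines.test.advanceUntilIdle
import kotlinx.coroutines.test.runTest
import org.junit.Assert.assertEquals
import org.junit.Test

class LoginViewModelTest {

    @Test
    fun `submitting greets the user with real repository data`() = runTest {
        // Real Repository, fake database, stubbed network -- all reused from above.
        val repository = UserRepository(api = stubApi, cache = InMemoryCache())
        val viewModel = LoginViewModel(repository, scope = this)

        viewModel.emailFieldUpdated("ada@example.com")
        viewModel.submitButtonClicked()
        advanceUntilIdle() // let the launched coroutine finish deterministically

        assertEquals("Hello, Ada", viewModel.state.value.greeting)
    }
}
```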

Now, some may think this is no longer a unit test but an integration test, and maybe they're right, but I'm not interested in isolation per se. I don't want UI tests that think everything is hunky-dory when the data layer is broken. What I want is tests that are deterministic and fast, and that break if (and only if) the code is actually broken. This approach satisfies all those requirements while side-stepping the whole "2 unit tests, 0 integration tests" problem. These tests also tend to look like mini UATs that test "under the skin" of the UI. They would be the easiest tests for product managers to understand.

And in a typical MVVM/Repository architecture, all of the above would be pretty much the only "unit tests" I would bother writing. In a way, although I generally agree with the Testing Pyramid in terms of the number of tests you end up with, I think, paradoxically, the lower a test sits on that pyramid, the more carefully you have to weigh its cost/benefit/ROI and justify its existence. The benefit becomes more abstract, and you really have to think about what value a test brings in order to justify its cost.
