The people who say this are right about the conclusion, in spite of being wrong about the reasons (in my not-so-humble opinion). All this OOP purity stuff that the article goes on and on about is BS.
But enough of that. Let me make an observation and a practical recommendation.
The observation is that lots of Java programmers just put getters and setters into their classes by default. They don't stop and ask whether the particular class should have them—they just boilerplate them in. Somebody has told them that this is "good practice," and they've just adopted it. The IDEs abet them in this—they all provide some facility to generate this stuff, even putting "helpful" documentation comments on the code:
private Foo foo;

/**
 * Gets the {@link Foo}.
 *
 * @return the {@link Foo}.
 */
public Foo getFoo() {
    return foo;
}

/**
 * Sets the {@link Foo}.
 *
 * @param foo the {@link Foo}.
 */
public void setFoo(Foo foo) {
    this.foo = foo;
}
Recommendation: if that description applies to you, give the following a shot instead:
- Make every field private final by default. Don't make a field non-final unless the field's value will concretely need to be modified multiple times during the object's lifetime. If it would be set only once and never modified again, really try your best to make it final.
- The way final instance fields work in Java is that they must be initialized in the class's constructors—the class won't compile otherwise. So go ahead and do that.
- Since final fields can't be modified once the object is constructed, it doesn't make sense to have setters for them.
- For your remaining non-final fields, see if you can isolate them to short-lived, "throwaway" objects that are constructed by more permanent, immutable ones that contain all the other necessary values in final fields.

This has lots, lots of upsides. I will name two:
- The more of an object's fields that are final, the more that the object's future behavior is determined when it's constructed. This means that the object's behavior remains much more predictable even in a very complex codebase. If you construct an object, pass it to some other code, and then use the object after that, the number of ways the stuff "in between" can modify the object is limited, so you can much more successfully predict how the object will behave on the later call.

This sort of problem bit me just last week when I was modifying a third-party open source library—I changed a class's constructor to modify the way that a certain field was initialized, and then when I tested the class it behaved the same way as before. It turned out that a couple of methods in one of the classes that consumed the object were calling the field's setter and putting the field back in the state that the original constructor did, because there was a code path where the field in question was not initialized.
The downsides are: