Again, you’re speaking from the point of view of, “we need to stop idiots.”
I am speaking from the point of view, “we need to help people.”
Java prevents accessing private members, except through reflection. Is everyone going crazy abusing this loophole? No. But on rare occasion it is of tremendous benefit. Mostly, it is well-maintained, excellent libraries that use this loophole, not ordinary, "imbecile" developers.
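For concreteness, here is a minimal sketch of that loophole; the class and field names are made up for illustration:

```kotlin
import java.lang.reflect.Field

// `Config` and its `secret` member are purely illustrative.
class Config {
    private val secret: String = "hidden"
}

fun main() {
    // Reflection bypasses the `private` check -- the "loophole".
    val field: Field = Config::class.java.getDeclaredField("secret")
    field.isAccessible = true
    println(field.get(Config()))   // prints "hidden"
}
```

This is exactly the kind of escape hatch that well-behaved frameworks (serializers, dependency injectors) rely on, and that the JVM does not offer for `final`.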
Unfortunately, the JVM has no loophole for forcing a derivation of a final class, which hurts, for example, Mockito.
You need to address the question head-on. How should developers, framework creators, etc., accomplish the things they're used to doing in Java? Should they use interfaces everywhere in order to support mocking? I'm sure there are workarounds, but please go into the details.
I agree, closed by default is the way to go.
I feel that most "correct" code is walled off from mistakes. In my use cases I very rarely run into open classes. Typically where an open class would be is an abstract class. The easiest route should produce the most correct code. As for wanting to extend some library's classes… isn't this what extensions and non-sugar delegation are for? (Hint: It might be nice to have sugary delegation for all types.)
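To sketch what I mean, assuming a final `Logger` class standing in for some third-party type (all names here are made up):

```kotlin
// `Logger` stands in for a final class from a library you can't modify.
class Logger {                                   // final by default in Kotlin
    fun log(msg: String): String = "LOG: $msg"
}

// Option 1 -- extension function: add behavior without subclassing.
fun Logger.logLoudly(msg: String): String = log(msg.uppercase())

// Option 2 -- delegation: wrap the instance behind an interface you own.
interface Sink { fun log(msg: String): String }

class LoggerSink(private val logger: Logger) : Sink {
    override fun log(msg: String) = logger.log(msg)
}

// `by` is Kotlin's delegation sugar: forwarding methods are generated,
// and you override only what you need.
class PrefixedSink(private val inner: Sink) : Sink by inner {
    override fun log(msg: String) = inner.log("[app] $msg")
}

fun main() {
    println(Logger().logLoudly("hi"))                       // LOG: HI
    println(PrefixedSink(LoggerSink(Logger())).log("hi"))   // LOG: [app] hi
}
```

The catch, and the reason for my "hint", is that the `by` sugar only works against interfaces, not against the library's concrete class itself.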
You might think this means I didn't support public by default, but I very much did. Kotlin, unlike Java, makes it easy to write "correct" code because val is so convenient, all types are objects, and thus behavior can be extended later. In my use cases I run into things I want public most of the time, and a keyword or two here or there is nothing compared to the ceremony of public everywhere. Good riddance!
I’m not speaking from the “we need to stop idiots” viewpoint. I’m just saying that compromises don’t help.
A language where all methods are open by default is a sensible design, as demonstrated by Java. A language where all methods are closed by default is also a sensible design, as demonstrated by C#. A design where the designers say “we really want all methods to be closed by default, but we really want all of them to sometimes be open, so we say that the method is closed unless the user says ‘pretty please’” is the worst of both worlds. It lacks the safety benefits of the “all closed” design, and it lacks the ease-of-use benefits of the “all open” design.
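For readers less familiar with the "pretty please" design under discussion, Kotlin's actual rules look like this (class names are illustrative):

```kotlin
// Classes and members are final unless marked `open`, and an
// override is itself open unless re-marked `final`.
open class Base {
    open fun greet(): String = "base"
    fun id(): Int = 1          // final: subclasses cannot override this
}

class Derived : Base() {       // Derived is itself final again
    override fun greet(): String = "derived"
}

fun main() {
    val b: Base = Derived()
    println(b.greet())         // prints "derived"
}
```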
As for mocking, my personal opinion (I’m not saying this as a decision-maker; just as a developer with some experience) is that support for mocking arbitrary classes is a non-goal. I’m a firm believer in the Chicago school of TDD. Most of the interaction-based tests that I have seen end up as a rephrasing of the code under test, where every method call is replaced with a Mockito assertion. I fail to see how such tests can give any new information about the system, and they also require updating every time the code under test is changed.
Now, of course, the use of mocking is appropriate for heavy external components such as databases. However, such components are normally abstracted through interfaces anyway.
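As a sketch of that approach, with entirely hypothetical names: once the heavy component sits behind an interface, a test can pass a hand-rolled fake and no mocking of final classes is needed at all.

```kotlin
// Hypothetical abstraction over a database.
interface UserRepository {
    fun findName(id: Int): String?
}

// A trivial in-memory fake for tests -- no mocking framework required.
class InMemoryUserRepository(private val users: Map<Int, String>) : UserRepository {
    override fun findName(id: Int): String? = users[id]
}

// Code under test depends only on the interface.
fun greetingFor(repo: UserRepository, id: Int): String =
    repo.findName(id)?.let { "Hello, $it" } ?: "Unknown user"

fun main() {
    val fake = InMemoryUserRepository(mapOf(1 to "Ada"))
    println(greetingFor(fake, 1))   // Hello, Ada
    println(greetingFor(fake, 2))   // Unknown user
}
```

This is also a state-based ("Chicago school") test: it asserts on results, not on which methods were called.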
End of rant, sorry.
The combination of “public by default” and “closed by default” feels a bit strange though. Java is permissive in both regards, C# (more) restrictive in both. Kotlin is permissive in one and restrictive in the other.
Thank you for your lengthier reply. It demonstrates respect for the community.
[Semi-closed classes by default] lacks the safety benefits of the “all closed” design, and it lacks the ease-of-use benefits of the “all open” design.
What do you mean by “safety”? If you’ve thought it through and know what you mean by it, then you should explain it. Here are the three possibilities that come to my mind, which I’ve addressed:
- Safety for well-meaning developers: preventing routine coding mistakes and runtime exceptions.
- Safety against bad developers: preventing inexperienced/lazy devs coding things in a way that causes maintenance headaches down the line.
- Safety against hackers: preventing malicious derived classes from co-opting program behavior or subverting security guarantees.
Given Kotlin’s goals of being a language for industry and having excellent interoperability with existing code, I’m concerned about the added friction of forcing developers to think about finality in places where they never have before. Frameworks that depend on proxies, class generation, etc. will all be impacted. Why be so careful to be compatible with Java visibility, but not finality behavior?
In my experience, “open by default” has far more benefits, and fewer risks, than the alternative.
I have been inconvenienced a lot more by private and final elements than I have been by accidentally breaking classes that were not meant to be subclassed or methods that were not meant to be overridden. The “fragile base class” problem is way overblown.
It’s one of these situations where the adage “Design for subclassing or prohibit it” sounds great on paper but is terrible in practice.
“Closed by default” smells a lot like the noble idea of checked exceptions to me; I hope it won’t turn out that bad.
I’m not sure why you’re saying that Kotlin is so careful about being compatible with Java visibility. Kotlin doesn’t support Java’s package local visibility, and it introduces internal visibility which is not supported by Java (and uses somewhat ugly workarounds to ensure that internal APIs declared in Kotlin can’t be called by arbitrary Java code).
Once again: I’m not defending the “closed by default” decision right now. By “safety”, I essentially mean “whatever arguments have been put forward by people who advocate for ‘Design for subclassing or prohibit it’”.
My personal position is that the issue is less of a big deal than it is presented to be. In classic Java, creating a class with a bunch of methods is essentially the only way to structure your code, and creating a hierarchy of classes with overridden methods is essentially the only way to customize the behavior of a class. When I’m programming in Kotlin, I simply don’t find myself creating class hierarchies nearly as much as I did in Java. Instead, I put the logic into top-level functions, and I customize the behavior of pieces of code by passing around lambdas. If you structure the code that way, the “open by default” question becomes moot - you can’t override top-level functions anyway, and you can pass your own lambdas to a library class without extending it.
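A small sketch of that style, with made-up names: rather than an open class with an overridable hook method, the customization point is a lambda parameter.

```kotlin
// Hypothetical example: the `onFailure` lambda replaces what in classic
// Java would be a protected method that subclasses override.
class Retrier(
    private val attempts: Int,
    private val onFailure: (attempt: Int, e: Exception) -> Unit = { _, _ -> }
) {
    fun <T> run(block: () -> T): T {
        var last: Exception? = null
        repeat(attempts) { attempt ->
            try {
                return block()   // non-local return: `repeat` is inline
            } catch (e: Exception) {
                last = e
                onFailure(attempt, e)   // caller-supplied customization
            }
        }
        throw last!!
    }
}

fun main() {
    var calls = 0
    val result = Retrier(3).run {
        calls++
        if (calls < 3) throw RuntimeException("boom") else "ok"
    }
    println(result)   // ok
}
```

`Retrier` can stay final: a caller customizes it by passing a different lambda, not by subclassing it.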
As for frameworks that depend on proxies and code generation, my understanding is that they are usually only applied to specific classes in your codebase (e.g. entities), and you know in advance which classes those are going to be. One idea we discussed to make it easier to use such frameworks with Kotlin is the “allopen” modifier, which would mark the class, as well as all the methods inside it, as open, so that you wouldn’t need to repeat “open” on every method.
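To illustrate the repetition the idea would remove (the `allopen` syntax below is hypothetical, as described; the class is made up):

```kotlin
// Today, opting a class into proxying means repeating `open` everywhere:
open class Customer {
    open var id: Long = 0
    open var name: String = ""
    open fun validate(): Boolean = name.isNotBlank()
}

// The discussed `allopen` modifier (hypothetical syntax) would collapse
// this to a single keyword on the class:
//
//   allopen class Customer {
//       var id: Long = 0
//       var name: String = ""
//       fun validate(): Boolean = name.isNotBlank()
//   }

fun main() {
    println(Customer().apply { name = "Ada" }.validate())   // true
}
```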
I am strongly in favor of closed by default. If a class is open by default, then its type won’t present a compiler-enforced contract anymore. In Java, if you don’t mark all your classes final explicitly, anyone can extend your types to behave incorrectly, so their contracts are enforced only by documentation instead of by the compiler. I think that completely compile-time validated contracts are much more important than the convenience of being able to extend anything.
+1 to Cedric’s argument in favor of open by default. I have been bitten many times trying to get some legacy third party library to work in a new environment, say as part of a system migration, only to find that they followed the best practice of making all the methods final, so we end up having to do something crazy like decompile the jar, modify the decompiled code and then repackage it instead of being able to just create a subclass that acts as an adapter.
Like a lot of Java’s safety mechanisms, final-by-default ends up being too aggressive sometimes, and people find themselves working around it (like the decompile/recompile trick). Hot-patching libraries is a useful ability that people want for a reason. It is not the library developer’s job to save me from myself by throwing up some sort of pseudo-DRM around patching their code.
But people won’t stop doing this, because some consider it a best practice. So I wonder if this is better fixed at the JVM level rather than the language level. A Java agent that takes a config file and simply modifies the class at load time to remove the finality would not be too hard to write; then, if you have a framework you want to hack, you can just start using the agent and avoid having to patch the code.
All this would require is an ability to forcibly disable the compiler error you get from attempting to override a final method, perhaps with an annotation.
The other downside is that it means an additional JVM argument. Those can be fiddly, especially to propagate from Gradle → IntelliJ. But once done, it’s done.
It’s worse than that. At least with checked exceptions there’s an easy workaround - you can just wrap it in a RuntimeException.
With closed-by-default, you’re stuck with some annoying options and nothing easy. Bytecode patching? Make an issue/submit a PR and wait potentially days/weeks/months for a release?
Can people give examples of subclasses breaking things? There’s quite a bit of talk about “Design for subclassing or prohibit it” but I can’t recall encountering issues due to this.
All those classes in package com.sun… (JDK) that are extended or used directly by Java developers could be an example of such an issue.
From what I see, those in favor of “open by default” are thinking of workarounds for libraries or third-party classes. The question is: should we design a language (or tool) for workarounds or for proper usage?
I personally think final, or closed by default, is the better choice.
The issue with com.sun.*/sun.* packages is that they’re visible/accessible in general. Can you point to an example involving extending an internal class? An issue involving merely using an internal class is not related to final/open.
should we design a language (or tool) for workarounds or proper usage?
Closed by default also doesn’t feel like designing for proper usage.
Closed by default in a standalone project: no impact. If I want to extend a class and realise it’s not open, I assume it’s because the developer didn’t think about it and left the default. I add the open keyword.
Open by default in a standalone project: if I want to extend a class and realise it’s final, I assume there’s a good reason for it, because someone chose to add the non-default final keyword, and I find out why. If there is no reason for it, I remove the final keyword.
All this achieves is changing the default to something which requires either very little effort to fix in a standalone project, or much annoyance in a library. I’m willing to bet that most developers won’t think about whether a class needs to be final when creating it, leaving most classes in Kotlin libraries final for no real reason.
Let’s build a tool called JarOpener that opens things up. Final classes are made non-final, private members are made public. Even fields can be made public. YOLO.
Library designers can lock things down as they see fit for the long term evolution of their APIs.
App developers that demand arbitrary monkey patching aren’t harmed. When they get stuck on an API that’s too final or too private, they run JarOpener and ‘void the warranty’, opting out of easy upgrades going forward.
This aligns everyone’s incentives.
App developers explicitly choose to go off-road, and should notify library maintainers when they do so, so that the hidden trail can be paved.
And library developers aren’t paralyzed trying to move a broad open API forward in a way that’s compatible for all possible monkey patches.
Interestingly, that’s kind of what Google did with Android testing recently: offer an android.jar where all the final modifiers have been removed, allowing for maximal testing power.
No matter which direction this goes, the discussion here gives me great optimism for a vibrant community going forward.
This is correct - some subset of your own application logic is subject to dynamic proxies. Unfortunately this set isn’t always as fixed as it might seem. I’ll give a couple examples from Spring. (For those that don’t like Spring: I beg of you to put your dislike aside for a moment and see the general applicability to any other framework that uses dynamic proxies for useful value-adding purposes)
@Configuration annotated classes must always be open, because they are invariably proxied. This is a rather quick rule that any developer working with Spring and Kotlin will learn early on.
Pretty much any other injectable resource in Spring may or may not be proxied, depending on what set of features you happen to be using. So if I start with a simple Boot application with only Spring MVC, I don’t have to open my @RestController classes. If I wish to add something like Hystrix fallbacks or Spring AOP, something that was previously not proxied may now be proxied, depending on whether the feature applies to it.
In both cases, I am most allergic to the fact that I discover the finality problem at runtime. To me, the more a language can prove to me the correctness of my code before runtime, the closer it is to perfection. In Kotlin, for example, I don’t find myself writing as many defensive unit tests around null behavior because Kotlin’s null handling eliminates certain classes of problems in this space. By enforcing closed-by-default, Kotlin opens up a new class of problems that I need to add tests for, lest I be surprised at runtime.
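A simplified, self-contained stand-in for what a subclassing (CGLIB-style) proxy generator effectively checks at startup; the class names are made up:

```kotlin
import java.lang.reflect.Modifier

// A proxy built by subclassing cannot target a final class,
// so frameworks must reject such classes -- but only at runtime.
fun ensureProxyable(cls: Class<*>) {
    require(!Modifier.isFinal(cls.modifiers)) {
        "${cls.name} is final and cannot be proxied by subclassing"
    }
}

class FinalService        // final by default in Kotlin
open class OpenService

fun main() {
    ensureProxyable(OpenService::class.java)        // passes silently
    try {
        ensureProxyable(FinalService::class.java)   // discovered only at runtime
    } catch (e: IllegalArgumentException) {
        println(e.message)
    }
}
```

Nothing in the compile step warns you about `FinalService`; the failure surfaces only when the framework starts up, which is exactly the class of problem described above.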
I think this idea of monkey patching third-party libraries is the basis of the theoretical argument for closed-by-default, but it’s the one I see least carried out in practice.
A more detailed story can be found in this post.