It’s all about immutability
I was having a pint with some Java peeps the other week and was hopelessly trying to explain why Java 8 is not a proper functional language (I’m thinking about
Haskell and Scala…not Clojure because this text is already in brackets and I don’t want any more of them).
We were 2 pints in and I think I opened with something like, “it’s all about immutability”. This statement provoked absolutely no response from those gathered. In fact, I wittered on for another 2 minutes without eliciting so much as an eyebrow raise. I tried to backtrack with, “you remember what Josh Bloch says in ‘Effective Java’ about immutability. It never made sense to me at the time but now I see why”… still nothing.
The reason it never made sense to me is that writing immutable classes in Java was wholly error prone. The developer had to make sure all fields were final, that getters were defined properly and that no setters crept in. Then you had to make sure the equals and hashCode methods were correctly implemented. Admittedly you could get the IDE to generate all this boilerplate for you, but you still had to go through the bore of doing it. Enter Scala’s case class. No need to generate boilerplate. No temptation to introduce mutability into the class – because you can’t.
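A minimal sketch of what you get for free (the Point class and its fields are my own invented example):

```scala
// A hypothetical Point class. The compiler generates equals, hashCode,
// toString and copy, and the fields are immutable vals.
case class Point(x: Int, y: Int)

val p = Point(1, 2)
val shifted = p.copy(y = 3) // "updating" yields a new instance; p is untouched
// p.x = 5                  // does not compile: reassignment to val
```

Compare that single line of class definition with the page of final fields, accessors, equals and hashCode the equivalent Java would need.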
Why is immutability key? Well, there’s the quick response that it makes concurrency safer and hence easier. But why is that? If concurrency is safer, it’s most likely because your code has fewer side effects. I would go on to argue that if there are fewer side effects then your code is most probably easier to read and easier to reason about. I’m sure many would disagree, as Scala code can have the unfortunate propensity to never declare variables. A lack of variables can create dense code which takes more than a cursory glance to understand – but you do get used to this, trust me.
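To illustrate that variable-free density (the numbers here are a made-up example of mine): a pipeline of pure transformations in which nothing is ever reassigned.

```scala
// Sum the squares of the even numbers -- no var, no mutation,
// each step returns a fresh immutable value.
val xs = List(1, 2, 3, 4, 5)
val result = xs.filter(_ % 2 == 0) // List(2, 4)
  .map(n => n * n)                 // List(4, 16)
  .sum                             // 20
```

Terse, yes, but there is no hidden state to track: each intermediate value is a new immutable list, so the whole computation can be read left to right.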
If code is easier to reason about then, I would argue, it follows that more complex business logic can be written. As Scala reaches higher levels of abstraction (e.g. with Futures), it becomes possible to write code that does more, faster and better.
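As a sketch of that kind of abstraction (the fetch functions here are invented stand-ins for remote calls, not any real API):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Invented stand-ins for two slow, remote lookups.
def fetchPrice(id: Int): Future[BigDecimal] = Future(BigDecimal("1.50") * id)
def fetchQuantity(id: Int): Future[Int]     = Future(id + 10)

// Kick both off so they run concurrently, then combine the
// immutable results declaratively -- no locks, no shared state.
val priceF = fetchPrice(1)
val qtyF   = fetchQuantity(1)
val total: Future[BigDecimal] =
  for {
    price <- priceF
    qty   <- qtyF
  } yield price * qty

// Await only to demonstrate; real code would keep composing.
println(Await.result(total, 2.seconds)) // 1.50 * 11 = 16.50
```

Because the values flowing through the Futures are immutable, combining concurrent results is just ordinary function composition, with none of the defensive copying or synchronisation mutable data would require.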
Immutability makes even more sense when you consider microservices or technologies like Akka and Spark. If units of data are being sent to remote systems then they must have gone through some serialisation/deserialisation process. This marshalling and unmarshalling of data inadvertently enforces a kind of immutability, as the serialised form must capture a finite, self-contained object graph. So what? If you are writing code that works with immutable data then you are most likely writing code that will work with all these newfangled data processing systems. Which is a good thing!
An imperative programmer could counter with the argument that immutability is not “performant” and causes extra garbage collection. “Sorry, I see you are working with a list of a few thousand elements. Are you having some performance issues? Can your 486 not handle all that object creation and GC? …oh no wait, you have a 4GHz i7. Yes, I forgot it’s not 1990 anymore”. I have no doubt that working with immutable data incurs much more object creation and copying. But if you consider my point about how more complex business logic becomes easier to implement and understand, then which is more expensive?
- A new Xeon server
- A team of developers trying to work out how some low-level, imperative code works (i.e. Java)