Functional Programming Is a Silver Bullet
Having written a decent amount of code in F# (our managed code base is largely written in F#), a fair amount in Scala, and a smattering in Miranda and Scheme, I have come to the conclusion that functional programming is a huge benefit in software development.
Functional programming leans towards a style that encourages:
- Solution through function composition
- More thorough type safety than typical OO languages
- Minimal dependence on side-effects
- Expressive language integration of co-routines for data/sequence definition (e.g., for and seq comprehensions)
- Better compiler optimization potential through guarantees of immutability
- Simpler support for parallel computing
- Automatic generalization of code through type inference
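As a small sketch of that last point (using nothing beyond core F#; the function is made up for illustration), a definition written with no type annotations is inferred at its most general type:

```fsharp
// Automatic generalization: no annotations, yet the compiler infers
// the most general type, ('a -> 'a) -> 'a -> 'a.
let twice f x = f (f x)

// One definition works at any type:
let squaredTwice = twice (fun n -> n * n) 3    // 81
let shouted = twice (fun s -> s + "!") "hi"    // "hi!!"
```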
When you consider productivity in software engineering, the gains from using any given language come down to making certain classes of bugs impossible, reducing complexity, shortening the write-test cycle, ease of mapping common abstractions, ease of reading code, ease of maintaining existing code, and so on.
F# delivers on most of these. For example, using discriminated unions to represent nodes in a parse tree makes it impossible to write a tree walker that misses a new node type unless you intentionally skip it in a default case. By comparison, if you model this in C# with inheritance, a missed node type trips a runtime error (hopefully) instead of a compile-time error.
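A minimal sketch of the idea (the node names here are illustrative, not from any real parser):

```fsharp
// A hypothetical parse-tree type; the cases are made up for illustration.
type Expr =
    | Num of int
    | Add of Expr * Expr
    | Neg of Expr

// A walker that handles every case. If a new case (say, Mul) is later
// added to Expr, this match produces an "incomplete pattern matches"
// warning (FS0025, commonly promoted to an error) at compile time,
// unless a wildcard case deliberately swallows the new node type.
let rec eval expr =
    match expr with
    | Num n -> n
    | Add (l, r) -> eval l + eval r
    | Neg e -> -(eval e)
```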
One of the most touted wins is the encouraged immutability which, by removing a whole class of side-effects, reduces complexity in code. This is especially true in multi-threaded code, where possible side-effects can create a combinatorial explosion of conditions to test.
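Here is a small sketch of what default immutability looks like in practice (the Point type is invented for the example):

```fsharp
// Records are immutable by default: there is no way to assign to p1.X.
type Point = { X: int; Y: int }

let p1 = { X = 1; Y = 2 }

// "Updating" a field builds a new value; p1 is untouched, so no other
// thread can ever observe a half-modified Point.
let p2 = { p1 with X = 10 }
```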
Functional Programming Is NOT a Silver Bullet
One of the typical idioms of functional programming is solution through composition. This is a fantastic way to solve problems. It provides standard ways to iterate, aggregate, filter, and search, each parameterized by a function that makes the key choices or operations. The problem is that the barrier to creating a new function is too low. I know – crazy talk. It’s functional programming, right? The end result is a style where the key function is written inline, which leaves you with a code base containing hundreds of single-use functions that should have been shareable. I’m not saying it’s impossible to share the functions, just that the barrier to writing an inline function is so low that it discourages you from asking the larger questions: “have I written this already?” “should this be a shared function?” “how should I organize my code?”
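To make the complaint concrete, here is the idiom in miniature (a made-up example):

```fsharp
// The inline form: the predicate lives and dies at this call site,
// and nothing nudges you to ask whether it already exists elsewhere.
let evens = List.filter (fun n -> n % 2 = 0) [1; 2; 3; 4]

// The shareable form takes one extra step, which is exactly the
// friction the inline version lets you skip.
let isEven n = n % 2 = 0
let evens' = List.filter isEven [1; 2; 3; 4]
```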
Partial function application is fabulous. I love it. It is also the tool of the devil, especially when refactoring. For me, the gains of partial function application are lost in the common task of refactoring a function to take more or fewer arguments. With type inference in play, I’ve had code compile that would never compile in a typical procedural language. I suppose I could rewrite the functions that I never intend to partially apply (which is most of them) to take a tuple of arguments as a single argument, but ugh.
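Here is a sketch of the hazard (the names are invented): drop an argument at a call site during a refactor, and instead of a compile error you get a perfectly legal partial application.

```fsharp
// A two-argument function:
let scale factor x = factor * x

// A call site that forgets the second argument still compiles;
// it is simply a partial application of type int -> int.
let probablyABug = scale 3

// The mistake only surfaces later, wherever the value is finally applied.
let revealed = probablyABug 10   // 30
```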
Unfortunately, F# also encourages wearing your underwear on the outside. That is, if you use the main aggregating types (records, discriminated unions, tuples), you’re making the representation of your type public, which flies in the face of what we’ve learned in software engineering: encapsulation is a very good thing. No, this is not a straw man argument, because yes, I do know that you can make all record fields private (which only works within a module, by the way) and expose public accessors (as well as just using an object). The point is not whether it’s possible; it’s what the language encourages you to do. The end result is that unless I’m scrupulous about keeping a front-facing API detached from the underlying implementation, I (and all my clients) can look forward to cascading compiler errors. This is also a problem with the collection types in F# (array, list, seq): there is no common interface between them, even though there is a great deal of common behavior (fold, map, find, etc.). I understand the reasoning behind it: specific implementations will be more efficient for the task at hand. But this also gets in the way of hiding implementation details should they need to change, and hiding implementation details gets in the way of solution by composition. Again, I know that you could pick your own generalized interface and adapt all the types to it. Heck – for grins, I wrote a generic version of fold that operates on a discriminated union of collection types:
type gentype<'a> =
    | AList of 'a list
    | AArr of 'a array
    | ASeq of 'a seq

let genfold (fn: 'seed -> 'elem -> 'seed) (seed: 'seed) (coll: gentype<'elem>) =
    match coll with
    | AList al -> List.fold fn seed al
    | AArr ar -> Array.fold fn seed ar
    | ASeq asq -> Seq.fold fn seed asq

// e.g., genfold (+) 0 (AList [1; 2; 3]) evaluates to 6
The point is not that it can’t be done, but whether it goes against the grain of what the language wants you to do.
If I need to refactor code from list to seq and I have not hidden this API detail, I’ve now added a maintenance burden on myself as well as everyone else who consumes my code. Have I undone the rest of the gains? I hope not, but changes of this variety are hugely irritating to me as a consumer and even more so as a producer. To give you an idea: in my 10 years at Atalasoft, I can recall breaking precisely one public API, and I put in a warning that it was going to happen more than a year ahead of time. The number of support calls about that API: 0.
Ultimately, this is not an argument of “ZOMG language X is TeH better than language Y”, but more an understanding that while there are a number of good things that come with any new language, there are also a number of things that are lost. For nearly every benefit of F#, you can do the same thing with a procedural language (this comes part and parcel with being Turing complete), but procedural languages encourage a different style which in turn encourages a different set of mistakes (unnecessary mutability or poor control of side-effects, for one) that in turn damage productivity.
Sadly, I firmly believe that there is no silver bullet, nor will there be, because there are, and always will be, complex problems. But perhaps functional programming is a silver-plated bullet.
About the Author
Steve was with Atalasoft from 2005 until 2015. He was responsible for the architecture and development of DotImage, and one of the masterminds behind Bacon Day. Steve has over 20 years of experience with companies like Bell Communications Research, Adobe Systems, Newfire, and Presto Technologies.