Finding a Method to Run
Don't go out of your way to justify stuff that's obviously cool. Don't ridicule ideas merely because they're not the latest and greatest. Pick your own fashions. Don't let someone else tell you what you should like.
Larry Wall, "Perl, the first postmodern computer language"—https://www.perl.com/pub/1999/03/pm.html/
The Perl community has a mantra: TIMTOWTDI (pronounced "Tim Toady"). It stands for "There Is More Than One Way to Do It" and reflects the design principle that the language should enable its users to write programs in the way in which they are thinking and not in the way that the language designer thought about it. Of course, TIMTOWTDI is not the only way to do it, and the Zen of Python—http://wiki.c2.com/?PythonPhilosophy takes a different (though not incompatible) tack:
There should be one-- and preferably only one --obvious way to do it.
So, how is a method found? There is more than one way to do it. The first, and easiest to understand, is that an object has a method with the same name as the message selector, and the language assumes that when you send that message, it's because you want to invoke that method. Here's how that looks in Javascript:
const foo = {
  doAThing: () => { console.log("I'm doing a thing!"); }
};
foo.doAThing();
The next way is the most general; it doesn't exist in all languages, and is made difficult to use in some. The idea is to have the object itself decide what to do in response to a message. In Javascript that looks like this:
const foo = new Proxy({}, {
  get: (target, prop, receiver) => (() => {
    console.log("I'm doing my own thing!");
  }),
});
foo.doAThing();
While there are many languages that don't have syntax for finding methods in this way, it's actually very easy to write yourself. We saw in the section on functional programming that an object is just a function that turns a message into a method, and so any language that lets you write functions returning functions will let you write objects that work the way you want them to. This argument is also pursued in the talk Object-Oriented Programming in Functional Programming in Swift—https://www.dotconferences.com/2018/01/graham-lee-object-oriented-programming-in-functional-programming-in-swift.
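As a sketch of that idea (the names here are invented for illustration), an object can be written as a function that takes a message selector and returns the corresponding method:

```javascript
// Hypothetical sketch: an "object" is a function from selector to method.
const makePoint = (x, y) => (selector) => {
  // The method lookup is just this function's own decision.
  switch (selector) {
    case "x": return () => x;
    case "y": return () => y;
    case "distanceFromOrigin": return () => Math.sqrt(x * x + y * y);
    default: throw new Error(`${selector} not understood`);
  }
};

const p = makePoint(3, 4);
console.log(p("distanceFromOrigin")()); // 5
```

Sending a message is calling the function with a selector; invoking the found method is calling the result. Nothing here relies on any object syntax the language supplies.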
Almost all programming languages that have objects have a fall-through mechanism, in which an object that does not have a method matching the message selector will look by default at another object to find the method. In Javascript, fully bought into the worldview of Tim Toady, there are two ways to do this (remember that this is already the third way to find methods in Javascript). The first, classic, original recipe Javascript way, is to look at the object's prototype:
function Foo() {}
Foo.prototype.doAThing = () => { console.log("Doing my prototype's thing!"); };
new Foo().doAThing();
And the second way, which in some other languages is the only way to define a method, is to have the object look at its class:
class Foo {
  doAThing() { console.log("Doing my class's thing!"); }
}
new Foo().doAThing();
A little bit of honesty at the expense of clarity here: these last two are actually just different syntax for the same thing; the method ends up being defined on the object's prototype and is found there. The mental model is different, and that's what is important.
But we can't stop there. What if that object can't find the method? In the prototype case, the answer is clear: it could look at its prototype, and so on, until the method is found, or we run out of prototypes. To an external user of an object, it looks like the object has all of the behavior of its prototype and the things it defines (which may be other, distinct features, or they may be replacements for things that the prototype already did). We could say that the object inherits the behavior of its prototype.
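The chain-walking behavior is easy to see with Object.create (the objects here are invented examples):

```javascript
// A three-link prototype chain: child -> parent -> grandparent.
const grandparent = { a() { return "grandparent's a"; } };

const parent = Object.create(grandparent); // parent's prototype is grandparent
parent.b = function () { return "parent's b"; };

const child = Object.create(parent); // child's prototype is parent
child.b = function () { return "child's replacement b"; }; // replaces parent's b

console.log(child.a()); // found two links up the chain
console.log(child.b()); // found on child itself, shadowing parent's version
```

To an external user, child appears to have both behaviors: a, inherited from its prototype's prototype, and b, its own replacement for the inherited version.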
The situation with inheritance when it comes to classes is muddier. If my object's class doesn't implement a method to respond to a message, where do we look next? A common approach, used in early object environments such as Simula and Smalltalk, and in Objective-C, Java, C#, and others, is to say that a class is a refinement of a single other class, often called the superclass, and to have instances of a class inherit the behavior defined for instances of the superclass, and its superclass, until we run out of superclasses.
But that's quite limiting. What if there are two different classes of object that one object can be seen as a refinement of? Or two different classes that describe distinct behaviors it would make sense for this object to inherit? Python, C++, and others allow a class to inherit from multiple other classes. When a message is sent to an object, it will look for a method implementation in its class, then in...
...and now we get confused. It could look breadth-first up the tree, considering each of its parents, then each of their parents, and so on. Or it could look depth-first, considering its first superclass, and its first superclass, and so on. If there are multiple methods that match a single selector, then which is found will depend on the search strategy. And of course, if there are two matching methods but with different behavior, then the presence of one may break features that depend on the behavior of the other.
Attempts have been made to get the benefits of multiple inheritance without the confusion. Mixins—https://dl.acm.org/citation.cfm?id=97982 represent "abstract subclasses," which can be attached to any superclass. This turns a single-superclass inheritance system into one that's capable of supporting a limited form of multiple inheritance, by delegating messages to the superclass and any mixins.
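In Javascript, one common way to sketch a mixin as an "abstract subclass" is a function that takes a superclass and returns a new class extending it; the mixin has no fixed parent of its own (the class names here are invented):

```javascript
// A mixin: an "abstract subclass" that can be attached to any superclass.
const Serializable = (Superclass) => class extends Superclass {
  serialize() { return JSON.stringify(this); }
};

class Shape {
  constructor() { this.sides = 4; }
}

// Attach the mixin to whichever superclass we like.
class Square extends Serializable(Shape) {}

console.log(new Square().serialize()); // {"sides":4}
```

Messages not handled by the mixin still fall through to the chosen superclass, which is what makes this a limited form of multiple inheritance.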
However, this does not address the problem that conflicts will arise if multiple mixins, or a superclass and a mixin, supply the same method. A refinement of the mixin idea, called traits, introduces additional rules that avoid the conflicts. Each trait exposes the features it provides, and the features it requires, on the class into which it is mixed. If the same feature is provided by two traits, it must either be renamed in one or be removed from both and turned into a requirement. In other words, the programmer can choose to resolve the conflict themselves by building a method that does what both of the traits need to do.
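A minimal sketch of those rules (the helper and example names are invented, and this omits the "required features" half of real traits): copy each trait's features onto the class, and treat a feature provided by two traits as an error unless the class resolves the conflict with its own method.

```javascript
// Hypothetical trait composition with conflict detection.
function applyTraits(klass, ...traits) {
  for (const trait of traits) {
    for (const name of Object.keys(trait)) {
      // A method defined by the class itself resolves any conflict.
      if (Object.prototype.hasOwnProperty.call(klass.prototype, name)) continue;
      // The same feature provided by two traits is an unresolved conflict.
      if (traits.some(t => t !== trait && Object.keys(t).includes(name))) {
        throw new Error(`conflict: more than one trait provides '${name}'`);
      }
      klass.prototype[name] = trait[name];
    }
  }
}

const Walks = { move() { return "walks"; } };
const Swims = { float() { return "floats"; } };

class Duck {}
applyTraits(Duck, Walks, Swims);
console.log(new Duck().move());  // "walks"
console.log(new Duck().float()); // "floats"
```

Unlike the search-order rules of multiple inheritance, nothing here depends on the order in which traits are listed: a conflict is an error, not a silently-chosen winner.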
So, inheritance is a great tool for code reuse, allowing one object to borrow features from another to complete its task. That is the justification given for inheritance in "Smalltalk-80: The Language and its Implementation":
Lack of intersection in class membership is a limitation on design in an object-oriented system since it does not allow any sharing between class descriptions. We might want two objects to be substantially similar, but to differ in some particular way.
Over time, inheritance came to have stronger implications for the intention of the designer. While there was always an "is-a" relationship between an instance and its class (as in, an instance of the OrderedCollection class is an OrderedCollection), there came to be a subset relationship between a class and its subclasses (as in, SmallInteger is a subclass of Number, so any instance of SmallInteger is also an instance of Number). This then evolved into a subtype relationship (as in, you have only used inheritance correctly if any program that expects an instance of a class also works correctly when given an instance of any subclass of that class), which led to the restrictions that tied object-oriented developers in knots and led to "favor composition over inheritance": you can only get reuse through inheritance if you also conform to these other, unrelated requirements. The rules around subtypes are perfectly clear, and mathematically sound, but the premise that a subclass must be a subtype does not need to be upheld.
Indeed, there's another assumption commonly made that implies a lot of design intent: the existence of classes. We have seen that Javascript gets on fine without classes, and when classes were added to the language, they were implemented in such a way that there is really no "class-ness" at all, with classes being turned into prototypes behind the scenes. But the presence of classes in the design of a system implies, well, the presence of classes: that there is some set of objects that share common features and are defined in a particular way.
But what if your object truly is a hand-crafted, artisanal one-off? Well, the class design community has a solution for that: Singleton – the design pattern that says, "class of one." But why have a class at all? At this point, it's just additional work, when all you want is an object. Your class is now responsible for three aspects of the system's behavior: the object's work, the work of making the object, and the work of making sure that there is only one of those objects. This is a less cohesive design than if you just made one object that did the work.
If it were possible (as it is in Javascript) to first make an object, then make another, similar object, then more, then notice the similarities and differences and encapsulate that knowledge in the design of a class that encompasses all of those objects, then that one-off object would not need to be anything more than an object that was designed once and used multiple times. There would be no need to make a class of all objects that are similar to that one, only to constrain class membership again to ensure that the singleton instance cannot be joined by any compatriots.
But as you've probably experienced, most programming languages only give you one kind of inheritance, and that is often the "single inheritance, which we also assume to mean subtyping" variety. It's easy to construct situations where multiple inheritance makes sense (a book is both a publication that can be catalogued and shelved and a product that can be priced and sold); situations where single inheritance makes sense (a bag has all the operations of a set, but adding the same object twice means it's in the bag twice); and situations where customizing a prototype makes sense (our hypothesis is that simplifying the Checkout interaction by applying a fixed shipping cost, instead of letting the customer choose from a range of options, will increase completion among customers attempting to check out). It's easy to consider situations in which all three of those cases would simultaneously apply (an online bookstore could easily represent books, bags, and checkouts in a single system), so why is it difficult to model all of those in the same object system?
When it comes down to it, inheritance is just a particular way to introduce delegation – one object finding another to forward a message on to. The fact that inheritance is constrained to specific forms doesn't stop us from delegating messages to whatever objects we like, but it does stop us from making the reasons for doing so obvious in our designs.
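That delegation needs no special syntax at all; in this invented example, one object simply forwards part of a message to a collaborator it chose for itself:

```javascript
// Delegation without inheritance: the object picks its collaborator
// explicitly and forwards work to it.
const logger = {
  log(message) { return `LOG: ${message}`; }
};

const service = {
  collaborator: logger,
  handleRequest(request) {
    // Forward the message to whichever object we decided should handle it.
    return this.collaborator.log(`handled ${request}`);
  }
};

console.log(service.handleRequest("ping")); // LOG: handled ping
```

The design question inheritance answers implicitly, "which object do we ask next?", is answered explicitly here, and could be answered differently for every message.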