Put That All Together
All of the parts explored here exist, but not in the same place. Putting them together is a significant undertaking: building message passing and delegation between objects in separate processes may take only a few source lines, and design-by-contract is a judicious application of the assert() statement, but a whole interactive environment that allows live development and debugging of such a system is a much larger project.
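As a rough illustration of how little code those first two parts need, here is a minimal sketch in Python, in which multiprocessing pipes stand in for operating-system IPC and assert() expresses a toy contract; the account object and its selectors are hypothetical.

```python
# A minimal sketch: one hypothetical "account" object runs in its own process,
# receives (selector, arguments) messages over a pipe, and states its contract
# with assert().
from multiprocessing import Process, Pipe

def account_object(conn, balance=0):
    while True:
        selector, args = conn.recv()               # block until a message arrives
        if selector == "deposit":
            amount, = args
            assert amount > 0                      # precondition: deposits are positive
            previous = balance
            balance += amount
            assert balance == previous + amount    # postcondition: balance grew by amount
            conn.send(balance)
        elif selector == "halt":
            break

if __name__ == "__main__":
    here, there = Pipe()
    Process(target=account_object, args=(there,)).start()
    here.send(("deposit", (100,)))                 # a message-send, not a function call
    print(here.recv())                             # -> 100
    here.send(("halt", ()))
```

So why consider it?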
Speed
When the development environment and the deployment environment are the same, developers get a higher-fidelity experience that shortens development turnaround time by reducing the likelihood that a change will "break in CI" (or even in production) because of differences between the environments.
The people using the software can have higher confidence too, because they know that the developer has built the thing in the same environment it will be used in. Additionally, the use of contracts in this proposed development system increases confidence, because the software is stated (and demonstrated) to work for all satisfactory inputs rather than merely a few test cases thought of by the developer.
Such fidelity is typically provided to developers at the expense of speed: programmers connect over the network to a production-like server, or wait for virtual machine or container images to be built on their local system. That time gets added to the usual steps, such as compiling and linking, that come from separating development and deployment, giving us time to get distracted and lose our thread of concentration while waiting for validation of the work so far.
Ultimately, though, the speed comes from experimentation. When development is close to deployment, it's easier to ask questions such as "what if I change this to be like that?" and to answer them. When systems are decomposed into small, isolated, independent objects, it's easier to change or even discard and replace objects that need improvement or adaptation.
While there is value in designing by contract, there is also value in progressively adding details to an object's contract as more properties of the system being simulated become known, and confidence in the shape of the objects increases. Contracts are great for documentation and for confidence in the behavior of an object, but those benefits need not come at the expense of forcing a developer's train of thought to call at particular stations in a prescribed order. As we saw in Chapter 1, Antithesis, a lot of complexity in object-oriented programming to date came from requiring that software teams consider their use cases, or their class hierarchies, or their data sharing, or other properties of the system at particular points in an object-oriented software engineering process.
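As a minimal sketch of what that progressive refinement might look like, the `contract` decorator and `reciprocal` function below are hypothetical inventions; the point is only that contract clauses can be layered onto an existing object once its shape is better understood, rather than demanded up front.

```python
# A hypothetical contract() decorator: pre- and postconditions are plain
# predicates checked with assert, so they can be attached at any time.
def contract(pre=None, post=None):
    def wrap(fn):
        def checked(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), "precondition violated"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result, *args, **kwargs), "postcondition violated"
            return result
        return checked
    return wrap

# Day one: behaviour only, no stated contract.
def reciprocal(x):
    return 1.0 / x

# Later, once valid inputs and promises are better understood, add them.
reciprocal = contract(pre=lambda x: x != 0,
                      post=lambda result, x: abs(result * x - 1.0) < 1e-9)(reciprocal)

print(reciprocal(4))    # -> 0.25; reciprocal(0) would now fail its precondition
```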
It's far better to say, "here are the tools, use them when it makes sense," so that the developer experience is not on rails. If that means taking time designing the developer system so that use, construction, documentation, testing, and configuration of the thing being developed can happen in any order, then so be it.
Tailoring
Such experimentation also lends itself to adaptation. A frequent call for the industrialization of software involves the standardization of components and the ability for end users to plug those components together as required. Brad Cox's Software ICs, Sal Soghoian's AppleScript dictionaries, and even the NPM repository represent approaches to designing for reuse by defining the boundary between "things that are reused" and "contexts in which they are reused."
In all of these situations, though, the distinction is arbitrary: a Software IC could implement a whole application, or the innards of a Mac app could be written in AppleScript. In a live development environment, the distinction is erased, and any part is available for extension, modification, or replacement. There is a famous story about Dan Ingalls adding smooth scrolling to a running Smalltalk system (http://www.righto.com/2017/10/the-xerox-alto-smalltalk-and-rewriting.html) during a demo for a team from Apple Computer that included Steve Jobs. At that moment, Dan Ingalls' Alto computer had smooth scrolling, and nobody else's did. He didn't need to recompile his Smalltalk machine and take the computer down to redeploy it; it just started working that way.
My assertion is that the addition of contracts to a live programming environment enables experimentation, customization, and adaptation by increasing confidence in the replacement parts. Many object-oriented programmers already design their objects to adhere to the Liskov Substitution Principle, which says (roughly) that one object can act as a replacement for another if its preconditions are at most as strict as the other object's, and its postconditions are at least as strict.
In current environments, however, this idea of substitutability is unnecessarily coupled to the type system and to inheritance. In the proposed system, an object's inheritance, or lack thereof, is its own business, so we ask a simpler question: is this object's contract compatible with that use of an object? If it is, the two can be swapped and we know things will work, at least to the extent that the contract is sufficient. If it is not, then we know what will not work, and what adaptation is required to hook things up.
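To make that compatibility question concrete, here is a minimal sketch in which contracts are plain predicates and substitutability is spot-checked over sample values; `Contract`, `substitutable`, and the square-root example are illustrative inventions, and a real system might prove the relationship rather than sample it.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Contract:
    pre: Callable[[int], bool]          # what the object demands of its caller
    post: Callable[[int, int], bool]    # what it promises about (input, result)

def substitutable(replacement, required, samples):
    # The replacement may demand no more (weaker-or-equal precondition)...
    for x in samples:
        if required.pre(x) and not replacement.pre(x):
            return False
    # ...and may promise no less (stronger-or-equal postcondition).
    for x in samples:
        for r in samples:
            if replacement.post(x, r) and not required.post(x, r):
                return False
    return True

# A square-root object that accepts any non-negative input can stand in for one
# that only accepted positive inputs, because its promise is at least as strong.
required = Contract(pre=lambda x: x > 0,  post=lambda x, r: r >= 0)
candidate = Contract(pre=lambda x: x >= 0, post=lambda x, r: r >= 0 and r * r <= x)
print(substitutable(candidate, required, samples=range(-3, 10)))   # -> True
```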
Propriety
"But how will we make money?" has been a rallying cry for developers who don't want to use a new tool or technique for decades. We said we couldn't make money when free and open source software made our source code available to our users, then started running GNU/Linux servers that our users connect to so they can download our JavaScript source code.
The system described here involves combining the development and deployment environments, so how could we possibly make money? Couldn't users extract our code and run it themselves for free, or give it to their friends, or sell it to their friends?
Each object on the system is an independent program running in its own process, and its interface is the loosely coupled abstraction of message-sending. Any particular object could be a compiled executable based on a proprietary algorithm, distributed without its source code. Or it could be running on the developer's own server, handling messages remotely, or it could be deployed as a dApp to Ethereum or NEO. In each case, the developer avoids having to deploy their source code to the end user, and while that means that the user can't inspect or adapt the object, it does not stop them from replacing it.
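As a rough illustration of how opaque an implementation can be behind that interface, the proxy below assumes a hypothetical JSON-over-HTTP transport and endpoint; the caller sends the same messages whether the receiving object is a local compiled executable, a remote service, or something else entirely.

```python
# A minimal sketch of a proxy for a remote object; the transport, endpoint,
# and "pricer" object are hypothetical.
import json
import urllib.request

class RemoteObject:
    """Forwards any (selector, arguments) message to a remote implementation."""
    def __init__(self, url):
        self.url = url

    def send(self, selector, **arguments):
        body = json.dumps({"selector": selector, "arguments": arguments}).encode()
        with urllib.request.urlopen(self.url, data=body) as response:
            return json.loads(response.read())

# The caller neither sees nor needs the implementation's source code.
pricer = RemoteObject("https://example.com/objects/pricer")   # hypothetical endpoint
# pricer.send("quote", sku="ABC-123", quantity=4)
```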
It is interesting to consider how the economics of software delivery might change under such a system. At the moment, paid-outright applications, regular subscription fees, and free applications with paid-for content or components are all common, as are free (zero cost) applications and components. Other models do exist: some API providers charge per use, and blockchain dApps also cost money (albeit indirectly via tokens) to execute the distributed functions. An app or a web service has a clear brand, visible via the defined entry point for the user (their web address, or home screen icon). How might software businesses charge for the fulfilment of a programmatic contract, or for parts of an application that are augmented by other objects, or even replaced after deployment?
Security
It was mentioned when discussing the propriety of objects that each object is hidden behind the loosely coupled message-sending abstraction. The implications for the security of such a system are as follows:
- For an object to trust the content of a message, it must have sufficient information to make a trust decision, and confidence that the message it has received is as the sender intended, with no modifications. Using operating system IPC, the messages sent between objects are mediated by the kernel, which can enforce any access restrictions.
- "Sufficient information" may include metadata that would be supplied by the messaging broker, for example, information about the context of the sender or the chain of events that led to this message being sent.
- The form in which the object receives the message does not have to be the form in which it was transmitted; for example, the messaging layer could encrypt the message and add an authentication code on sending, checking it on receipt before allowing the object to handle the message (a sketch of such a layer follows this list). Developers who work on web applications will be familiar with this already: their requests involve HTTP verbs such as GET or POST and readable data such as JSON, but are sent in a compressed format over encrypted, authenticated TLS channels. There is no reason such measures need be limited to the network edges of an application, nor (as microservices architectures demonstrate) does the network edge need to coincide with the physical edge of the system.
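Here is a minimal sketch of such a messaging layer, using only Python's standard library and a placeholder shared key; it authenticates the payload only, and a real deployment would also encrypt it (for example with an AEAD cipher) and provision keys properly.

```python
import hmac, hashlib, json

KEY = b"shared-secret-key"    # placeholder; a real system would provision keys properly

def seal(message):
    payload = json.dumps(message).encode()
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()
    return tag + payload                       # authentication code travels with the message

def open_sealed(blob):
    tag, payload = blob[:32], blob[32:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message failed authentication; refusing to deliver it")
    return json.loads(payload)                 # only now does the object see the message

blob = seal({"selector": "deposit", "arguments": [100]})
print(open_sealed(blob))                       # -> {'selector': 'deposit', 'arguments': [100]}
```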
Multiprocessing
Computers have not been getting faster, in terms of single-task instructions per second, for a very long time. Nonetheless, they are still significantly faster than the memory from which they load their code and data.
This hypothesis needs verifying, but my prediction is that small, independent objects communicating via message passing are a better fit for today's multi-core hardware architectures, as each object is a small self-contained program that should do a better job of fitting within the cache near to a CPU core than a monolithic application process.
Modern high-performance computing architectures are already massively parallel systems that run separate instances of the workload that synchronize, share data, and communicate results via message sending, typically based on the MPI standard. Many of the processor designs used in HPC are even slower in terms of instruction frequency than those used in desktop or server applications, but have many more cores in a single package and higher memory bandwidth.
The idea of breaking down an application into separate, independent objects is compatible with the observation that we don't need a fast program, but a fast system comprising multiple programs. As with cloud computing architectures, such systems can get faster by scaling. We don't necessarily need to make a faster widget if we can run tens of copies of the same widget and share the work out between them.
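As a minimal sketch of that scaling, assume the "widget" is a small pure function; Python's multiprocessing pool runs several copies of it as separate processes and shares the work out between them.

```python
from multiprocessing import Pool

def widget(item):
    return item * item      # stand-in for one small, self-contained object's work

if __name__ == "__main__":
    with Pool(processes=8) as copies:            # eight copies of the same widget
        results = copies.map(widget, range(1000))
    print(sum(results))                          # the system is fast; no single widget is
```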
Usability
All of this discussion focuses on the benefits (observed or hypothesized) of the approach to writing software that has been developed in this book. We need to be realistic, though, and admit that working in the way described here is untested and is a significant departure from the way programmers currently work.
Smalltalk programmers already love their Smalltalk, but then C++ programmers love their C++ too, so there isn't a one-size-fits-all solution to the happiness of programmers, even if it could be shown that for some supposed objective property of the software construction process or the resulting software, one tool or technique had an advantage over others.
Some people may take a "better the devil you know" outlook, while others may try this way (assuming such a system even gets built!) and decide that it isn't for them. Still others may even fall in love with the idea of working in this way, though we could find that it slows them down or produces lower-quality output than their current way of working! Experimentation and study will be needed to find out what's working, for whom, and how it could be improved.
This could turn out to be the biggest area of innovation in the whole system. Developer experiences are typically extremely conservative. "Modern" projects use the edit-compile-link-run-debug workflow that arose to satisfy technical, not experiential, constraints decades ago. They are driven from a DEC VT-100 emulator. Weirdly, that is never the interface of choice for consumer products delivered by teams staffed with designers and user experience experts.