Executive Summary:
* “Open Source” does not mean “Collaborative Development”. The LF does the latter.
* Participating successfully in open source projects requires education and a change in culture for organizations new to it.
* No one has done what we are trying to do. This is a very large experiment.
I don’t really know when the free and open source movement started, but I know folks who were doing open source in the late 1970s. Linus Torvalds started the Linux project in 1991 as a Finnish college student who aspired to write a Unix-like operating system for his desktop computer. Linux, like the other free and open source software projects at the time, was driven by a relatively small number of software development enthusiasts incrementally improving on each other’s code and constantly sharing those improvements back with one another. The key point here is that those projects started out very small: almost no code, and a very small number of developers. Process, policies, and rules for engagement evolved slowly as issues were encountered and dealt with. The participants had no financial motivations. They were just sharing code.
In the year 2000, IBM made a dramatic statement about Linux: they were going to invest a billion dollars in it over the coming 10 years. In 2001, when I joined HP, we were ramping up our engagement in the Linux community to compete for Linux customers. That was the first large-scale corporate effort in an open source project. HP, IBM, Intel, Oracle, Red Hat, and others were all putting significant development resources into the Linux community. I credit Linus with building a development culture that has allowed Linux to become such an incredibly valuable asset to the planet [0]. Much of what I preach to the communities at the Linux Foundation today is based on what I learned as a member of that community.
Linux has changed the computer server industry (among many others) forever. Today, there are tens of millions of open source projects. A valid question is “Why?” How can it possibly make sense to hire developers to work on code that is given away for free to anyone who cares to take it? I know of many answers to this question, but for the communities that I work in, I’ve come to recognize the following as the common thread.
Software has become the most important component in many industries, and it is needed in very large quantities [1][2]. When an entire industry needs to make a technology “pivot”, it often does as much of that as possible in software. For example, the telecommunications industry has such a pivot to make in order to support 5G, the next generation of mobile phone network. Not only will bandwidth and throughput increase with 5G, but an entirely new set of services will be enabled, including autonomous cars, billions of internet-connected sensors and other devices (a.k.a. the Internet of Things, or IoT), and more. To do that, the telecom operators need to entirely redo their networks, distributing millions of compute and storage instances very, very close to those devices and users. Given the drastically changing usage of the network, the operators need to be able to deploy, move, and/or tear down services near-instantaneously, running them on those far-flung compute resources and routing the network traffic to and through those service applications in a fully automated fashion. That’s a tremendous amount of software.

In the “old” model of complete competition, each vendor would build its solution to this customer need from the ground up and sell it to its telecom operator customers. It would take forever, cost a huge amount of money, and the customers would be nearly assured that one vendor’s system wouldn’t interoperate with another vendor’s solution. The market demands that solutions not take that long or cost that much, and if they don’t work together, their value to the customer is much less.
So, instead, all the members of the telecom industry, both vendors and customers, are collaborating to build a large portion of the foundational platform software together, just once. Then, each vendor and operator will take that foundation of code and add whatever functionality it feels is differentiating for its customers, test it, harden it, and turn it into a full solution. This way, everyone gets to a solution much more quickly and with much less expense than would otherwise be possible. The mutual benefit of this is obvious. But how can they work together? How can they ensure that each participant in this community can get out of it what they need to be successful? These companies have never worked together before. Worse yet, they are fierce, lifelong competitors whose only prior goal was putting each other out of business.
This is what my team does at the Linux Foundation. We create and maintain that level playing field. We are both referee and janitor. We teach what I learned from Linus’ governance of the Linux project. Stay tuned for more blog posts detailing those principles and my experiences living those principles, both as a participant in open source projects and as the referee.
So, the task at hand is bringing dozens of very large, fierce competitors, both vendors and customers, together and seeding the development effort with several million lines of code that usually come from only one or two companies. That’s never been done before by anyone. The set of projects under the Linux Foundation Networking umbrella is one large experiment in corporate collaborative development. Taking ONAP as an example, its successful outcome is not assured in any way. Don’t get me wrong. The project has had an excellent start, with three releases under its belt, and I immensely enjoy working with the developers, their managers, and the senior executives driving us forward. In general, things are going very well. However, there is much work to do and always room for improvement: ways for this community, and the organizations behind it, to become more efficient and get to our end goal faster. For some things, we’re making it up as we go. Again, such a huge “industry pivot” has not been done as an open source collaboration before. To get there, we are still applying the principles of fairness, technical excellence, and transparency that are the cornerstone of truly collaborative open source development ecosystems. As such, I am certainly optimistic that we will succeed in the end.
This industry-wide technology pivot is not isolated to the telecom sector. We are seeing it in many others. My goal in writing this series of blog posts on open source collaborative development principles, best practices, and experiences is to better explain to those new to this model how it works, why these principles are in place, and what to expect when things are working well, and when they are not. There are a variety of non-obvious behaviors that organizational leaders need to adopt and instill in their workforce to be successful in one of these open source efforts. Hopefully I can give you the tools to help you facilitate this culture shift within your organization.
[0] https://www.wired.com/2016/08/linux-took-web-now-taking-world/
[1] https://a16z.com/2016/08/20/why-software-is-eating-the-world/
[2] https://www.cio.com/article/3243988/digital-transformation/the-ultimate-guide-to-digital-transformation-and-its-impact-on-software-development.html