
Monday, July 28, 2008

Open architecture and open standards.

By now we all know about the importance of open standards in computing, and how the companies that bet on them in the early 1980s ended up winning the war over the home computer. At the time, however, the wisdom of betting on open standards was far from obvious.
If you were designing a new computer system that included a new type of bus, you could choose whether to publish (or otherwise make available) the specifications of the bus or to keep them secret.

If the specifications of a particular bus are made public, other manufacturers —so-called third-party manufacturers— can design and sell expansion boards that work with that bus. The availability of these additional expansion boards makes the computer more useful and hence desirable. More sales of the computer create more of a market for more expansion boards. This phenomenon is the incentive for designers of most small computer systems to adhere to the principle of open architecture, which allows other manufacturers to create peripherals for the computer. Eventually, a bus might be considered an industry-wide standard. Standards have been an important part of the personal computer industry.

The most famous open architecture personal computer was the original IBM PC introduced in the fall of 1981. IBM published a Technical Reference manual for the PC that contained complete circuit diagrams of the entire computer, including all expansion boards that IBM manufactured for it. This manual was an essential tool that enabled many manufacturers to make their own expansion boards for the PC and, in fact, to create entire clones of the PC —computers that were nearly identical to IBM's and ran all the same software.

The descendants of that original IBM PC now account for about 90 percent of the market in desktop computers. Although IBM itself has only a small share of this market, it could very well be that IBM's share is larger than it would have been if the original PC had had a closed architecture with a proprietary design. The Apple Macintosh was originally designed with a closed architecture, and despite occasional flirtations with open architecture, that original decision possibly explains why the Macintosh currently accounts for less than 10 percent of the desktop market. (Keep in mind that whether a computer system is designed under the principle of open architecture or closed architecture doesn't affect the ability of other companies to write software that runs on the computer. Only the manufacturers of certain video game consoles have restricted other companies from writing software for their systems.)

(Petzold: pp. 302-303)

Sure, we all know who was right. However, we also know that Apple has come back to life and become, once more, one of the most innovative companies in the business. Not only that, but few people would doubt that the main reason they can afford this level of creativity is precisely that they control both the software and the hardware. It's part of the Apple experience, which guarantees smooth transitions between products and near-perfect compatibility between any two Apple devices.

Likewise, at the end of the 1990s we saw the appearance of a new disruptive phenomenon: the open source movement. Where open standards used to apply mainly to hardware specs —and Microsoft benefited a lot from that—, Linux and other projects tried to spread the same philosophy to the software world. That's precisely why the moaning we hear from time to time from Microsoft's top execs about the supposedly communistic threat coming from the open source world is so self-serving. After all, they were the first to benefit from IBM's decision not to play dirty and to share the specs.

In any case, few people doubt that without these open standards the Internet wouldn't exist today and there is a good chance technology wouldn't play such a central role in our societies. It's precisely the adoption of open standards and protocols that made it possible for all this software craze to spread throughout the world in the 1990s.

Monday, July 21, 2008

Linus Torvalds on avoiding large projects.

In accordance with the principles stated in a previous post about developing in Internet time, open source projects tend to stay away from large and ambitious plans. This is not to say that they never reach the status of large projects, of course. However, it does mean that they usually start as small projects that try to "scratch an itch" and, given enough interest and a good amount of contributions, may grow into something as large as the Linux kernel, GNOME or KDE. These ideas were expressed quite well by Linus Torvalds in an interview published by Linux Times in June 2004, in which he was asked whether he had any advice for people starting large open source projects.
"Nobody should start to undertake a large project," Torvalds snapped. "You start with a small trivial project, and you should never expect it to get large. If you do, you'll just overdesign and generally think it is more important than it likely is at that stage. Or, worse, you might be scared away by the sheer size of the work you envision. So start small and think about the details. Don't think about some big picture and fancy design. If it doesn't solve some fairly immediate need, it's almost certainly overdesigned".

(Rosenberg: p. 174)

Once more, the emphasis is on a project that starts small, releases quickly and responds nimbly to users' feedback. This mindset is at the very core of the open source development model. Commercial software companies may stress that their approach is quite different, refusing to treat customers as beta testers. However, this is quite disingenuous. Anybody who has gone through the experience of running the first public release of any software product knows what I mean.

Developing software in "Internet time".

When it comes to software development, something changed in the last 15 years or so. Sadly, we have not figured out how to write software without bugs, nor have we come up with a way that allows non-programmers to put applications together. What has definitely changed, however, is the time programmers are allowed to sit on a project before they release the code. Let's put it this way: code has become big business, which means that the time that can be dedicated to developing a particular product has shrunk significantly, since there is a need to monetize it as soon as possible. And what changed all that? The Internet and, above all, Netscape. Michael Toy, a former Netscape employee, has an obvious bias for developing software in what has come to be known as Internet time:

"... I frankly admit that I am heavily biased toward: Let's ship something and put it in people's hands and learn from it quickly. That's way more fun, way more interesting, and, I think, a much better way to do things than to be sure that what we're doing is going to last for ten or fifteen years. It looks like Chandler is trying to be very architecturally sound and to be almost infinitely willing to delay the appearance of the thing in order to have good architecture. It's Mitch's money, so he can make that trade-off any way he wants. And it could be that that willingness to go slow is going to pay off hugely in the future. But it's really hard for someone who wants to ship software next week."

(Rosenberg: p. 171)

It's the very same approach taken by open source development: release quickly and release often, then make changes depending on the users' feedback. This is quite far from the original aspiration to perfection shared by pretty much every early software engineer. Back then, the goal was to "write software like we build bridges". I'd say the overall attitude has changed. We don't think we can build the perfect program anymore. We prefer to think of it all as a process, not an end product. Even better, it is a process that never ends. We put together a prototype and release it out there. Users give it a try and provide some feedback —some of them may even contribute code! Then we take that and build upon the original prototype. In other words, software development is less software engineering —building a definite product from well-defined plans— than a craft or an art that is always trying to perfect itself through a set of so-called good practices. This is the huge shift in mentality that Internet time brought about in the field.