Finally answering the question, “What are students really learning in school?”

The first video below is from 2021. It helps answer a question I’ve had for a long time re. what I’ve seen go on in schools and universities for the past 12 years. I used to have the impression that what I was seeing re. an anti-intellectual atmosphere was just certain classes, at certain schools. Over time, I could see this was not true, but I wondered how that could be, since I still held the assumption that the academic subjects were being taught to these students somehow. Yet, I couldn’t reconcile that. How was it possible that they were being taught English and history, and the vocabulary of Ivy League students was like that of a fourteen-year-old? Or, how come they were unable to discuss history without making it a racial argument that had the same anti-academic quality? It just was so dumb.

This discussion with educator Deb Fillman gives a window into what’s actually been going on in public (and private) schools, and for how long. The short of it is that the “education” many students have been getting, and are still getting, is not academic in nature. It is politicized, based on particular Marxist education theories.

An ironic note made in this discussion helped me understand why a parent in CA, with whom I’ve corresponded online, kept bashing constructivism in math education whenever I talked with him. His understanding of it didn’t match with mine, but his matched with what had been taught in the schools. The way constructivism had been used in the public school system was discussed here, as “Just let students discover the math for themselves.” In other words, don’t teach them anything. Don’t give them any guidance. They’ll “discover it” themselves. Math had been “taught” this way for many years. One panelist made an interesting historical point that the Common Core math curriculum was actually a response to this, to get the curriculum back to “methods for solving problems,” rather than leaving the “learning” of math open-ended. Though, it, too, had lots of problems! Out of the frying pan, into the fire, so to speak; or the blind leading the blind.

It seems to me the Common Core curriculum was a compromise. Addressing those who were against rote learning, it didn’t just teach one method, but it still insisted that each method be followed to the letter (i.e., the work would be marked “wrong” if the student gave the right answer, but their methodology didn’t match the prescribed method they were supposed to be learning at that stage).

While I’m on this subject, I thought I’d include a 2022 video from James Lindsay on what is the dominant classroom pedagogy in our school system, since it goes into more depth re. why the method described in the video with Fillman is the way it is.

Edit 3/16/2026: I thought this discussion from 2024 was illuminating, as well, re. how school teachers are being taught.

Rhyen Staley discussed his report, “CorruptED: Colleges of Education and the Teacher as Activist Pipeline.” The report was published by Defending Education.

Related articles:

Interpreting Tron: Ares

*** Warning: This post contains spoilers ***

I guess since it’s been so long since “Tron Legacy,” there really isn’t much continuity between it and this movie. There’s a reference to characters in “Legacy” in the opening sequence, and a few notes re. “Flynn Lives,” but that’s it.

I wasn’t expecting to like this sequel. The previews showed computer characters and vehicles coming into the real world as a major theme, which I thought “jumped the shark” on the Tron world concept. I think Disney managed to pull this off without straining credulity too much (though, the light trails felt too unrealistic, but since this is a Tron movie, I guess that was too hard to resist). Strange, but I had the thought that this movie used a concept similar to one that was used in an ’80s sci-fi action series, “Automan.” The series used the idea of AI, “holograms,” and “force fields” bringing computer simulations into the real world, helping a police force fight crime.

The show had nothing to do with “Tron,” but some of the people who’d worked on the movie also worked on the show.

It seemed to me that “Tron: Ares” was combining the concepts of AI and 3D printing (except making the printing process much faster). Like with “Tron Legacy,” the computer characters are best viewed as symbolic.

I was a bit confused at the beginning of the story, because it had the Dillinger family leaving Encom, and forming their own company. In the story, both companies were working on the same problem: how to make materialized digital systems last longer, except that Dillinger went ahead with producing physical systems as his main product, whereas Encom was still focused on video games. An executive at Encom, though, named Eve, pursues “the permanence code,” created by Flynn, that makes materialized systems last a lot longer. Dillinger finds out she’s found it. He wants it, and this creates the action that starts off the story.

I felt a strong pull towards “Blade Runner,” a movie which, incidentally, came out the same year as “Tron”; particularly the character of Roy Batty, when he meets his creator, Tyrell, saying, “I want more life.” Replicants were given a short life span, because they couldn’t handle memories and emotions. The point being, though, they were slaves. They were forced to do jobs that humans didn’t want, or that were too dangerous. In the case of “Tron: Ares,” the short lifespan seemed to have to do with a limitation of 3D printing. It jogged a memory of something I heard many years ago about how 3D-printed objects deteriorated in months, but this was fine, because they were really only supposed to be prototypes. It seems like this has changed. Doing a little research, I’m reading they should last months to years, as long as they’re well cared for.

Anyway, in the story, materialized systems last 29 minutes, and then rapidly degrade.

A point that keeps being made about Ares, and another AI character named Athena, is that they’re at the whim of Dillinger. Though they have intelligence, their only purpose is to do what they’re told. Ares departs from this right from the start, asking, “Who am I?” Dillinger answers, “Not who…but what.” A “who” has an identity; a will of its own. A “what” has an identity, but it’s not distinct, and does not have a will. The will comes from the user of the object.

This intelligence is curious about the world it encounters. It finds meaning in the world, and that meaning creates a reason to live a life. Ares feels like he’s imprisoned, only following Dillinger’s directives, and increasingly breaks faith with him, because he doesn’t like being used. Athena lives and dies for the directives. She thinks it’s the whole point of her existence.

What interested me is the movie kept going back to really old microcomputers for finding this “permanence code.” One could interpret this as a nostalgia gimmick (which, let’s face it, it was, along with some other things), but I couldn’t shake the idea that the producers were making a point. What the movie seemed to say was these old systems contain “knowledge” and “heart,” perhaps because doing things with them was kind of difficult. You had to work at them to get what you wanted. You couldn’t just ask for it. The movie even seems to make that point, showing how Eve and her assistant stay a few months at a lab in the remote climes of Alaska, searching for the “permanence code” through multitudes of old floppy disks, and writing code analysis programs. It seems to say our modern digital world could better itself by not making things too easy.

There was also a very strong Pinocchio reference (one of the characters even says this explicitly), but the movie takes a twist on it. It doesn’t say that Ares becomes human, but that he becomes mortal (he can only live one life), and that mortality is liberating, because he’s no longer a slave.

Even though Ares is cast as a “computer character,” I interpret his story as a message about us: That our lives are not just about following directives. Our life and death should not be at the service of overlords who use computer systems to create distance between us and them, and to assert their authority, since the distance creates a lack of accountability. Though, in the story, accountability eventually comes, one way or another.

The hierarchy that’s set up in the story is that Dillinger is the master, Ares and Athena are his agents, the rest of us in the real world are at the effect of both, and the symphony is oppressive, and destructive.

Though, the character of Ares could also be interpreted as going back to an old philosophical question: if AI becomes sentient, what ethical obligations do we have toward it? Does it have rights?

The movie lightly touched on the dangers of AI in our present, that these systems can access and use so much information about us that we lose the privacy of our most intimate thoughts and feelings. It addressed this only in emotional terms, which I think was a missed opportunity, because it could’ve dramatized how that level of intrusion enables a different kind of slavery.

I haven’t seen it, but that theme seems to be addressed in a dystopian sci-fi movie from 2017, called “The Circle.” It deals with the danger of pervasive, easily searchable information collection on everyone’s lives.

The ending to “Tron: Ares” returns to the theme of the “virtues of the old.” Ares sends a thank-you note to Eve for liberating him, but even though he could easily send a text or DM by phone, he takes the time to write and send a postcard. An ironic note, given the world he came from.

The prior Tron movies were warnings about our future. I think this one was largely a warning about our present. I’m biased here, but I think it calls on the audience to retrace our technological steps, and re-evaluate where we’ve ended up.

Related article:

Exploring the meaning of Tron Legacy

A good history of NASA’s Mercury program

Jackson Tyler produced another really good documentary, this time on NASA’s first space program, Project Mercury.

I figure most people, if they’re conscious of NASA at all, have a perception that it started with the Apollo program, to go to the Moon. Mercury tends to be less well-known. Apollo had more of a world impact, because of how people all over the world got interested in the Moon shot. While Apollo was about the “space race” with the Soviet Union (winning it), I think it became in the public’s consciousness something much more profound, a perception-changing moment re. our planet, and our place in the Universe. Mercury was much more about the “space race” in the public’s consciousness, and probably for that reason has been easier to forget.

If you’ve seen the movie, “The Right Stuff,” based on the Tom Wolfe book by the same name, you’ve gotten a taste of what Mercury was about. Though, I hear the book tells the story right, and the movie makes an utter hash of it. (Mercury astronaut John Glenn hated the movie, calling it a “charade,” seemingly mainly for how it portrayed the personalities of the astronauts.) Nevertheless, looking at this documentary, you’ll have moments of recognition with the movie. I remember the director of the movie “Apollo 13,” Ron Howard, talked about this, since the astronauts from that mission consulted with the production in the making of the movie, and they complained about how they were portrayed in parts. Howard explained that some dramatic license was necessary to get across certain moments in the story. He seemed to say, “Or else the audience won’t understand what went on.” That may be true of “The Right Stuff,” as well. I remember listening to Jonathan Stamp, a historical consultant who’s worked on many dramatic productions, talk about how historical dramas have to do some things with history that didn’t happen, because otherwise the audience would be lost, since they lack the necessary context for it to make sense. Though, there are plenty of times when the producers or screenwriters insert things into historical dramas for more personal reasons.

The movie doesn’t tell the whole story. It skips over some of the missions, and other events that happened in the world around them. This documentary provides a very complete picture, covering what led up to Mercury, then covering the highlights of all of the missions, and what was going on with the Soviet program at the same time.

Whenever I see Mercury covered, I’m always keen on learning about Scott Carpenter’s mission, because he came from Boulder, CO, where I live, and went to Boulder High School, where I went to school. As with me, Boulder was his adopted home town. For anyone interested in Carpenter, the coverage of his mission starts at 1:47:14.

For people who’ve seen “The Right Stuff,” I feel like covering this subject wouldn’t be complete without a recreation of Chuck Yeager’s test flight of the NF-104A, made by YouTuber J.P. Ferré: DCS: THE RIGHT STUFF – Short Film (2022).

This test flight was dramatized near the end of the movie, where it shows Yeager “trying to fly into space with a military jet,” seemingly on his own personal space mission, but, like Icarus, flying too high and falling back to earth. What struck me, seeing this recreation, was that while the movie hit the high notes of what happened on this test flight, it really missed the mark in some ways. For one thing, Yeager didn’t do this on a whim, as was portrayed. It was a planned test flight. What also isn’t portrayed was that the particular plane he flew was designed for this. It had a rocket booster on the tail, to give the extra thrust needed to get into the high atmosphere, and thrusters in the wings and on the nose for controlling the aircraft once the air density got too low for the control surfaces. Anyway, this gives you a better idea of what really happened.

Related:

A history of the Voyager mission

Forming OOP from non-OOP stuff

I have been asked on a couple occasions about how to do this, and I get the impression that the answer I give doesn’t satisfy the people asking, because I basically tell them, “You’re trying to do it wrong.” I try as succinctly as I can to give them some ideas on how to do it, but I get the feeling they hear me like I’m an alien, because it’s not what they expected.

Abstract data types are not OOP. I know that message is confusing, because that’s how I felt when I heard it. I thought that’s what objects were, because that’s basically what I was told for years, by multiple sources that I thought knew what they were talking about.

For those who need some catch-up on this, I wrote a post a while back on the subject: A collection of (Quora) answers on object-orientation.

Those who understand Smalltalk might think this subject is obvious, because Smalltalk is built from non-OOP stuff. The point of what I’m talking about is I hope to address those who don’t want to rebuild Smalltalk, but want its architectural advantages.

A project I’ve heard Alan Kay reference a couple times is one he consulted on more than three decades ago, which he’s called Brooklyn Union Gas. The name of it was CRIS-II. It was an enterprise system that Brooklyn Union Gas designed and built themselves, with help from Alan, while he was working with Andersen Consulting, and they built it on the model of Smalltalk. However, they built it completely using standard tools that were on IBM mainframes at the time. If this needs to be said more explicitly, they didn’t create an OO language. Instead, they created an OO architecture from these standard system tools. Their system used all of the facilities: operating system, a programming language, an RDBMS, and transaction management as the programming environment upon which the application logic was based. Each tool was used as given by IBM, by which I mean to say it does not seem to me they “opened them up,” and modified them. Instead, they used them in a novel way.

If you look online, you can find some bibliographical references to articles on this project, but probably the best you’re going to get is behind a paywall. I managed to find an article on it in the publication IEEE Software, from January 1993, through interlibrary loan. It’s titled “Object-Oriented Development at Brooklyn Union Gas.” I’m not going to reproduce the full text here (since that would be illegal), but I will summarize what I think were its salient points. The article does not give in technical detail how to do what they did, but it provides some strong hints.


“Object-Oriented Development at Brooklyn Union Gas,” by John Davis of Andersen Consulting, and Tom Morgan of Brooklyn Union Gas, IEEE Software, January 1993

Using object-oriented features, a mainframe implementation of a Smalltalk-like execution environment supports a critical application and can accommodate change.

The CRIS-II architecture is structured in three layers that are derived from the essential aspects of the underlying problem domain. The interface, process, and business-object layers address, respectively, the “when,” “what,” and “how” in the gas utility environment.

What’s in the Interface layer:

  • UI
  • batch processes
  • integration with other applications (not designed using this OOP scheme)
  • UI is not always composed of objects; has access to objects via message-passing.
  • Events which trigger functions in the process layer.

The Process layer:

  • Is built out of “function managers,” which carry out the response to events received from the interface layer.
  • Is described by a script-like control structure, which drives the system’s response to events.
  • The script is implemented using message sends to objects in the business-object layer.
  • The function-manager script describes “what” to do. It delegates “how” to the business-object layer.
  • The script sequences the use of business objects.

Business-object layer:

  • Receives messages from the Interface and Process layers.
  • The “how” of this layer has sublevels. For example:
    What do you do with a meter read?
        If the read is suitable for billing, then render a bill.
            How do you know a read is suitable for billing?
            How do you render a bill?

Successive answers to the “how do you…?” questions produce layers of functions that produce additional layers of functions within the business-object layer, beginning with the business policy and practice and ending with the technical details of the current implementation.
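
To make the “what”/“how” split a bit more concrete, here’s a minimal sketch of my own, in Python rather than PL/I, with hypothetical names (MeterRead, Account, bill_meter_read). The article gives no code like this; it’s just my reading of the layering: the process layer’s “function manager” sequences message sends, and the business objects supply the details.

    # My own illustration of the layering, not code from the article.
    # All names (MeterRead, Account, bill_meter_read) are hypothetical.

    class MeterRead:                       # business-object layer
        def __init__(self, value, usable):
            self.value = value
            self.usable = usable

        def suitable_for_billing(self):    # "How do you know a read is suitable for billing?"
            return self.usable

    class Account:                         # business-object layer
        def render_bill(self, read):       # "How do you render a bill?"
            return f"Bill for {read.value} therms"

    def bill_meter_read(account, read):    # process layer: a "function manager"
        # The script only says WHAT to do, by sequencing message sends;
        # HOW each step is done is left to the business objects.
        if read.suitable_for_billing():
            return account.render_bill(read)
        return None

    # Interface layer: an event (say, a meter read arriving) triggers the function manager.
    print(bill_meter_read(Account(), MeterRead(42, usable=True)))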

System tools used to build and operate CRIS-II:

  • IBM mainframe
  • MVS/ESA operating system
  • DB2 database manager
  • CICS (Customer Information Control System) online transaction manager
  • PL/I programming language

The overall system design borrowed many architectural concepts from Smalltalk-80 and Brad Cox’s work on Objective C.

Some technical details:

The system allowed untyped variables, used dynamic messaging (messages formulated at run-time), used encapsulated objects, and provided some metaprogramming, so that class information could be called up at run-time.

Characteristics

As in traditional OOP, an object in this system was an instance of a class, and implemented inheritance. All objects were descendants of Object.

Classes had both class and instance methods.

Methods were written in PL/I, and each method was its own separate executable. Methods were bound at run-time. They were known in the system by their selector name.

Classes were defined, and methods/behaviors were associated with classes in dictionaries that were laid out in entity-relation-attribute sequences. The dictionaries were loaded into memory on demand, where they could be accessed as objects.

As such, instance variables could be changed without recompiling any methods.

Objects communicated via message passing.

Dynamic messaging (formulated at run-time, like “perform” in Smalltalk) was supported, and used frequently.

Objects used “self” and “super” tokens to send messages to themselves, and to their superclass, respectively.

Non-OO entities used were strings, numbers, and characters, which were stored as native PL/I data types.
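
As an aside, the mechanism they describe (classes and methods kept in dictionaries, selectors bound at run time) can be sketched without any native OO language features at all. Here’s a toy version of my own in Python, using only dictionaries and functions; the Meter class and its selectors are hypothetical, and this only shows how message dispatch can be built from non-OOP stuff, not how CRIS-II actually implemented it in PL/I.

    # My own toy sketch: message dispatch built from plain dictionaries and
    # functions, with no native class syntax. Class/selector names are hypothetical.

    classes = {}   # class name -> {"super": superclass name or None, "methods": {selector: function}}

    def defclass(name, superclass=None, methods=None):
        classes[name] = {"super": superclass, "methods": methods or {}}

    def new(classname, **state):
        return {"class": classname, "vars": dict(state)}   # an "object" is just a record

    def send(obj, selector, *args):
        # Bind the selector at run time, walking up the superclass chain.
        cname = obj["class"]
        while cname is not None:
            cls = classes[cname]
            if selector in cls["methods"]:
                return cls["methods"][selector](obj, *args)
            cname = cls["super"]
        raise AttributeError(f"{obj['class']} does not understand {selector!r}")

    defclass("Object")
    defclass("Meter", "Object", {
        "reading":    lambda self: self["vars"]["reading"],
        "setReading": lambda self, v: self["vars"].__setitem__("reading", v),
    })

    m = new("Meter", reading=0)
    send(m, "setReading", 1234)
    selector = "reading"            # dynamic messaging: the selector is chosen at run time
    print(send(m, selector))        # -> 1234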

Object model

Since CRIS-II was designed as a multi-user system, objects for each user were stored in single-thread sessions. Each thread had a Context object, which was a container housing all of the objects currently in use for a user’s session. Context objects implemented methods for managing storage (storage allocation, synchronization, serialization, and storage reclamation).

Context objects:

  • could find every object they stored by class.
  • did not implement garbage collection.
  • stored objects in fixed-length chunks, which optimized performance, since the storage space could be reused by different objects.
  • enabled transaction management/versioning behavior

The overall object system was largely implemented in itself, but to improve performance, object storage was implemented in PL/I.

During execution, new objects were stored in a Context object. Storage grew and shrank, depending on how many objects were being used. At times, Context objects would close gaps in their management of memory by physically moving objects into contiguous cells.

Transaction management

New objects tended to be initialized with data stored in the DB2 database.

Before initializing a new object, object persistence behaviors checked the Context object to make sure the same instance was not already present. Each instance had a unique identifier. This allowed applications to ensure that the most recent versions of instances were returned.

When a new instance was placed in a Context, its state (instance variables) was saved in the database. This optimized update processes when object state was checked back into the database manager.

The system could create detailed audit trails in a standardized way. Control behaviors in the system often set up “before” and “after” values of instance variables in special Activity objects.

In many cases, application exceptions and side-effects were created when certain variables changed. An example that was used was when a service order status change might need to be transmitted to other systems. Identifying changes in a standardized way let the system abstract side-effect and exception handling into a superclass. This helped maintainers, if they needed to change the implementation of side-effects.

Multiple instances of a single user’s Context could exist. By saving Contexts at checkpoints, and reloading them, the system could rewind processes back to earlier states. A thread could also use this to pursue alternate execution paths.

The interface (UI) layer controlled the logical unit of work. When a business event completed, the interface informed its Context object. It then iterated over its encapsulated objects, sending “save” messages. Objects that had changed state serialized their state to the database.

The Context object interacted with the database in a sequence that optimized database response, and avoided deadlocks. The Context object carried out the implications of side-effects for events, and made sure current and saved states were synchronized.

When the end of a logical unit of work was reached (for example, when a company representative completed a telephone call with a customer), the Context object was sent a “clear” message to erase most of its objects and reclaim their storage space. Some objects, such as those that identified the current thread and its user, would only respond to a “purge” message.

This scheme created a rudimentary garbage collection system. It was also the largest source of bugs in the system. The authors said a real garbage collector “would have been invaluable.”
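
Here’s how I picture the unit-of-work cycle, as a rough Python sketch of my own (the Context, BusinessObject, and Account names are hypothetical stand-ins; the real system serialized state to DB2 via SQL):

    # A rough sketch of the unit-of-work idea, not the article's code.
    # All names are hypothetical; the dict "db" stands in for the DB2 database.

    class BusinessObject:
        keep_on_clear = False              # thread/user identity objects would override this
        def __init__(self, classname, oid):
            self.classname, self.oid, self.dirty = classname, oid, False
        def save(self, db):
            db[(self.classname, self.oid)] = "serialized state"   # stand-in for SQL
            self.dirty = False

    class Context:
        def __init__(self):
            self._objects = {}             # (class name, object id) -> object

        def register(self, obj):
            self._objects[(obj.classname, obj.oid)] = obj

        def find(self, classname, oid):    # avoid initializing a duplicate instance
            return self._objects.get((classname, oid))

        def save_all(self, db):            # end of a logical unit of work
            for obj in self._objects.values():
                if obj.dirty:
                    obj.save(db)

        def clear(self):                   # crude storage reclamation
            self._objects = {k: v for k, v in self._objects.items() if v.keep_on_clear}

    ctx, db = Context(), {}
    acct = BusinessObject("Account", 7)
    acct.dirty = True
    ctx.register(acct)
    ctx.save_all(db)                       # only changed objects are written back
    ctx.clear()                            # most objects are erased; "keep" objects survive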

Database-manager interface

Each business-object usually had a dedicated row in a database table. However, an instance of an active object contained instance variables that represented the object’s whole-part structure. Extensions to accessors let clients of objects retrieve initialization values, even if a logical unit of work had progressed past that state.

Persistent objects were descended from an abstract class that implemented data access behaviors using SQL.

The database-manager interface was a sublayer of the business-object layer, separating application concerns from the technical details of implementing actions. This allowed easy substitution of alternate implementations.

Object navigation

Navigational messages were used to travel around the whole-part structure of an object. Each object understood its component parts and implemented selectors for returning them.

In the database, foreign keys pointed to the parent in a whole-part relation, and were accessed using SQL statements. Each object included a method for navigating to its parent, or “whole.”

CRIS-II implemented a kind of “lazy evaluation.” When an object was initialized, all of its instance variables were initialized to null. The first time a request was made for its contents, only then was it initialized with actual values.

This behavior assembled, from encapsulated objects, the needed states. It reduced coupling, and eliminated most of the data-access code.
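
A tiny Python sketch of my own of that lazy-initialization idea (hypothetical table and field names; in CRIS-II the load would have been an SQL query against DB2):

    # Instance variables start out "null" and are only filled in, from the
    # object's database row, the first time something asks for them.
    # My own illustration; all names are hypothetical.

    class LazyObject:
        def __init__(self, db, table, key):
            self._db, self._table, self._key = db, table, key
            self._vars = None                   # "initialized to null"

        def get(self, name):
            if self._vars is None:              # first request triggers the load
                self._vars = dict(self._db[self._table][self._key])
            return self._vars[name]

    db = {"meters": {42: {"reading": 1234, "status": "active"}}}
    meter = LazyObject(db, "meters", 42)        # no database access yet
    print(meter.get("reading"))                 # the row is fetched here, on demand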

Why Brooklyn Union Gas needed an object-oriented system

The company’s customer information system managed field-service orders, cash processing, credit and collections, meter reading, billing and general accounting functions. The system processed one billion dollars in annual revenues.

The company needed to replace a system that had been implemented in the 1970s. They planned to use a data-driven design with integrated CASE tools. However, this seemed to lead to the same issues as the system they were trying to replace. The complexity grew larger than they expected. They decided on a different approach: that such a system is best thought of as a collection of simulations of the business and its environment.

Development of their new system took place between 1987 and 1989, involving Brooklyn Union Gas and Andersen Consulting employees.

Type system

The system permitted, but didn’t require, types to be specified for method variables. Explicit types improved performance, but though the developers explicitly measured the frequency of program errors during development, they did not find that this practice reduced the number of logic or resource access/allocation errors in the system.

With explicit types, a program can be correct only if the values present in the variable are of the specified type. This condition came at the cost of design/maintenance flexibility. Nothing in the team’s measurements of the development process showed that explicit types made error-detection efficient. Rather, they found the inflexibility this practice imposed on design was harmful. A better approach was to look at types as one of many assertions they could make about variables, at certain phases of the development cycle, and one of a variety of techniques the team could use to ensure efficient debugging.
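
To illustrate that “types as one of many assertions” idea, here’s a trivial sketch of my own in Python (the DEV_MODE switch and all names are hypothetical, not something from the article): the type check is just one assertion among several, used during certain phases of development rather than enforced everywhere by the language.

    # My own illustration of "types as one of many assertions."
    # DEV_MODE and all names here are hypothetical.

    DEV_MODE = True   # assertions active during a development phase

    def post_meter_read(account, reading):
        if DEV_MODE:
            assert isinstance(reading, int), "reading should be an integer here"   # the "type" assertion
            assert reading >= 0, "reading should never be negative"                # other, often more useful, assertions
            assert account.get("active"), "only active accounts take readings"
        account["last_reading"] = reading
        return account

    print(post_meter_read({"active": True}, 1234))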

The Development Environment

The detail and particularity necessary in object-oriented applications, and the system’s scale, made an extensive array of development tools essential. The authors said, “You cannot do object-oriented development on this scale with just a compiler and an editor.” Early on, they constructed their development tools based on traditional mainframe system tools, including IBM’s ISPF (Interactive System Productivity Facility, which provided what would now be called a “dashboard” interface for manipulating data sets and program modules), DB2, and the PL/I preprocessor. Other development software included a “screen painter” (which I take to mean a screen/form layout tool), a code generator, and a report-generation tool.

All of these tools worked with an expandable entity-relation-attribute dictionary that contained information about the applications that made up CRIS-II, and its environment. This central dictionary stored and managed all components for the system.

Re. the visual object environment, it provided means to browse object descriptions and methods. It provided cross-referencing of messages, so a developer could locate all methods that were senders or receivers of a message.

Often, members of the team located behaviors using a cross-class browser, which would find behaviors by partial matches of class and method names. They found this more useful than just being able to look for a class name in the hierarchy.

All team members had access to the dictionary, and used it to record design information, and changes to it. Once the system was installed into production, the dictionary continued to be used during maintenance cycles.

The dictionary was designed to maintain relational links between the interface components and the implementation components that processed event information. It also tracked system test scenarios, and the scripts to execute them. It supported code generation and testing tools, and the configuration of object behaviors.

Their experience was that only a small core of the development team needed extensive training in the structure of this object-oriented infrastructure, and methods of building and maintaining objects. Most of their application developers worked in frameworks where they didn’t have to understand these techniques completely.

Performance and reuse

They found that though message-passing is not as performant as procedure calls, the value of this structure far outweighed the costs. As Hoare would say, “Premature optimization is the root of all evil.” They found that putting too much emphasis on low-level optimization actually decreased the system’s overall performance. As an example, late-bound messaging led to better database-access performance, which created an overall performance gain.

They believed that performance assessments should include the consideration that maintaining tightly-coupled systems creates greater complexity in design, and all the costs that go with that, which overtakes the performance benefits of procedure invocation.

They also found that using their object scheme resulted in greater component reuse than was typical in business systems.

Lines of code by layer

They laid out the breakdown of how much code was in each functional layer:

  • Interface layer: 27%
  • Process layer: 5%
  • Business object layer: 65%

Business objects represented more than half of the total. The largest business object contained no more than 6% of the total code, showing that the functionality was well-distributed.

The process layer metric showed that this architecture let the application say in a small amount of code “what” was to be done.

They described what they called their “hybrid” interface layer (part object-based, part application-based UI) as “bulky,” but the average component size was 100 lines of code. The layer had more than 1,000 components. It was built using integrated CASE tools, which tended to generate duplicate code in each component. They thought this layer would’ve been much smaller had the components been implemented as objects that could get code reuse through inheritance.

They said it’s good to think of systems like CRIS-II as “small economies, rather than large computer programs.” In other words, an evolving system that’s built through experience.

They further said,

Such systems are so large, they actually alter their environment.

System developers must recognize that specifying more than a few steps ahead in this trajectory is simply impossible. The essential requirement for business systems is orderly evolution over time, with minimal constraints on the design trajectory.

If the system models the real environment, and the technical artifacts it needs are well insulated from each other, changes to the system are likely to exhibit ‘proportionate effort’ characteristics: Things easy to ask for will be easy to do.

They extended their entity-relation-attribute dictionary to include design items, and then developed tools that operated on that data, but said they thought it would’ve been better to use the object system itself to carry out what the dictionary did: “Our experience shows enormous benefit from having fully bootstrapped environments.”

The class description, inheritance, and messaging mechanisms were built within their object system, but they were implemented outside of PL/I.

They recommended that, if object-oriented extensions to existing languages are desirable, programmers should think about constructing them using a language-neutral tool or system that would describe the class structure and messaging mechanism, and that they should limit language extensions to bindings to class descriptions, method implementation, and invocation. “The Art of the Metaobject Protocol,” by Kiczales, des Rivières, and Bobrow shows how that might be done, as does the System Object Model described in “OS/2 Technical Library System Object Model Guide and Reference.”


Another source cited in the article that talked about this system was “Brooklyn Union Gas: OOPS on Big Iron,” Harvard Business School case study N9-192-144, Boston, 1991.


This is one of a series of “bread crumb” articles I’ve written. To see more like this, go to the Bread Crumbs page.

A good summation of the VPRI STEPS project

I learned about STEPS from the Viewpoints Research Institute (VPRI) in about 2008. Its goal was to create a usable office system, from the ground up (including the basic system software to support it: kernel, memory access, I/O, storage, networking, and graphics system) in 20,000 lines of code. This benchmark was based on the lines of code in the Squeak system, which was about 200,000. So, the idea was to reduce the code required by a factor of ten.

I read, I think, three of their reports on this project, and while elements of them were fascinating, some of which I could grasp, I had trouble understanding what their overall approach was, not because they didn’t explain it clearly. It just wasn’t in my conceptual frame.

I happened upon this video by Arthur Gleckler at BALISP (the Bay Area Lisp and Scheme users group) from 2023, and it really helped explain the strategy VPRI used in creating what became the “Frank” system. Gleckler’s presentation on STEPS is 43 minutes long. After that, he got into what he was working on, which I took to be tangential.

The point of his presentation on STEPS was to say that the project tried to leverage the power of programming language translation to lower the line count, stacking many invented programming languages on top of each other, creating a kind of “language pipeline.” Gleckler said they got a lot of mileage out of that approach, but ran into some problems when trying to make it work for conventional PC hardware.

You can see Alan Kay demonstrating “Frank” at about 12 minutes into his speech at the ACM’s celebration of Alan Turing’s 100th birthday in 2013.

One of the fascinating elements in the STEPS reports was a different object-oriented structure. Gleckler didn’t get into this, since he was using his discussion re. STEPS as a lead-in to a project he’d been working on.

Alan told me a little about it back in the late 2000s, that he’d been thinking about changing how he implemented OOP since the late 1970s. He said that Smalltalk used a “push” method of messaging (reference an object, and send it a message), and that after working on a Smalltalk project for the intelligence community in our government, which more resembled a spreadsheet, he thought about how objects could communicate in a similar fashion to spreadsheet cells, which use a “pull” action.

VPRI used a “pull” method in STEPS. To do this, they created a “world” or “environment.” Each object tells the environment what it needs to produce results, and the environment sends a message to it when the results it needs are ready. In addition, each object tells the environment what result(s) it can supply. When an object has everything it needs, it returns results to the environment, which then passes them along to the subscribers of that kind of result.

A description he used that felt impossible, at first, was that, “Objects don’t send messages. They only receive messages.” To make sense of this, it could be that the environment automatically gets an object’s “needs,” and what results it can return with messages that it sends to each object, when a new version is introduced. Or, maybe these descriptions are just part of the object’s metadata, and they’re read automatically. Either way, this would relieve the object from ever needing to send a message.

In this model, objects act passively, rather than initiating actions. Another term he used for this is a “subscriber” model.

The environment acts as a mediator and coordinator in matching results with “needs.” In addition, objects are transferrable to other systems on a network. So, system processing can be distributed.
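
Here’s a toy sketch of how I understand the “pull”/subscriber idea (my own illustration, not VPRI’s code; the Fahrenheit and Display objects and the needs/provides attributes are hypothetical): each object declares what results it needs and what it can supply, and the environment delivers inputs and routes outputs, so objects never send messages on their own.

    # My own toy illustration of the "pull"/subscriber model; not VPRI code.
    # All object and attribute names are hypothetical.

    class Environment:
        def __init__(self):
            self.subscribers = {}          # result name -> objects that need it

        def register(self, obj):
            for r in obj.needs:
                self.subscribers.setdefault(r, []).append(obj)

        def publish(self, name, value):
            # Deliver a result to everything that declared a need for it,
            # then route whatever those objects produce in turn.
            for obj in self.subscribers.get(name, []):
                outputs = obj.receive(name, value)     # objects only *receive* messages
                for out_name, out_value in (outputs or {}).items():
                    self.publish(out_name, out_value)

    class Fahrenheit:                       # hypothetical object
        needs, provides = ["celsius"], ["fahrenheit"]
        def receive(self, name, value):
            return {"fahrenheit": value * 9 / 5 + 32}

    class Display:                          # hypothetical object
        needs, provides = ["fahrenheit"], []
        def receive(self, name, value):
            print("display:", value)

    env = Environment()
    env.register(Fahrenheit())
    env.register(Display())
    env.publish("celsius", 100)             # -> display: 212.0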

What I remember is that one of the reports described how they tested Frank in a distributed fashion, using some pretty powerful server hardware to get the performance they needed. But I always knew, since this project was described as “reinventing the personal computer,” that the idea was to make it something a wide public could use.

Gleckler explained that one of their goals was to get it running on conventional PC hardware. This was a stumbling block for the project, because optimizing Frank to work faster on lower-speed hardware made it harder for them to lower their line count.

From my own experience, this makes sense. It’s often the case with conventional processors that optimization techniques require producing more lines of code, rather than less. One reason I know about is that branching tends to be “expensive” with regard to performance. So, processors perform better if less branching takes place, and a way to do that is to “unroll loops,” for example, where rather than having the processor branch a bunch of times to execute a loop, you “lay out” the loop with repetitive code blocks, without branching, that the processor can cache, and execute linearly. A related compiler technique, inlining, makes the same trade by substituting a function’s body at its call sites, rather than branching to it. Both take up more memory–more lines of code–but are more efficient for the processor.
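
As a toy illustration of loop unrolling (Python only to show the shape; the real payoff is at the machine-code level, where fewer branches matter, and a compiler would normally do this for you):

    # Rolled vs. unrolled: same result, fewer branches per element in the second.

    def sum_rolled(xs):
        total = 0
        for x in xs:          # one branch back to the top of the loop per element
            total += x
        return total

    def sum_unrolled(xs):     # assumes len(xs) is a multiple of 4, to keep it short
        total = 0
        for i in range(0, len(xs), 4):
            total += xs[i] + xs[i + 1] + xs[i + 2] + xs[i + 3]   # 4 elements per branch
        return total

    print(sum_rolled(list(range(8))), sum_unrolled(list(range(8))))    # 28 28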

Work on STEPS ended in about 2012, and I finally saw Alan mention a couple years later about how they didn’t make their benchmark. I don’t know what the line count came to. I assume they made some improvement, since I checked in with him from time to time, prior to 2012, and he sounded optimistic.

Something I eventually understood from talking to people familiar with the project was that the one thing they didn’t work on, perhaps because they didn’t have the time/funding, or expertise on the team, was designing hardware that would run Frank efficiently. Everything they tried to do was based around dealing with conventional PC hardware.

I’m not sure if Squeak contributed to the performance problem, since I kept reading in their reports that they were trying to get away from using it. I remember reading that they were working on developing a new “substrate,” which was ironically named “Nothing” (As I remember, it came from a question, “What are you working on?” “Oh, Nothing…”), which they hoped would replace Squeak, but from listening to Alan talk about it post-project, it sounds like that didn’t pan out. Maybe, again, they ran out of time/funding.

On a few occasions, I’ve seen people ask, “Where is a version of Frank I can run?” I finally found Yoshiki Ohshima on Quora a while back. He worked on STEPS, and in a discussion thread, he pointed people to his own copy of an “as-is” version of Frank:

Frank-170908.zip

Someone in the thread said it looked like it required macOS to run. However, that would only apply to the version of the Squeak VM in the zip file. Squeak is platform-agnostic. So, all you should need is a version of the VM that is compatible with your operating system. From what I remember about the reports, the Frank code was all in the image.

It looked like some “upgrade” setup was also necessary, at least with the (Mac) version of the VM in the zip, because it was 32-bit. Macs haven’t supported 32-bit for many years. The out-of-date VM in the zip is called “Cog.” Reading up on that, Cog was a Squeak optimization project that is now the standard Squeak VM. So, my guess is using a current version of Squeak should work.

This is one of a series of “bread crumb” articles I’ve written. To see more like this, go to the Bread Crumbs page.

Forth: A beginning?

I searched for many years for something that felt like a baseline, that I could present as a basis for learning and exploration re. system programming, and system design. I got into the Forth programming language on the Atari 8-bit a couple years ago, using APX Forth. As I remember, what really inspired me to take the plunge was I was interested in learning how a stack machine works. I had taken up the task of building a “Parr VM” that I got working, and I did some work with that, but it felt frustrating, because I didn’t have any mnemonics to work with. I was programming in straight bytecode. I wanted something that had a more recognizable language. Forth is a stack-based language. I thought, “The best way to learn something is to immerse yourself in it.” So, Forth seemed ideal for that. I’d say it’s worked out like I expected. While it was a difficult adjustment, I’ve appreciated how simple the Forth system model is, and how expandable/adaptable it is.
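
For a sense of what I mean, here’s the kind of thing I was after: a tiny stack machine, but with readable mnemonics instead of raw bytecode. This is just a throwaway sketch in Python (the mnemonics are my own invention), not the Parr VM or Forth itself.

    # A tiny stack machine with mnemonics instead of raw bytecode.
    # My own sketch, for illustration only.

    def run(program):
        stack = []
        for op, *args in program:
            if op == "PUSH":
                stack.append(args[0])
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "MUL":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            elif op == "PRINT":
                print(stack[-1])
        return stack

    # Forth-like postfix: 2 3 + 4 *  ->  20
    run([("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",), ("PRINT",)])

Forth essentially gives you this interactively, along with a dictionary of words you can extend, which is a big part of its appeal.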

The main text I used for learning it was “Starting Forth,” by Leo Brodie.

When something appears broken, I consider it a challenge to try to fix it. Most of what was in the book worked fine, but there was plenty that was documented in the book that didn’t appear to work in APX Forth. This makes sense, since this version of FIG-Forth came out in 1981, and Forth has changed some/expanded since then. It turns out there are different kinds of Forths, different standards. So, I took it as a learning exercise to try to implement some things that were missing in APX to fill in these discrepancies, and that was very rewarding.

Earlier this year, I started working on trying to create a video series on APX Forth, since Thom Cherryhomes had made a series on it many years ago, but only covered the very basics, and then jumped into some advanced stuff you could do with it, but without showing how he did it (that series can be viewed here). I used his series as part of my learning process. I wanted to do a series that got into what “Starting Forth,” and another book called “Forth on the Atari: Learning by Using,” covered, which should present enough so you can see how Cherryhomes did some of what he did.

I don’t do this to say that people should work on this same platform. I hope some will find it inspiring, and choose their own path with it. There are, of course, versions of Forth that work on modern systems. Perhaps people will find that’s a better starting place for them (I’ve been using an Atari emulator on a modern system, which you’ll see in the videos).

Edit 1/20/2026: I came upon a guide that I think will be helpful to Forth beginners in getting acquainted with APX Forth, called “Using Fig-Forth on the Atari 800,” by Stephen Cohen. A critical first step is setting up your own Forth disk. Cohen guides you in how to go about doing this. (The APX Forth manual does a fair job of doing this, but Cohen’s guidance helps with setting up a typical configuration that most will find helpful.) He then goes into talking about the Forth tools you’ll use most often, among other things.

Note: I’m in the process of creating this video series. I’m going to post what I have for now, and I will add to it as I get more parts done.

I made an error on the slides for Part 2. I was correct in saying that what was “true” in the begin-while-repeat and the begin-until loops was any non-zero value, but on the slides I said “>0”. They should’ve said “<>0” (not equal to zero).

In Part 4 and Part 5, I referred to a library I wrote (with a few words written by other authors) that brings many words referenced in the book “Starting Forth” to APX Forth. See: “Starting Forth” upgrades.

There’s an error in the slide where I talk about TEXT. I was correct in the audio. TEXT stores its captured string in PAD (the slide says it’s stored at the top of the dictionary).

Related media:

This is one of a series of “bread crumb” articles I’ve written. To see more like this, go to the Bread Crumbs page.

What really happened to object-oriented programming?

I saw a question on Quora asking something like, “Why did Alan Kay’s OOP always fail, and the OOP that was implemented as C++, Java, Delphi, etc. always succeed?” I wrote an answer for this, but the question was deleted before I could post it. I sometimes have the experience, as a writer, of realizing things I hadn’t pulled together until I wrote about them. This was one of those times, and this is what I came up with.


There was success with Alan Kay’s OOP approach. That’s why languages like C++, Java, etc. were called “OOP”: certain players in the tech industry were drafting off of that success. He’s the one who came up with the term. My estimation is that it wasn’t a dominant sales effect. Rather, the success was in what was able to be built with it: not the first graphical user interface, but one of the first; one of the first desktop interfaces; a UI that was relatively easy for students to use, when this was a rarity; a system where you could look up all the available entities (the system was self-referential), and modify the system while it was running; where you could combine text with graphics, do animation in a fairly understandable environment, and do desktop publishing and multimedia, all on a system that was years ahead of its time (getting beyond the current state of the art was one of the objectives).

Part of the problem was about comprehension. Most software developers haven’t understood what OOP is, or why they’d want to use it. A good part of the reason for that is what the industry and academia adopted as “OOP,” which had extremely little to do with what Alan talked about. He also identified many years ago that a problem lies in how technologists view programming, as a “gear and cog” system, where everything is tightly bound together.

Secondly, the prime example of OOP (Smalltalk-80) was taken straight out of the lab, in its system configuration, was adapted to run on top of a conventional operating system, and was taken by software developers as just a programming language, which “for some weird reason” had a GUI. It was badly categorized by technologists.

A significant reason for this was technologists had taken as gospel a “layer cake” approach to system architecture. Most still do. It’s seen as expedient. To get from pitch to deliverable, it’s thought that there are certain parts of computer systems that just have to be off-limits, because it’s too dangerous to go mucking around in them, and too time-consuming to contemplate, but as Alan said many years ago,

When you think programming is small, that’s why your programs are so big!

(See Redefining computing, Part 2)

To put this quote in context, it’s also valuable to take in his broad view of programming (from an e-mail to me):

… to me, most of programming is really architectural design (and most of what is wrong with programming today is that it is not at all about architectural design but just about tinkering a few effectors to add on to an already disastrous mess).

From what I’ve researched, there were several factors that put Alan Kay’s OOP in the rear-view mirror, and promoted languages like C++ and Java to becoming “OOP” in the tech industry’s consciousness.

  • Smalltalk was at the time resource-hungry, particularly with memory, which was much more expensive than it is now. C++ was resource-hungry, as well, but not nearly as much. Most microcomputers in the 1980s didn’t have enough memory to run a full Smalltalk system. A few pared-down versions of Smalltalk were created to run on Apple II’s, and DOS PCs. A full version was available for the Apple Lisa 2, but that cost several thousand dollars each (about $15,000 today). However, the free memory that was left didn’t allow for applications using these implementations to acquire much functionality. Expensive, top-of-the-line micros that came along in the late 1980s and early ’90s could get something substantial done. I’m thinking particularly of the Apple Mac II, and its descendants.
  • Smalltalk was sold commercially, almost exclusively, and it was expensive. Other languages were also sold commercially, had a lower price, and there were free versions of them that new programmers could learn to use. There was Gnu Smalltalk, which was free, and was available in the early 1990s. It was the only free version I knew of until I found out about Squeak (released in the mid-1990s).
  • C++, and later Java, had an architecture that felt like a natural fit for graphical interfaces, when those started becoming popular on micros in the 1990s. Smalltalk had that, too. It’s where the model-view-controller (MVC) pattern was invented. Smalltalk had some popularity in the early ‘90s. However, it was more expensive than other programming environments, and it was difficult for most programmers to understand, since they had been trained on procedural, early-bound languages. Since it ran on a VM, just like Java would, it likely ran slower than C++, on the already-slow hardware.
  • When the internet became popular in the mid-90s, commercial Smalltalk vendors, who were the primary distributors of this system, saw it as an external resource that Smalltalk might access, but they didn’t think it should be adapted toward what the internet made possible. They still saw it as a desktop system. Sun, on the other hand, embraced the internet with full gusto, with Java. C++ was a popular language used for developing web applications (along with Perl), when such things were a new idea.

One gets a sense, looking at this history, that a significant reason “OOP” became associated with C++ was because it ran better on less powerful hardware than Smalltalk could. Most programmers didn’t encounter real OOP, because not much practical could be run with it on the low-powered hardware they used, but it could with C++.

Even after the hardware became capable of running a full Smalltalk system, the focus of technologists’ attention became the internet, and the companies selling Smalltalk didn’t want to have much to do with the internet. This didn’t change until the early 2000s, it seems thanks to commercial Smalltalk vendors losing a lot of cachet and market share, and the FOSS Squeak version of Smalltalk gaining popularity among a younger generation of Smalltalk programmers that had no qualms about using it for the web. In “tech-years,” though, Smalltalk arrived at the internet party well after it was over.

Since follow-on languages, like Java, adopted a similar style to C++, the “OOP” label was extended to them.

One way to look at it is Smalltalk was the victim of bad timing, but this oversimplifies the issue. I think the larger one was nearly everyone missed the point. I include myself in that criticism.

As I’ve talked about before, OOP is really a method of system organization for semantics. Smalltalk was developed in a lab (Xerox Parc) with the idea of experimenting with this in a simulated environment, which included programming low-level computer system functions (though a significant amount of that was done in microcode). Smalltalk was used as the test bed for these experiments. The researchers hoped that those implementing commercial systems would learn lessons from those experiments on how to design system architectures that would scale software in size and breadth with stability. Instead, when Smalltalk made it “into the wild,” the way this effort was interpreted was to freeze the 1980 release of Smalltalk in amber, and take it as gospel.

I don’t think technologists understood what OOP was trying to achieve. If they appreciated it at all, they tried to see how useful its last released architecture was toward what they were tasked with building. Not enough effort in the tech sphere was dedicated to learning about what it represented, and continuing with these experiments in system architectural design, which could have one day supported an engineering approach for building object-oriented systems, such that a broad population of software engineers could understand it.

When I think of this history, I often reflect on the failed effort in the 2000s to develop what was called the semantic web. Alan’s notion of OOP is really a semantic networking model that was simulated in Smalltalk.

When one reflects on this, it leaves a big question mark about how a software engineering approach could be achieved using this organizational method. I’ve read Alan talk about a software project he consulted on with the Brooklyn Union Gas Company, where it was used. The project was documented in 1993: Object-oriented development at Brooklyn Union Gas.

This is one of a series of “bread crumb” articles I’ve written. To see more like this, go to the Bread Crumbs page.

Re. software engineering: Seeing the wheel

Many years ago, I heard Alan Kay talk about software that’s commonly used as being “a broken wheel.” I had trouble understanding what he meant by that, but answering a question on Quora recently helped clarify it for me. I think it’s tied to the realization programmers talk about having “when you learn Lisp”: Lisp is a kind of “wheel,” and what most programmers have been doing is creating “broken” versions of it.

I say this from the standpoint of someone who worked on a minimally complex client/server software project in the mid-1990s. It was what I’ve heard other software engineers call a “data-driven system.”

It had what I even regard today as an odd design. It used a relational database to store and retrieve executable code, for a non-database execution engine. This alone likely won’t seem odd to many software engineers, because surely there are development projects around today that have used this scheme. What was odd was the code was arranged in the database tables like a parse tree; a decidedly non-relational structure (at least as far as a relational database is concerned). It had what we would call today bytecodes, stored in the database. And executable code wasn’t the only thing stored in it.

The application platform the software house had created was for data entry. A reason it was created was the client application ran on wireless Intel-based tablets (big hunky things at the time), and the only OSes that could run on them were MS-DOS and Windows 3.1. They didn’t have a lot of memory. The primary purpose of the platform was to prompt the user with forms to fill out. The data they entered went into the same database (in other tables, away from where the application code was stored, but with back-references to the forms). The flow of the application was directed by the code in the database. It would test values for validity before going to the next form, and would at times show a different follow-up form, depending on the input.

This code was run by an ad hoc VM that would iterate through the code in the database, present forms to the user, as specified by the code, collect the input values, and store them. This data would then be transmitted to, and received from a server, according to a schedule.
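
To give a feel for the shape of that design, here’s a much-simplified sketch of my own (hypothetical form names and opcodes; the real system stored bytecodes in relational tables and ran them with an ad hoc VM written in C):

    # A much-simplified sketch of a data-driven form flow; not the real system.
    # Form names, opcodes, and the "answers" stand-in for user input are all hypothetical.

    forms = {
        "age":   {"prompt": "Age?"},
        "adult": {"prompt": "Occupation?"},
        "minor": {"prompt": "School?"},
    }

    # "Code" rows: show a form, validate/branch on the input, show the follow-up form.
    program = [
        ("SHOW", "age"),
        ("BRANCH_IF", "age", lambda v: int(v) >= 18, "adult", "minor"),
        ("SHOW_CURRENT",),
    ]

    def run(program, answers):
        collected, current = {}, None
        for row in program:
            op = row[0]
            if op == "SHOW":
                form = row[1]
                collected[form] = answers[form]          # stand-in for prompting the user
            elif op == "BRANCH_IF":
                form, test, then_form, else_form = row[1:]
                current = then_form if test(collected[form]) else else_form
            elif op == "SHOW_CURRENT":
                collected[current] = answers[current]
        return collected

    print(run(program, {"age": "34", "adult": "engineer", "minor": "n/a"}))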

A situation came up where we wanted to add a feature to forms, a different kind of control, but the data collection structure we had would not support it. The big problem we had with trying to add the feature was we didn’t have any analysis tools that would help us reason about the changes we would have to make for it to work. We found it really difficult to try to think about what would need to be changed. Our system also didn’t have any operations that would help us modify the interpreted application. We just had a “brute force” tool written in C that inserted the bytecode for the whole application into the database (and a problem with this approach was, like any large piece of software, it also developed bugs…). So, we ended up not adding this feature.

We had a problem where the customer hated the form interface we had, and nothing in the VM made it easy for us to change that. The whole point of the design of the system was to create a candidate for acceptance, but the structure of the system had no flexibility to help us change it to the customer’s satisfaction. The company had painted itself into a corner. The only solution that was thought valid was to ditch the whole platform, and rewrite the client application in C, for Windows 3.1.

I’ve looked back on this as a tragedy, because a lot of effort was put into the platform, and I liked what they were trying to achieve. Though, of course, any system developed for customers should serve their needs, and the effort carried out by the company up to that point was inadequate to that goal.

What I saw, once I spent a good amount of time working with Lisp, many years after this, was that aspects of what we were trying to do in that project were already in Lisp. Further, it had capabilities that we didn’t have in the VM the company had created, because we didn’t think to add them. We had no concept of what they would’ve been. We didn’t know they were possible, and we didn’t know why we would need them.

This background came into focus for me with the following Quora answer I wrote to the question:

Why do some developers say LISP changed how they think about programming?

After learning Lisp, I came to understand Philip Greenspun’s Tenth Rule:

Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.

It’s a realization that many complex software engineering projects are actually trying to achieve what’s already in Lisp, but doing a bad job of it. To make an analogy, they’re making things that sort of resemble “wheels” (broken ones, at that), which you can recognize if you squint real hard, but they don’t realize they’re doing that, because they don’t know what a “wheel” is. In fact, showing one to them would seem pretty alien. I don’t say this to belittle them. I was once in the same boat. The difference is I took some time to understand it.

I kid (a bit)…

The point isn’t that we should all be using Common Lisp. It’s that Lisp has something important to teach us re. what programming is about. Not all of our programming tools do a good job of allowing that to be expressed. That doesn’t make them bad tools that shouldn’t be used at all, but it has caused me to re-evaluate how they should be used, or at least what can be expected from them.

I also wrote a follow-up answer that goes deeper into this issue, with the question:

Why do some developers consider LISP more of a “way of thinking” than just a programming language?

For me, that’s best expressed by what Alan Kay said about how Page 13 of the “Lisp 1.5 Programmer’s Manual” is the “Maxwell’s Equations of Software.”

A Conversation with Alan Kay

SF If nothing else, Lisp was carefully defined in terms of Lisp.

AK Yes, that was the big revelation to me when I was in graduate school—when I finally understood that the half page of code on the bottom of page 13 of the Lisp 1.5 manual was Lisp in itself. These were “Maxwell’s Equations of Software!” This is the whole world of programming in a few lines that I can put my hand over.

I realized that anytime I want to know what I’m doing, I can just write down the kernel of this thing in a half page and it’s not going to lose any power. In fact, it’s going to gain power by being able to reenter itself much more readily than most systems done the other way can possibly do.

All of these ideas could be part of both software engineering and computer science, but I fear—as far as I can tell—that most undergraduate degrees in computer science these days are basically Java vocational training.

The Lisp 1.5 Programmer’s Manual

On Page 13, pay attention to the definition of evalquote, and that of the functions it uses. McCarthy explained these definitions on Page 14.
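
To give a rough sense of what that half page contains, here is my own abbreviated paraphrase in modern Common Lisp, named my-eval and my-apply so as not to collide with the built-ins. This is not McCarthy's exact code, and it leaves out most of what the manual covers (keeping only quote, cond, lambda, variable lookup, and a couple of primitives), but it shows the self-describing character Kay is pointing at: eval and apply defined in terms of each other, in the language they evaluate.

;; A loose, abbreviated paraphrase of the page-13 idea, not McCarthy's
;; exact definitions: an evaluator written in the language it evaluates.
(defun my-eval (e env)
  (cond ((atom e) (cdr (assoc e env)))                 ; variable lookup
        ((eq (car e) 'quote) (cadr e))                 ; (quote x) => x
        ((eq (car e) 'cond)  (eval-cond (cdr e) env))
        (t (my-apply (car e)
                     (mapcar (lambda (a) (my-eval a env)) (cdr e))
                     env))))

(defun my-apply (f args env)
  (cond ((eq f 'car)  (car (first args)))              ; a couple of primitives
        ((eq f 'cons) (cons (first args) (second args)))
        ((atom f)     (my-apply (my-eval f env) args env)) ; named function: look it up
        ((eq (car f) 'lambda)                          ; ((lambda (vars) body) ...)
         (my-eval (caddr f) (pair-up (cadr f) args env)))))

(defun eval-cond (clauses env)
  (cond ((null clauses) nil)
        ((my-eval (caar clauses) env) (my-eval (cadar clauses) env))
        (t (eval-cond (cdr clauses) env))))

(defun pair-up (names vals env)
  (append (mapcar #'cons names vals) env))

For example, (my-eval '((lambda (x) (cons x (quote (b)))) (quote a)) '()) returns (A B). The striking thing is how little machinery it takes before the evaluator can, in principle, evaluate its own definition.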

This way of thinking doesn’t only have to exist in Lisp. Alan’s development of Smalltalk at Xerox PARC was partly an effort to improve on Lisp, since he saw that so much of Lisp programming depended on special forms, which change the normal evaluation scheme. This led to inconsistent semantics that he thought were not clear to the programmer. He wanted a way in which semantics could still be changed, but could be better expressed and understood. Whether Smalltalk achieved that is debatable.

It seems like the advance with Smalltalk was the ability to add keywords to the evaluation process, to (I guess) more clearly express how arguments will be evaluated. There were some other goals in the mix, as well.

On this matter, the contrast is between, for example, in Lisp,

(if (equal a 1) (grow joe 100))

or

(cond ((equal a 1) (grow joe 100)))

where the normal expectation for evaluation is that all of the arguments will be evaluated first, and then the function called, which is not how these forms operate; and, in Smalltalk-72,

if a = 1 then joe grow 100

or in Smalltalk-80,

a = 1 ifTrue: [joe grow: 100]

where the spirit of metacircular evaluation is still preserved.
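
To make the "special form" point a bit more concrete, here is a toy Common Lisp sketch of my own (it is not from either language's implementation, and grow, shrink, and joe are just hypothetical names carried over from the examples above). An ordinary function has all of its arguments evaluated before it runs, so it can't act as a conditional; a macro, like a special form, steps outside the normal evaluation scheme by rewriting the expression before anything is evaluated:

;; A toy illustration. If "if" were an ordinary function, both branches
;; would be evaluated before the call, so
;; (bad-if (equal a 1) (grow joe 100) (shrink joe 100))
;; would both grow and shrink joe before the "choice" is made.
(defun bad-if (test then else)
  (if test then else))

;; A macro rewrites the expression before evaluation, so only the chosen
;; branch ever runs. This is the kind of semantic change special forms
;; (and macros) allow, and the kind Kay wanted expressed more clearly.
(defmacro my-if (test then else)
  `(cond (,test ,then)
         (t ,else)))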

A point he made to me many years ago was that “most of programming is architecture,” and that this is something most programmers (not to mention software tool makers and software development houses) often don’t realize, to our detriment.


This is one of a series of “bread crumb” articles I’ve written. To see more like this, go to the Bread Crumbs page.

On taking first steps into a new orientation

This is a follow-up to my post On improving a computing model.

My posting to this blog has slowed down a lot over the last few years. One reason is that I’ve noticed some significant problems in “our world” that had been off my radar for many years, and I decided to carve out a significant amount of my attention for them. A second is that I’d reached a point where I felt my reorientation project was largely done, and it was time to put up or shut up, in a sense: get on with trying to learn how to move in the new direction I’ve wanted to go in computer science. So, I have been devoting my attention to learning some things in depth, rather than writing about them.

For me, it’s meant learning basic things that have been implemented at the system level inside computers, by building my own versions of them.

A significant part of this has been getting over feeling intimidated by the low level in which I’m working. The way I’ve gotten past that is to work with a simple machine.

To relate this, imagine you’re like a first grader learning about your world. You want simple things you can manipulate and use, not too complicated, with the assumption that as you learn and develop, you’ll outgrow them and move on to more complex things. For me, what’s felt most comfortable to start out with is going back to an Atari 8-bit computer I used when I was a teenager. It’s an architecture with which I had some familiarity (this helps), and it’s felt like a challenge to get to know it in depth. So, I think I’ve found a good balance between “comfort and stress” to promote the learning I’m trying to achieve.

I’ve found that doing this has also helped a lot with learning some new ways to think about programming that would’ve been difficult for me otherwise. The main reason is that interesting, thought-provoking, and useful (but obscure) programming languages running on modern systems have so much that can be “stepped on.” They have so many ways they can be “broken,” because so much has already been developed in them, and the systems they run on have so many interdependencies, that you feel like you’re walking on eggshells trying to develop your knowledge of the basics. There’s the sense that “the basics are already done; you just have to learn how to use them.” A significant reason for this is that most people in the ecosystems around these languages are not students doing what I’m trying to do. They’re essentially engineers who have some goals in mind, and they see the programming language, and the library around it (which is most of what makes these languages function as expected), as a toolset with which they need to become proficient. In short, these environments are designed for professionals. That’s not a good environment in which to ask questions and experiment, particularly for someone new to this realm.

What I’ve really liked about working in similar languages on the Atari (and whether one does this on an Atari or a different system is irrelevant) is that their designers were forced to keep things simple and uncluttered, because they had a slow processor and a small amount of memory to work with. Regardless of the reason, this has had the effect of creating an open field, where you don’t have much you can “step on,” because there’s so little that’s already there. I’ve found this to be a very good environment in which to learn about these languages, because I can experiment without fear that I’m going to break something.

I’m sure back in the day, the thinking was that these simple, small environments were meant for kids, and could be used by educators for some purpose. The features that are “pushed forward” in the documentation seem oriented toward that. Nevertheless, I can see from working with them that they are not put together like kids’ toys. They are competent, faithful implementations of their languages that can, at least in principle, do complex things without having to be hacked. The only real limitation is the amount of memory they can use. I have occasionally bumped up against that, which has sometimes prompted me to look for alternatives to the “simple machine.”

My point in talking about this is that a significant part of going forward in a new direction is recognizing where you’re at, in terms of skill, and working with what works for you, psychologically. Find an environment that is most conducive to you feeling like you can move forward. I think starting with what is, for you, a simple environment is best, whatever you find that to be; perhaps something like a Raspberry Pi or an Arduino. Or, there are those who seem to feel, like me, that going retro is most comfortable, using old 8- or 16-bit systems, either with real hardware or with emulators, like I’ve been doing.

I’ve plugged myself into a community interested in retrocomputing, because it interests me, and I like the help I can get from it (I like to help the people in it, too), even though the people in it are there for a very different reason than I am.

I have wondered from time to time whether readers of this blog would be interested in me covering some of the projects I’ve worked on with the Atari, or whether that would be considered too elementary. With the exception of one of these projects, I don’t think they’d be that exciting to talk about, because like I was saying, it’s pretty basic stuff.

In terms of what I’ve used for what I’ve been describing, I’ve worked in Interlisp/65XL (a dialect of Lisp; interestingly, a very watered-down implementation of Interlisp-D from Xerox, which is now called Medley), Turbo Basic (a version of the Basic programming language), 6502 assembly language, and Fig-Forth 1.1.

This is one of a series of “bread crumb” articles I’ve written. To see more like this, go to the Bread Crumbs page.

The story of the first programmable computers (from Konrad Zuse)

Hat tip to Emmanuel Florac on Diaspora.

Konrad Zuse gave his own account of how he developed his programmable computers in Germany. The historical significance is that his Z3 model, completed in 1941, was the first programmable computer. He gave some technical details about the Z3: what its capabilities were, and some design considerations.

The Z3 was electromechanical, using relays. It could not store programs. Instead, it carried out programs step by step from punched tape (he frequently punched programs into discarded film rolls), using the machine’s own instruction code. (Plankalkül, or “plan calculus,” the high-level programming notation Zuse designed, was a separate, later project, not the Z3’s machine language.)

A note about pronunciation. Even though it’s common to pronounce his last name as “Zoose,” it’s pronounced “T’Sooz-uh.”

A natural question, given the time period and location where he did his work, is whether he worked for the Nazis. The answer is not directly. The Z3 was used at a German aeronautical company, since his computer caught the interest of an engineer there who was trying to solve the problem of wing flutter. It stands to reason that this work ended up contributing to the Nazi war effort in some way, but when Zuse presented his computer to the Nazis, they didn’t see a use for it.

The Z3 was destroyed in 1944 in Allied bombing raids. A reconstruction of it was completed 16 years later, and is on display at the Deutsches Museum (“German Museum”) in Munich.

Z3 – Wikipedia

He began work on a successor, the Z4, in 1942. Again, it would be programmable. He told the harrowing story of how it was moved from place to place, looking for a safe place to keep it, since bombing raids were frequent. It was ultimately kept safe in the small village of Hinterstein. It wasn’t completed until Germany surrendered to the Allies in 1945. It was the only Zuse computer to survive the war.

It was moved to Switzerland, where it was used at the Swiss Federal Institute of Technology (Eidgenössische Technische Hochschule, or ETH) in Zurich until 1955.

Like the Z3, the Z4 is on display at the Deutsches Museum.

Z4 – Wikipedia

This is one of a series of “bread crumb” articles I’ve written. To see more like this, go to the Bread Crumbs page.