
CPU running Basic

Celebrating 50 years of Tiny Basic by implementing a custom micro-coded 16/32-bit CPU that executes it directly (up to 100MHz)

TinyBasic interpreter Copyright 1976 Itty Bitty Computers, used by permission. Can Basic be directly interpreted and executed by hardware? One of the recurring ideas in computing is that of an "intermediate language" that provides abstraction over the myriad differences between execution platforms. Pascal P-code, Microsoft IL (now known as CIL) and Java VM bytecodes are some examples. 50 years ago, this concept was used to implement a minimal, portable Basic "dialect". The whole interpreter fit in 346 bytes of TBIL (Tiny Basic Intermediate Language) code! A VM ("virtual machine", another recurring concept) could then be written for the popular microprocessors of the era, and for the first (and maybe last?) time the same Basic code could be executed on vastly different computers. In this project I designed and implemented a custom micro-coded CPU that executes TBIL directly, within an "SBC" which also contains RAM, ROM, VGA and serial I/O. See it running: https://www.youtube.com/watch?v=vVSzaxeds5I

Update 2025-12-11 - the Basic CPU now comes in 4 flavors: original or extended interpreter, 16 or 32 bit. The interpreter version can be changed at run-time and the word width at build-time; see the project logs.

Homebrew CPUs are fun and fascinating - in implementation they span everything from relays to state-of-the-art FPGAs, and in complexity and architecture they range from the simplest one-instruction machines to complex pipelined, superscalar processors. Where they often fall short is going beyond demonstrating functionality. Writing any software for a unique CPU requires much more time and different skills than designing and implementing an original CPU. This seems obvious, but I re-learned the lesson on my own project too. As a result, few end up with even a working monitor / assembler, not to mention some higher-level programming language.

What if we reversed the order? Start with a well-known and accessible piece of software like Basic and build the CPU around it? The intermediate steps could then be used to replicate and make custom CPUs run some simple variant of Basic. The Tiny Basic variant using an intermediate / interpretive language (IL) - a programming masterpiece devised and implemented 50 years ago by folks at the People's Computer Company - is especially suited to this, because it requires implementing only about 40 operations / instructions in the machine language of the homebrew CPU and voila - Basic will be running on it! In this project, this implementation has been done using a micro-coded machine extended with all the registers and stacks needed to execute those 40 instructions. Following the microcode, it is possible to see the implementation algorithm, as well as the power of micro-coding. Another secret goal was to see if I could maybe have the fastest running Basic ever :-) by tweaking the CPU implementation and microcode and measuring against an old benchmark.

sys_microbasic_anvyl_32.bit

Binary for direct download (will work only on Digilent Anvyl board) - supports original and extended interpreters, 32-bit CPU

bit - 1.42 MB - 12/11/2025 at 02:14


sys_microbasic_anvyl_16.bit

Binary for direct download (will work only on Digilent Anvyl board) - supports original and extended interpreters, 16-bit CPU

bit - 1.42 MB - 12/11/2025 at 02:14


extended.tba

Extended interpreter from 2025 - can be assembled with https://tiny-basic-online-utilities.lovable.app/

tba - 7.09 kB - 11/27/2025 at 16:04


original.tba

Original interpreter from 1976 - can be assembled with https://tiny-basic-online-utilities.lovable.app/

tba - 5.13 kB - 11/27/2025 at 16:01


marquee_demo.bas

Displays "Hello world" (or any other text) as marquee on VGA (only works if the core memory is visualized through some display device)

bas - 1.08 kB - 11/17/2025 at 04:28



  • 1 × Digilent Anvyl FPGA prototype board (or similar) https://digilent.com/reference/_media/anvyl:anvyl_rm.pdf
  • 1 × AMD (ex. Xilinx) ISE 14.7 FPGA development tools https://youtu.be/drnEkVvZJD0
  • 1 × Microsoft Visual Studio (any version able to work with C# projects) https://code.visualstudio.com/
  • 1 × MCC - microcode compiler https://hackaday.io/project/172073-microcoding-for-fpgas
  • 1 × Text editor of choice https://notepad-plus-plus.org/

  • Same code, twice the width - 32-bit CPU

    zpekic12/11/2025 at 05:57 0 comments

    TL;DR

    I extended the CPU from 16 to 32 bits specifically to run the following benchmark, proposed by Noel's Retro Lab:

    Image

    I was most interested in how the 32-bit Basic CPU performs relative to modern "retro" computers. They have fixed CPU clocks, so for comparison I adjusted or interpolated the results from the runs seen above:

    System / CPU | CPU clock (MHz) | time (s) - quoted from here | Basic CPU time (s) | relative performance
    ZX Spectrum Next / Z80 | 28 | 5 | 1.27 | 3.94
    Agon Light / eZ80F92 | 18.432 | 1.8 | 1.05 | 1.71
    Mega 65 / GS4510 | 40.5 | 1.047 | 0.88 | 1.19

    While these numbers favor the Basic CPU, in reality the systems above can run much more feature-rich variants of Basic, with graphics, sound, functions etc., while the Basic CPU has only USR/PEEK/POKE at its disposal. Still, not too bad for running an interpreter from 50 years ago on an FPGA from 10 years ago.

    Extending from 16 to 32 bits

    Most Basic variants support 16-bit 2's complement integers. They are fine for many use cases, and arithmetic with them is reasonably fast even on 8-bit CPUs. Unfortunately, Tiny Basic has no floating point support, so having just 16-bit integers can be limiting. So I decided to expand the CPU from 16 to 32 bits. The design goals were:

    1. No changes to the interpreter - both the original and extended interpreters can run on the 16- and 32-bit versions
    2. Minimal changes to the microcode - only where absolutely needed; for example, a 32-bit 2's complement integer converts to up to 10 decimal digits while a 16-bit one converts to at most 6, so some loop counters etc. had to change
    3. Changes to the hardware are ok, but avoid any special-case if/then where possible

    For #3, there were two possibilities:

    1. Run-time support for a 16/32-bit switch. A bit like the 65802 vs. 65C02 - a flag or a pin flips the CPU into 32-bit mode.
    2. Build-time support for generating either a 16-bit or a 32-bit CPU.

    I chose #2 as it seemed the easier implementation, and also because if I continue this project it is unlikely I would go back to 16 bits (the next logical step is implementing floating point, which is viable only on 32-bit data / variables). With run-time switching, all the complexity of supporting both 16 and 32 bits would stay in the CPU, bloating the FPGA footprint, yet would not be needed at all right after booting into 32-bit mode.

    Parametric VHDL design

    The general idea here is to use a feature of the hardware description language to generate the registers and interconnections from parameters consumed at compile time. This code handles about 80% of what is needed to compile the design as either a 16- or a 32-bit CPU:

    -- generics
    constant MSB_DOUBLE: positive := (MSB + 1) * 2 - 1;                            -- 31 / 63
    constant MSB_HALF: positive := (MSB + 1) / 2 - 1;                                -- 7 / 15
    constant ZERO: std_logic_vector(MSB downto 0) := (others => '0');            -- X"0000" / X"00000000"
    constant MINUS_ONE: std_logic_vector(MSB downto 0) := (others => '1');    -- X"FFFF" / X"FFFFFFFF"
    constant BITCNT: std_logic_vector(4 downto 0) := std_logic_vector(to_unsigned(MSB, 5));        -- 15 / 31
    alias IS_CPU32: std_logic is BITCNT(4);                                                                        -- '0' / '1'
    constant STEPCNT: std_logic_vector(7 downto 0) := std_logic_vector(to_unsigned(MSB + 1, 8)); -- X"10" / X"20"
    constant BCDDIGITS: positive := (MSB + 9) / 4;     -- 6 / 10
    constant CP_OFF: positive := MSB_HALF + 1;        -- 8 / 16
    type ram16xHalf is array (0 to 15) of std_logic_vector(MSB_HALF downto 0);
    type ram32xFull is array (0 to 31) of std_logic_vector(MSB downto 0);

    At build time, the parameter MSB is passed in as either 15 or 31, and various other values are then derived from it - for example BITCNT, which determines the number of steps during division.
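    To illustrate, a minimal sketch of how such a build-time width switch can be wired up is below. The entity name and port list are hypothetical (the real CPU has many more signals); only the use of the MSB generic follows the constants above.

    library ieee;
    use ieee.std_logic_1164.all;

    -- Hedged sketch: a generic MSB selects the data path width at build time.
    entity basic_cpu_sketch is
        generic (MSB: positive := 15);                     -- 15 => 16-bit CPU, 31 => 32-bit CPU
        port (
            clk: in std_logic;
            d_in: in std_logic_vector(MSB downto 0);
            t_out: out std_logic_vector(MSB downto 0)      -- register width follows the generic
        );
    end basic_cpu_sketch;

    architecture rtl of basic_cpu_sketch is
        constant ZERO: std_logic_vector(MSB downto 0) := (others => '0');
        signal T: std_logic_vector(MSB downto 0) := ZERO;  -- e.g. top of expression stack
    begin
        process(clk)
        begin
            if rising_edge(clk) then
                T <= d_in;                                 -- all assignments stay width-agnostic
            end if;
        end process;
        t_out <= T;
    end rtl;

    The 16-bit and 32-bit .bit files above would then simply come from two top-level builds, one instantiating the CPU with MSB => 15 and the other with MSB => 31.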

    These constants alone are not sufficient to create a functioning 32-bit CPU. The main problem is that the memory interface toward the RAM that holds the Basic code and input line remains 16-bit address / 8-bit data (64kB RAM), Basic line numbers are still meaningful only to 16 bits (due to the convention of how Basic lines are stored), etc. The table below summarizes the major differences that had to be addressed with generating...

    Read more »

  • Extending Tiny Basic (to be more like another Tiny Basic)

    zpekic11/27/2025 at 00:04 0 comments

    While it can already be useful in its original form (especially for embedded applications), the original Tiny Basic interpreter is very rudimentary and lacks many features. Around the same time, another Tiny Basic version emerged. It was a classic interpreter (no intermediate code) but had somewhat richer feature support.

    With some minor tweaks, I was able to largely close the gap. What is missing:

    • Multiple statements on same line (but I may add it)
    • Logical operators in assignments
    • Specifying field length when printing values

    On the flip-side:

    • TB executed on the Basic CPU is >5x faster than the same Basic code on the classic Tiny Basic interpreter on an 8080 (35s vs. 197s, both running at 25MHz)
    • Better handling of control codes (all ASCII control codes can be embedded in the print string using ^, including CR (^M))

    Here is the list of new capabilities and how they were implemented. Depending on the feature, changes were needed in one or more of the code layers (interpreter or microcode), or in the hardware itself (Basic CPU).


    NEW
    Interpreter: added to parser and executed as CLEAR
    Microcode: -
    CPU: -

    FOR v=from TO end [STEP step]
    Interpreter: added to parser right after LET (for speed, more frequently used statements should be parsed out first). The interpreter just gets the variable name (must be A..Z, array elements not allowed) and the from and end values. If STEP is not given, a default of 1 is pushed on the stack, and then the new instruction FS is executed.
    Microcode: FS first checks if Vars_Next is populated. If yes, this is an iteration, so var = var + step, var > end must be executed. If not, the FOR must be set up with var = start, var > end. If the FOR loop must be terminated, there are two cases: (1) a pointer to NEXT exists, so just go there and find the first instruction after it; (2) the pointer to NEXT is not set, so search for the matching NEXT and then continue with case (1).
    CPU: added CPU instruction 0x25 (FS); there is a Vars_For field for each variable

    NEXT v
    Interpreter: added to parser after FOR. The interpreter checks for the presence of a variable name A..Z (implicit NEXT with no variable name is not allowed) and then executes the FE instruction.
    Microcode: FE first ensures FOR has been executed for this variable (if not, that is clearly an error), and then puts the pointer to this NEXT statement in the Basic text into the Next field. Branching back to FOR is easy because Vars_For contains the line number.
    CPU: added CPU instruction 0x26 (FE); there is a Vars_Next field for each variable

    INPUT "prompt"
    Interpreter: check for a double quote before the expression, and if found print it out verbatim, then continue
    Microcode: -
    CPU: -

    multiple LET v=expr1, v=expr2, v=expr3...
    Interpreter: modified LET to check for the presence of a comma after each variable assignment, and loop if present
    Microcode: -
    CPU: -

    @(index) array
    Interpreter: added in the LET command (left side) and in expression evaluation (right side). This way it appears there is one array that can be used on both sides of expressions.
    Microcode: new USR operations added - @(index) on the left side (assign) maps to USR(30, PrgEnd + 2*index, value); @(index) on the right side (get value) maps to USR(31, PrgEnd + 2*index)
    CPU: new operation in register T to evaluate the address from the index

    SIZE read-only variable
    Interpreter: added parsing; evaluated as USR(29, ...)
    Microcode: SIZE = Core_End - PrgEnd. The value of PrgEnd is evaluated at each warm start, while Core_End (last address in RAM) is currently hard coded.
    CPU: new operation on register T

    ABS() function
    Interpreter: added parsing; executes using the already existing code path for RND()
    Microcode: -
    CPU: -

    % (modulo operator)
    Interpreter: added parsing; executes through a new USR(27, .., ..) call
    Microcode: added USR(27, ..., ...) which in turn uses the existing div / mod routine
    CPU: the last step of div also corrects the sign of the remainder to be the same as the dividend

    THEN shortcut (if THEN is followed by an integer, it is assumed to be a GOTO, e.g. IF a>b THEN 320)
    Interpreter: slight parser modification in the IF statement
    Microcode: -
    CPU: -
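    To see several of these features together, here is a small illustrative program (not one of the project files) in the extended dialect - it uses SIZE, multiple LET, FOR/NEXT with STEP, the @() array, the % operator and the THEN shortcut:

    10 PRINT "FREE BYTES: ", SIZE
    20 LET S=0, I=1
    30 FOR N=1 TO 10 STEP 2
    40 LET @(I)=N*N, S=S+@(I), I=I+1
    50 NEXT N
    60 PRINT "SUM=", S, " SUM%7=", S%7
    70 IF S>100 THEN 100
    80 PRINT "SMALL SUM"
    90 END
    100 PRINT "LARGE SUM"
    110 END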

    The CPU and the microcode both support the full functionality; it is just a matter of which version of the interpreter is presented to the CPU:

    In the project, both are present and can be selected by flip...

    Read more »

  • Vibe coding an assembler / disassembler

    zpekic11/23/2025 at 06:30 0 comments

    Note: the online utilities presented here are a work in progress - the assembler part seems to be working, while the disassembler is probably coming over the holidays, time permitting.

    My goal is to extend the Tiny Basic somewhat, to make it a bit more powerful and/or align it with other Basic dialects. For example, this could mean addition of:

    • FOR / NEXT loops
    • INPUT statement with prompt
    • multiple statements on a line (delimited by colon)
    • support for DATA / READ / RESTORE
    • parsing of integers in hex and/or binary format
    • possibly others...

    For all of the above - besides microcode changes - I also need to modify the interpreter itself, which is written in TBIL. To do that, a TBIL assembler is obviously needed. For this part of the project, to jump 50 years from 1976 (when Tiny Basic was introduced) to 2026 (almost there!), I decided to use some AI vibe coding. There are many such platforms; I am most familiar with Lovable. I created an online assembler / disassembler tool which allows TBIL source code to be assembled (in 2 passes) into a binary / hex / vhdl file I could use as the code provided to the Basic CPU.

    Image

    Steps to try out the assembler:

    1. Navigate to online app
    2. Download the original version of interpreter
    3. Modify the interpreter; the syntax is pretty obvious (the only change from the documented interpreter is that I use a comma between the branch target and "text" strings)
    4. Copy and/or upload the *.tba file using the "Upload source..." button
    5. Click Pass 1 button (observe if there are any errors)
    6. If no errors, click Pass 2 button (observe the action log)
    7. Binary code in hex format should appear in .... Switch to "Disassembly" mode to reveal the download options in .hex, .bin or .vhdl formats

    The source code of the app is here. All changes were made by the Lovable dev bot, purely through "vibe coding". I built a similar vibe-coded online tool about 6 months ago and can see certain improvements since then. Back then, changes would often result in a build break; while developing this new app, there were no build breaks at all.

    Vibe coding is still just .. coding

    What this means is that all the good coding advice still applies - most notably, everything about planning and designing the app before writing the actual code. In this case the UX of the app is rather boilerplate, using a one-page layout with standard controls such as editable text boxes, file upload / download, resizable panels, buttons etc. The AI tool shines at this part (although the code it produces is still bloated and slow, and requires targeted prompting to refactor it into a leaner implementation), but where it obviously needs serious prompting is the business logic. After all, there is no proliferation of TBIL assemblers from 40+ years ago it could learn from :-) I put effort into explaining exactly what needs to be done for each line of source code in both pass 1 and pass 2 of the assembler. I also included some links for context. Therefore, when I asked it to implement assembly pass 1 and 2 it was "on rails" and fairly successful, with only minor tweaks needed.

    The original text of the instructions I provided ("knowledge" in Lovable parlance) is below. I also asked it to extract this as a markdown document and check it in. I organized the "knowledge" into:

    • Links for more context (targeted sources as generic "Tiny Basic" would completely confuse it)
    • Concepts. Each concept is as clearly defined as possible and includes "action" when applicable (what to do when concept is encountered)
    • Steps for pass 1, organized by assembly instruction
    • Steps for pass 2, organized by assembly instruction
    http://www.ittybittycomputers.com/IttyBitty/TinyBasic/TBEK.txt
    https://hackaday.io/project/204482-celebrating-50-years-of-tiny-basic
    http://www.ittybittycomputers.com/IttyBitty/TinyBasic/
    
    DEFINITIONS USED FOR ASSEMBLY PASS1 AND PASS2:
    octaldigit
    
    octaldigit is single digit in range 0 to 7 (inclusive). Represents values 0 to 7
    
    constant
    
    constant is...
    Read more »

  • "Hello World!" demo

    zpekic11/18/2025 at 18:38 0 comments

    What would a CPU project be without a "Hello World" demo? :-)

    I originally introduced the VGA display as a debugging tool to visualize the content of the Basic input line and program (especially GL and IL instructions), as described here. But once a display surface exists, why not use it as a simple text-based screen output?

    The Tiny Basic system currently implements 2k of RAM mapped to address space 0x0000 to 0x07FF. If the program is shorter than that, whatever remains can be used as a "window", for example 8*64 characters in size, starting at RAM location 0x0600 (1536 decimal).

    Image

    For the demo to work, 2 additional capabilities were needed:

    • Ability to access Basic RAM (classic PEEK and POKE)
    • Ability to read the font definition for the characters in the scroll, and "magnify" them by 8X so that each 8*8 pixel glyph becomes an 8*8 block of characters.

    Memory mapping

    The total Basic memory space is 64k, leaving 62k open in the system. I used the top 4k to access the same character generator ROM the VGA controller uses (two copies of this ROM are now in the design). This char gen has an 8*8 font similar to the C64's, but I also added representations of the control characters 0x00-0x1F, which help with debugging (note the CR in the memory display above after each Basic statement). From the outside, the char gen appears to hold 256 characters, but the capacity is only 1k: ASCII codes 0x80..0xFF are inverse duplicates of 0x00..0x7F.

    sel_hi4k <= '1' when (A(15 downto 12) = X"F") else '0';
    memData <= pattern when (sel_hi4k = '1') else ram(to_integer(unsigned(A(10 downto 0))));
    D <= memData when ((nBUSACK or nRD) = '0') else "ZZZZZZZZ";
    
    -- Character generator ROM handy for the marquee demo
    chargen: entity work.chargen_rom port map 
    (
        a => A(10 downto 0),    -- 256 chars (128 duplicated, upper 128 reversed) * 8 bytes per char
        pattern => pattern
    );

    USR() function

    The original Tiny Basic ran on a number of microprocessors from the 1970s/80s. To allow extensibility, each implementation of the TBIL interpreter was supposed to define and implement the r = USR(a, p1, p2) call from Tiny Basic, where:

    • a - address of the native (assembler / machine code) routine to call
    • p1 - required parameter 
    • p2 - optional parameter
    • r - result

    All of the above were 16-bit values. For example:

    argument \ CPU | 6502 | 6800 | 1802
    a | JSR a | JSR a | R3=a, P=3, X=2
    p1 | MSB=X, LSB=Y (to be verified!) | X (16-bit) | R8
    p2 | A | A | RA
    r | MSB=A, LSB=Y, RTS to return | A, RTS to return | MSB=RA.1, LSB=D, SEP 5 to return

    At a minimum, implementing PEEK / POKE was expected, but any other routine (such as direct reading of the keyboard etc.) was possible. The only option for the Basic CPU was to implement some of the most useful USR calls in microcode; these are also used in the scroll demo Basic program.

    Currently implemented: 

    Function | a | p1 | p2 | r | used in demo?
    Logical | 0 .. 7 | 16-bit word | 16-bit word | p1 op p2 | yes (a = 3, which is the logic AND operation)
    PEEK8 | 20 | address of byte to read | N/A | M[a] | yes
    PEEK16 | 21 | address of word (on any byte boundary) | N/A | 256*M[a]+M[a+1] | no
    POKE8 | 24 | address of byte | 8-bit value to write to the addressed byte (upper 8 bits of the value are ignored) | p2 | yes
    POKE16 | 25 | address of word (on any byte boundary) | 16-bit value to be written to the addressed word in big endian representation | p2 | no

    To save on a "switch" statement that would take precious microcode, all binary logic operations are implemented with the same expression, controlled by the 3 lowest bits of parameter a.

    (omitted)            
    when T_binop =>
        -- S    operation
        -- 0    T NOR R
        -- 1    T NOR /R
        -- 2     /T NOR R
        -- 3    T AND R
    -- 4    T OR R
    -- 5    T OR /R
    -- 6    /T OR R
        -- 7    T NAND R                
        T <= S2 xor ((S1 xor T) nor (S0 xor R));
    (omitted)
        
    -- masks for T_binop
    S0 <= (others => S(0));
    S1 <= (others => S(1));
    S2 <= (others => S(2));
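    Putting the window and the USR calls together, an illustrative Basic fragment (hypothetical, not the actual marquee_demo.bas) could poke a character into the top-left of the 0x0600 window and read back one row of its font pattern from the character generator mapped at 0xF000 (61440 decimal), assuming 8 bytes per character as described above:

    10 REM POKE 'H' (ASCII 72) INTO THE TOP-LEFT CORNER OF THE WINDOW AT 1536
    20 LET X=USR(24,1536,72)
    30 REM READ THE FIRST FONT ROW OF 'H' FROM THE CHAR GEN AT 61440+72*8
    40 LET P=USR(20,61440+72*8)
    50 PRINT "PATTERN=", P
    60 END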

  • Chasing performance

    zpekic11/17/2025 at 06:21 0 comments

    I was really curious how the Basic CPU would perform in comparison with classic CPUs of the home computer era, and put some effort into optimizing the design with that goal in mind. Based on the benchmark tests, the goal has been only partially achieved. While in some cases the speed-up factor of 4 to 8 looks noteworthy, my suspicion is that those Basic interpreters by default use software-implemented floating point for the benchmark test (I haven't explored them in depth), while the other group, showing a more modest gain of 1.5 to 2X, are "integer Basic" implementations and thus a fairer comparison.

    CPU performance optimizations I used:

    Clock frequency

    A bit of "cheating" is going on here - the whole design is inside the FPGA, which allows it to work at the maximum available hardware clock frequency of 100MHz. Most importantly, the "core" RAM (Basic input buffer and program store) can be accessed in 1 or 2 clock cycles (so 10 or 20ns):

    writeCore:    nWR = 0, if nBUSACK then repeat else return;
    
    readCore:    nRD = 0, if nBUSACK then repeat else next;
            nRD = 0, MDR <= from_Bus, back;

    This would clearly be impossible if the RAM were outside of the FPGA, even on the same board. Depending on the CPU clock and memory speed, one or more wait cycles would need to be added. The Anvyl board has a breadboard section, so in the future I may move the Basic core memory to a 62256-type device and experiment with, for example, how many wait cycles a 70ns vs. a 120ns memory chip needs.

    The other limiting factor is I/O speed. Currently, the maximum serial I/O speed is 38400 bps. Sending or receiving 1 byte over such a channel takes at least 10 bit-times, during which the CPU has to wait for the ready signal.

    outChar:    if CHAROUT_READY then next else repeat;            // sync with baudrate clock
            if CHAROUT_READY then return else repeat;

    A FIFO queue on both input and output would help, but their implementation belongs to the Ser2Par and Par2Ser components, not the CPU (except for the trace serial output, but that one is only active up to 4kHz CPU frequency, so not much would be gained there).

    Cycle overlap

    The Basic CPU is a CISC processor, and pipelining is traditionally not a strength of CISC designs. However, a very limited overlap between execute and fetch is used in a few instructions. Note that the fetch cycle has two entry points:

    fetch:        traceString 51;                        // CR, LF, then trace Basic line number (in hex, for speed)
    fetch1:        traceString 2;                             // trace IL_PC and future opcode
            IL_OP <= from_interpreter, IL_PC <= inc, traceSDepth;    // load opcode, advance IL_PC, indent by stack depth IL code if tracing is on
            T <= zero, alu <= reset0, if IL_A_VALID then fork else INTERNAL_ERR;    // jump to entry point implementing the opcode (or break if we went into the weeds) 

    Most instructions finish their execute cycle and then branch back to "fetch" (no overlap). A few have the opportunity to execute the "traceString 51" operation in parallel with other operations and, when done, branch to "fetch1" - a 1 clock cycle overlap. This could be utilized more, but tracing at the beginning of the fetch cycle is a very convenient debug tool, so this was an engineering compromise between two important goals (troubleshooting and performance).

    ////////////////////////////////////////////////////////////////////////////////
    .map 0x12;                    // FV (Fetch Variable)
    ////////////////////////////////////////////////////////////////////////////////
    traceString 36;                    // trace mnemonic
    Vars <= indexFromExpStack, if STACK_IS_EMPTY then ESTACK_ERR;     // get index (variable name A-Z)
    T <= from_vars, ExpStack <= pop1, traceString 51;        // T <= Vars(index)
    ExpStack <= push_TWord, goto fetch1;                // push onto stack, go to 2nd fetch entry point as we overlapped 1 cycle

    Parallel operations

    The Basic CPU uses "horizontal microcode" with a fairly wide control word of 80 bits:

    • Microprogram execution control: 5 bits to select 32 conditions, 9 bits to...
    Read more »

  • Debugging

    zpekic11/14/2025 at 02:46 0 comments

    Complex programmable logic designs are opaque. Unless this opaqueness is turned into transparency, the design and the whole project will fail. I mainly use two methods to peek into the (quite literally) little black box of the FPGA:

    • Create generic components and test them separately, or inherit them from projects in which they already worked. I reused several such components in this project (with some modifications).
    • Build into the project itself as many debug features as possible, starting with simpler ones (LEDs, buttons) and moving toward more complex ones (serial debug, VGA) as the project progresses
    component \ visualization | LEDs, 7-seg LEDs | Serial output | VGA
    serial to parallel input | LEDs | echo of input buffer during GL | input buffer hardware window
    CPU register T | - | traceT() microcode subroutine | displayed using block cursor at the location it points to
    CPU registers BP, LS, LE, PrgEnd | 7-seg LEDs | traceBP() | underline cursor shown at location pointed to by register
    ALU registers | - | traceALU() | -
    CPU return stack | - | displayed as indentation of each IL operation | -
    Microcode execution | (program counter can be displayed) | - | hardware window, using symbols from the symbols ROM produced by the microcode compiler
    IL execution | - | each IL instruction traced with mnemonic and parameters | -
    Command line | - | - | hardware window
    Basic program | - | - | hardware window
    GOTO cache | only empty/used/full state on LEDs | - | -

    Armed with the above, I was able to visualize and debug the 3 layers of code:

    1. Microcode executes TBIL instructions
    2. TBIL instructions execute Basic interpreter
    3. Basic interpreter executes user's Basic program

    Two components important for debugging merit some discussion as they are useful and generic enough for other programmable logic projects too:

    Serial Tracer

    The Basic CPU has an output-only serial port which outputs a constant stream of trace data whenever the microcode includes a call to the "traceString nn" subroutine. nn is a number from 0 to 63 (easily expandable to 127) which is an index to an 8-byte string that will be output on this port. While the trace output is ongoing, microcode execution waits for it to finish (a good opportunity to add an outgoing FIFO here).

    trace:    if DBG_READY then next else repeat;    // sync with baudrate clock that drives UART
        if DBG_READY then next else repeat;
        DBGINDEX <= zero, back;            // clear the serial debug output register and return
    Image

    The central part is a 512-byte ROM organized as 64 entries of 8 ASCII characters each. When the desired entry number is stored into the index register, a 7-bit counter resets to 0 and starts counting up, driven by the baud rate clock. The lower 4 bits of this counter are connected to a 16-to-1 MUX which drives the serial output line by selecting the start ("space"), data, and stop ("mark") bits. The upper 3 bits select 1 of the 8 characters in that ROM entry. For extra capability, if the stored character has bit 7 set, it does not go directly to the output; instead it selects 1 of 16 inputs that tap into various values in the Basic CPU. That 4-bit hex value is converted to ASCII using a look-up table and sent out on the trace_txd output.

    For example, entry #2 in the ROM is equivalent to C# "string.format()" such as $"{IL_PC:X3}: {IL_OP:X2}"

    X"80", X"81", X"82", c(':'), c(' '), X"83", X"84", c(' '),            -- aaa xx:
    

    Hardware window

    The VGA controller generates a 640*480, 50Hz signal using a 25MHz dot clock. The screen is divided into 80 columns and 60 rows, and these two values are fed into and consumed by "hardware window" components. They simply check whether the current horizontal and vertical position of the screen pixel is inside their coordinates. If yes, they convert it to a memory address based on the window size and memory base address. The resulting address is used to fetch an ASCII character from memory and display it (each window can have its own background and foreground...

    Read more »

  • Lies, damn lies, and ... benchmarks!

    zpekic11/13/2025 at 19:11 0 comments

    Call to action: if you are reading this and have a working retro-computer with any CPU running Tiny Basic (esp. the version with TBIL) please run the same benchmark test and share the results here!


    Update 2025-11-27

    @msolajic also ran the benchmark on a computer very special and dear to all enthusiasts from ex-Yugoslavia: the Galaksija.

    Update 2025-11-26

    Running the benchmark in "extended" mode using FOR/NEXT loops improves performance by about 3%, but the data in the tables below are for the "original" version of the Tiny Basic interpreter.

    Update 2025-11-23 / 27

    @msolajic graciously ran the 1000-primes benchmark on some additional retro-computers. Here are the results and comparison with Basic CPU (see table at the bottom of this project log)

    As soon as the CPU started semi-working, I set out to measure and improve its performance. To be precise, I added an elapsed run timer to the CPU. It is driven by a 1kHz clock (so the "ticks" have 1ms resolution). It starts when the Lino register (holding the line number of the executing statement) goes from 0 to non-zero (program execution starts) and stops when it goes back to 0.

    -- counting ticks (typically 1ms) while the program is running (to be displayed at the end of execution)
    on_clk_tick: process(clk_tick, reset)
    begin
        if (reset = '1') then
            cnt_tick <= (others => '0');
            cnt_tick1000 <= (others => '0');
            lino_tick <= (others => '0');
        else
            if (rising_edge(clk_tick)) then
                lino_tick <= Lino;
                if (is_runmode = '1') then
                    if (lino_tick = X"0000") then
                        -- going from stopped to running, reset counters
                        cnt_tick <= (others => '0');
                        cnt_tick1000 <= (others => '0');
                    else
                        -- when running, load increment counters
                        if (cnt_tick = X"03E7") then        -- wrap around at 1000
                            cnt_tick <= (others => '0');
                            cnt_tick1000 <= std_logic_vector(unsigned(cnt_tick1000) + 1);
                        else
                            cnt_tick <= std_logic_vector(unsigned(cnt_tick) + 1);
                        end if;
                    end if;
                end if;
            end if;
        end if;
    end process;

    At the end of program execution, the value of these 2 counters (seconds and milliseconds elapsed) is displayed:

    Image

    For the benchmark, I used the "find first 1000 primes" test, which has the advantage of simplicity and portability. Because this version of Tiny Basic has no FOR/NEXT (I plan to implement it), the test had to be slightly changed to replace that with IF/GOTO.
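    For readers unfamiliar with the transformation, the shape of the change is roughly as follows (an illustrative fragment with made-up line numbers and bounds, not the actual benchmark listing):

    100 REM FOR I=3 TO 1000 STEP 2 ... NEXT I, REWRITTEN WITH IF/GOTO
    110 LET I=3
    120 REM ... LOOP BODY USING I ...
    130 LET I=I+2
    140 IF I<=1000 THEN GOTO 120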

    There are two variations of the test code:

    • Without GOSUB (proposed here, modified Basic program here)
    • With GOSUB (proposed here, modified Basic program here) - not surprisingly, it is about 20% slower across all clock frequencies.

    Below is a direct comparison with my previous Tiny Basic project. Meaningless (because it is a different interpreter and CPU) but still fun:

    Parameter | previous project | this project | Acceleration
    Clock frequency | 25MHz | 25MHz |
    Serial I/O | 38400 baud, 8N1 | 38400 baud, 8N1 | 1
    CPU | Am9080 (implemented using Am2901 bit slices) | Basic CPU | N/A
    Tiny Basic version | Native assembler interpreter | Intermediate language based | N/A
    Run time (s) | 197 | 36.58 | 5.32

    Going back to the original article from 1980, I attempted a comparison by reducing the Basic CPU clock speed to match those systems.

    Clock (MHz) | CPU | Basic version | Run time (s) | Basic CPU run time (s) | Acceleration
    1 | 6502 | Level I Basic | 1346 | 906 | 1.48
    2 | 6502 | Level I Basic | 680 | 453 | 1.50
    2 | 6502 | Applesoft II Basic | 960 | 453 | 2.12
    2 | Z80 | Level II Basic | 1928 | 453 | 4.26
    2.4576 | 80C85 | Microsoft Basic (Tandy 102) | 2080 | 366 | 5.68
    3 | 8085 | StarDOS Basic | 1438 | 302 | 4.76
    3 | 9900 | Super Basic 3.0 | 585 | 302 | 1.94
    4 | Z80 | Zilog Basic | 1864 | 227 | 8.21
    4 | Z80 | Level III Basic | 955 | 227 | 4.20
    5 | 8086 | Business Basic | 1020 | 182 | 5.60
    6 | 4*Am2901 | HBASIC+ | 143 | 152 | 0.94

    As can be seen, the Basic CPU is faster than all the compared systems, except AMD's own HEX-29 system / CPU, which was a showcase of their bit-slice technology. Interestingly, it is also controlled by similar "horizontal" microcode, just like the Basic CPU. That CPU is described in the classic "Bit-Slice Microprocessor Design" book.

    Update 2025-11-20: with some tweaks in microcode, I improved the perf numbers above by about 1-2%. More info about perf ...

    Read more »

  • Basic CPU overview

    zpekic11/13/2025 at 00:15 0 comments

    If the Basic CPU were a real IC (maybe one day it will be? :-)), the data sheet would brag about the following features:

    • Fully static design - clock frequency from 0 (single step) to 100MHz
    • 64k addressable memory for Basic program and data
    • Up to 2k of code memory, on separate bus from Basic program and data
    • Microcontroller features with internal RAM, and 2 parallel 8-bit ports
    • Easy interfacing with popular serial and parallel consoles using built-in ports
    • Memory-mapped I/O allowing simple interfacing with most popular peripheral chips and devices
    • Ability to execute Basic program from EPROM/ROM for embedded applications (no RAM needed)
    • Built-in separate serial port for program execution tracing and debugging
    • Fast execution due to separated program, data and return stacks and GOTO/GOSUB target caching
    • 16-bit, 2's complement binary arithmetic capable of multiplication in 3.5 microseconds and division in 7 microseconds at 4MHz system clock
    • State of art micro-coded architecture for future improvements and upgrades

    CPU has "Harvard architecture" - its program (IL code) and data (Basic statements and command line) reside in two independent memory stores. Typically, the former is ROM, latter is RAM, but both can be ROM. 

    Here is an improvised sketch of the main CPU components:

    Image

    The implementation is mostly in one VHDL source file (I may refactor later), with a few subcomponents:

    • The serial tracer is not needed for CPU operation, but is extremely useful for observing that operation, which helps with debugging
    • The binary to BCD conversion uses BCD adders, unlike the rest of the CPU which is binary 2's complement, so it makes sense to separate it out.
    Image
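    Since the binary to BCD subcomponent was mentioned above, here is a minimal behavioral sketch of one common way to do the conversion with BCD arithmetic - repeatedly "doubling" the BCD result and adding in the next binary bit, with an add-and-correct step per digit. The entity and signal names are illustrative; this is not the actual component.

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity bin2bcd_sketch is
        generic (NBITS: positive := 16; NDIGITS: positive := 6);
        port (
            bin: in std_logic_vector(NBITS - 1 downto 0);
            bcd: out std_logic_vector(4 * NDIGITS - 1 downto 0)
        );
    end bin2bcd_sketch;

    architecture behavioral of bin2bcd_sketch is
    begin
        process(bin)
            variable digits: unsigned(4 * NDIGITS - 1 downto 0);
            variable carry: natural range 0 to 1;
            variable d: natural range 0 to 19;
        begin
            digits := (others => '0');
            for i in NBITS - 1 downto 0 loop                  -- scan binary input MSB first
                -- the incoming binary bit is the carry into the lowest BCD digit
                if bin(i) = '1' then carry := 1; else carry := 0; end if;
                for j in 0 to NDIGITS - 1 loop
                    -- double each digit and propagate the carry, correcting values >= 10
                    d := to_integer(digits(4 * j + 3 downto 4 * j)) * 2 + carry;
                    if d >= 10 then
                        d := d - 10;
                        carry := 1;
                    else
                        carry := 0;
                    end if;
                    digits(4 * j + 3 downto 4 * j) := to_unsigned(d, 4);
                end loop;
            end loop;
            bcd <= std_logic_vector(digits);
        end process;
    end behavioral;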

    Main components (some may merit separate project logs, stay tuned)

    Micro-coded control unit.

    (if not familiar with micro-coding, this project goes into some details, including the MCC compiler and the toolchain)

    Consists of:

    • Horizontal microcode store, 512 words deep and 80 bits wide. 23 bits are consumed by the control unit (5 for IF, 9 for THEN and 9 for ELSE); the 57 remaining lines control every other component of the CPU
    • Mapping ROM that translates IL op codes to the microcode routine start address. This is 256 words deep, 9 bits wide.
    • Control unit which has a 9-bit wide microinstruction program counter and a 4 level deep stack.
    • 32 conditions (5-bit selection) reflecting the state of the CPU and external signals are used to control the microprogram flow.

    All of these components are automatically generated by running the 2-pass MCC compiler on the microcode source file. Deep dive into details of micro-coded control here
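    As an illustration of how the 23 control-unit bits could be consumed, here is a hedged sketch of the if/then/else next-address selection. Signal names are invented; the real unit is generated by the MCC compiler and also handles subroutine call/return via its 4-level stack and the opcode fork via the mapping ROM.

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity useq_sketch is
        port (
            clk: in std_logic;
            cond_sel: in std_logic_vector(4 downto 0);       -- IF field: selects 1 of 32 conditions
            conditions: in std_logic_vector(31 downto 0);    -- CPU state and external signals
            addr_then: in std_logic_vector(8 downto 0);      -- THEN field: next address if condition is true
            addr_else: in std_logic_vector(8 downto 0);      -- ELSE field: next address if condition is false
            upc: out std_logic_vector(8 downto 0)            -- 9-bit microprogram counter
        );
    end useq_sketch;

    architecture rtl of useq_sketch is
        signal selected: std_logic;
    begin
        selected <= conditions(to_integer(unsigned(cond_sel)));
        process(clk)
        begin
            if rising_edge(clk) then
                if selected = '1' then
                    upc <= addr_then;
                else
                    upc <= addr_else;
                end if;
            end if;
        end process;
    end rtl;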

    Code processing components.

    Consist of:

    • IL_PC - 11-bit program counter which is directly exposed outside of the CPU to address the store containing the TBIL code. It can be loaded from 7 different sources, including the IL stack, but like most typical program counters it is incremented during instruction fetch.
    • IL_OP - 8-bit instruction register, loaded directly from the TBIL store, which is its only source. Used by the microcode controller to drive the mapper store, but also to hold the offsets of various branch and jump instructions.
    • RetStack and RetSP - a 16-level deep stack, 16 bits wide (only 11 are used), and its 4-bit wide stack pointer. All stacks inside the CPU "grow down" (towards increasing values of SP) and in all of them SP points to the first free location. The advantage of this is easy checking of the full / empty state and the count of used stack locations (see the sketch below).
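    For a hypothetical 8-deep stack with a 4-bit SP (one bit wider than strictly needed, so that full and empty remain distinguishable), the status checks that the "SP points to the first free location" convention makes trivial would look roughly like this - an illustrative sketch, not the actual RetStack logic:

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity stack_status_sketch is
        port (
            sp: in unsigned(3 downto 0);          -- first free location, valid values 0 .. 8
            used: out unsigned(3 downto 0);       -- number of occupied entries
            is_empty: out std_logic;
            is_full: out std_logic
        );
    end stack_status_sketch;

    architecture rtl of stack_status_sketch is
    begin
        used     <= sp;                           -- the count comes for free
        is_empty <= '1' when sp = 0 else '0';
        is_full  <= '1' when sp = 8 else '0';
    end rtl;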

    Input / output.

    • CHARIN - 8-bit input register. External hardware must present a valid ASCII code at this input and raise the "inchar_ready" signal. This signal is then used in microcode to detect and act on an incoming character. The main cases are during execution of the GL (get line) instruction, and checking for the BREAK character (ASCII code 0x03). The register can be compared with a literal value specified in the microinstruction to determine which character arrived and act accordingly.
    • CHAROUT - 8-bit output register. It is exposed to external hardware (e.g. console),...
    Read more »

  • Microbasic CPU instructions and their execution

    zpekic11/12/2025 at 06:21 0 comments

    The Microbasic CPU executes the full set of TBIL (Tiny Basic Intermediate Language) operation codes, which are described here. It is a physical implementation of the virtual machine required by Tiny Basic to interpret Basic, unlike implementations on classic microprocessors (for example the 6502 or 6800), which implement the TB virtual machine in their own instructions.

    It is helpful to visualize the instruction set:

    Image
    Image

    Execution of each instruction begins with a straightforward fetch cycle:

    fetch:    traceString 51;                                            // CR, LF, then trace Basic line number (in hex, for speed)
        traceString 2;                                             // trace IL_PC and future opcode
        IL_OP <= from_interpreter, IL_PC <= inc, traceSDepth;    // load opcode, advance IL_PC, indent by stack depth IL code if tracing is on
        T <= zero, alu <= reset0, if IL_A_VALID then fork else INTERNAL_ERR;    // jump to entry point implementing the opcode (or break if we went into the weeds)

    IL_OP is the 8-bit instruction register which is loaded from the external IL ROM (the name of this MUX source is "from_interpreter"); then the IL_PC is incremented. After that, the microinstruction control unit executes a "fork" instruction, which loads the microinstruction program counter with the address of the routine implementing that opcode. Hardware that supports this:

    Image

    Looking at the content of the map ROM, it is easy to recognize the start addresses of the instructions. For example, comparing the first 16 words with the first row of the instruction set table, it is clear that SX starts at microcode ROM location 0x00E, and that location 0x00D is occupied by the "bad op code" routine which will generate an internal error.

    constant mb_mapper: mb_mapper_memory := (
    -- L0381@0011 0011.  0011.map 0x00;
    0 => X"0011",
    1 => X"000E",
    2 => X"000E",
    3 => X"000E",
    4 => X"000E",
    5 => X"000E",
    6 => X"000E",
    7 => X"000E",
    -- L0386@0013 0013.  0013.map 0x08;
    8 => X"0013",
    -- L0392@0015 0015.  0015.map 0x09;
    9 => X"0015",
    -- L0400@0019 0019.  0019.map 0x0A;
    10 => X"0019",
    -- L0408@001D 001D.  001D.map 0x0B;
    11 => X"001D",
    -- L0416@0021 0021.  0021.map 0x0C;
    12 => X"0021",
    13 => X"000D",
    14 => X"000D",
    15 => X"000D",
    ...

    This mapping ROM is produced automatically by the MCC compiler based on the .map statements in the microcode source. The first .map matches all instruction opcode patterns, meaning the whole mapping ROM is first filled with the entry pointing to badop; subsequent .map statements then override entries based on the pattern they specify. This makes implementing op-codes easy even for complex processors, including ones with "prefixes" such as the Z80 - in that case either the map ROM must be expanded (fast solution, but needs more ROM) or a condition must be implemented to recognize the prefix in each "overloaded" instruction (slower).

        .map 0b????????;                     // opcode sink of last resort
    badop:    goto INTERNAL_ERR;
        
        ////////////////////////////////////////////////////////////////////////////////
        .map 0b00000???;                     // SX (Stack exchange, 0x00 .. 0x07)
        ////////////////////////////////////////////////////////////////////////////////
        traceString 15;                      // trace mnemonic
        ExpStack <= startSwap;               // R <= ExpStack(0), S <= ExpStack(0 + param) 
        ExpStack <= endSwap, goto fetch;     // ExpStack(0) <= S, ExpStack(0 + param) <= R
    
        .map 0x00;                           // SX 0 does nothing, so map to just skip
        traceString 15;                      // trace mnemonic
        goto fetch;

    Execution of each IL instruction is traced on the dedicated trace serial output, if enabled. If not enabled, each traceXX subroutine call still takes 1 cycle, which is an acceptable penalty for the ability to visualize how TBIL executes the Basic command line and/or program.

    The number after traceString is an index into a table of entries which act like string formats:

    traceString 51 - prints Basic line number and IL PC

    traceString 2 - prints depth of IL stack (indents)

    traceString 15 etc... prints SX (mnemonic for the executed instruction)

    For example, this is how "print 17/5" is executed in direct mode:

    Image

    As expected in direct mode (Basic line number...

    Read more »
