markus_zhang a day ago

I must admit I'm a bit confused about DBASIC. I downloaded the zip file that contains DBASIC, hoping to find the source code. I read dbasic.hex, which does say that the DBASIC source code starts at Sector 9, but I couldn't find the related assembly code in p1flst.txt (or, more likely, I'm not skilled enough to spot it).

But reading through the README.TXT, which actually is a DBASIC program, gave me some laughs. Here is one:

  4480 DATA " THIS BRINGS US TO A MAJOR CONTROVERSY:  AMIGA FOLK INSIST THAT THE ORIGINAL"
  4490 DATA " AMIGA BASIC HAS TO BE THE WORLD'S WORST BASIC EVER, WHILE ST AFICIANDOS INSIST"
  4500 DATA " THAT ST BASIC HOLDS PRIDE OF PLACE..."," "
  • throwaway892347 a day ago

    Indeed, the author's choice of language was often funny. It's just too bad the whole lot ends up being rather confusing.

    That was actually the publisher's business model: the software was freeware, and it came with its own self-replication tool, which was welcome given the unusual disk format (typical Atari/MS-DOS 3.5" disks had nine 512-byte sectors per track, whereas the "DTACK Grounded DOS" format used five 1024-byte sectors per track).

    The money was supposed to come from the user manual. For some reason, it is also the main missing piece from the DTACK Grounded archive. A physical copy was for sale on eBay some months ago (https://www.ebay.com/itm/335457799879 )... but with some regret I passed on it.

    I spent some time last year trying to make sense of what appears to be the main legacy of Hal Hardenberg. I was really after what can be described as one of the early JIT-based interpreters, and an incremental one at that!

    The short of it is that I haven't yet figured out how to bootstrap a working DBASIC disk from the zip file you refer to, but there's a working disk image available semi-publicly in Atari circles.

    If you're curious about the disk image, here's an ephemeral link to it: https://file.io/LRmFUj6jqyS0

    You should be able to run it in hatari version >= 2.4.1, provided you run it as an ST (and not as an STE).

    • markus_zhang a day ago

      Thanks. What really confused me was that I expected to find structures such as a jump table for the different BASIC keywords, but instead I found a jump table for opcodes, starting at 0202A0: 000211B4.

      You mentioned this is actually a JIT-based interpreter, so I figure it is really a VM with an IR that uses this jump table. Still, I'm wondering where the code for the frontend is. I have written a couple of simple dispatch interpreters myself, so I would love to see the asm for the frontend (not that the backend isn't interesting, but I have never written one in C, so I lack the skill to recognize its asm).

      But anyway, thanks for the link. I've never used an ST before but will try it out. It is a bit sad that he never made enough money from this product; he said in a newsletter that "I don't have a business license or manuals -- most of them are in Santa Fe's municipal land fill".

      Regarding the manual, I also searched z-lib and libgen but sadly found nothing. I did find some posts on certain ST forums, though, so it may not be as rare as I feared -- and the manual is most likely just about DBASIC programming, not the implementation of DBASIC, so I can use the example programs as references.

nxobject 2 days ago

Honestly, I never got a lot out of their actual homebrew guides – but their commentary on the mini/micro industry and scuttlebutt – Motorola's failings, Intel gnashing their teeth – was hilarious and perceptive in retrospect.

> RIP! BASH! SNARL! TEAR!

> These are the sounds emanating from Intel's council chambers these days. An explanation:

> If you have just returned from a year's stay on the planet Zorn, you will be surprised to learn that the iAPX 432, which had been intended to carry Intel's high performance banner, is now seriously dead. With customers angrily demanding an upward migration path within the Intel family to compete with the forthcoming Motorola 32 bit machine (and even the forthcoming National 32 bit machine), Intel hurriedly announced THEIR 32 bit microprocessor. Several weeks AFTER that hurried announcement, they decided the part number would be the iAPX 386. So far, the part number is ALL that is definite about that device,

> You see, there is one hellaceaus fight going on between the performance faction in Intel, who want to build a real computer (NOT an 8080 emulator) for a change and the compatibility faction, who have their gaze fixed on those warehouses (not one of which has burned down yet).

http://www.easy68k.com/paulrsm/dg/dg13.htm

They also anticipated in 1981 that Apple would be putting out a 68k "minicomputer":

> ...there are a lot of companies who are planning to drive the PDP 11/70 out of the marketplace with $10,000 (base price) 68000 systems. It is rumored that Apple is one of these companies...

http://www.easy68k.com/paulrsm/dg/dg01.htm

mschaef a day ago

Truly from another era.

If you're not familiar, basementcat is right... DTACK grounded refers to the DaTa ACKnowledgment pin on a Motorola 68000. It's the signal that (when grounded) lets the CPU know that data it has requested from memory is ready to be read off the data bus. Systems with slow memory need to be careful that they ground the pin only when the memory has responded.

However, if your memory system could outrun the CPU, you could simply ground the pin and assume that memory always responded in time to satisfy the CPU's read requests. The centerpiece of "DTACK Grounded" was a set of Motorola 68000 CPU boards that (initially) did just that. The memory parts they used were expensive for the time and small, but they were fast, allowed DTACK to be grounded, and let the overall design of these CPU boards be very simple and inexpensive. For a while, these boards were likely the most accessible path to a 16/32-bit microprocessor like the 68000.

What was also interesting was the way these boards were used. They were sold as attached processors for Commodore PETs and Apple ][ machines. The software would then patch the internal 8-bit BASIC implementation to delegate math operations to the attached processor. Believe it or not, the speed improvement offered by the 68000 was significant enough to offset all the other complexity of this implementation choice. The net result was an accelerated and mostly compatible BASIC.

Later in the newsletter, the author talks about pairing an Intel 8087 with a 68000 to get better floating point. (The 8087 was a remarkable chip for the time.) The 8086 needed to run the 8087 is referred to as a 'clock generator'. I guess the net architecture was to be a 6502 host CPU connected to a 68000 attached processor, which in turn used an 8086 plus 8087 to accelerate floating point.

Meanwhile, PC clones had sockets for 8087 chips, Apple was releasing relatively inexpensive 68000 hardware, and the 80386 was well on the way. The writing was on the wall for the DTACK grounded approach to accelerating 8-bit microcomputers, but it must have been interesting while it lasted.

  • cmrdporcupine a day ago

    Yeah the era of "memory can outrun the CPU" was brief and glorious. The approach 80s microcomputers used for graphics required it -- multiplexing the video between the VDP (C64 VIC-II, Atari ST Shifter, etc.) and the CPU on odd bus cycles. Nice and fun.

    By the end of the decade the CPU was running 2-3x the speed of the fastest RAM.

    Now things are soooo complicated.

    Not sure about this alternate reality where Apple's 68000 machines were cheap :-) (I say this as an Atari ST owner).

    68000 has kind of aged well despite not being made anymore -- is perhaps now the only "retro" architecture which can be targeted by a full modern compiler. You can compile Rust, C++20, whatever, and have it run on a machine from 1981. That's kinda cool.

    • fredoralive a day ago

      Well, compared to the first wave of 68000 machines, which were generally high-end workstations from the likes of Sun and Apollo, a $2500 Macintosh is cheap. Apple's belief in this whole “profit margin” thing did mean it couldn’t compete on price with the Amiga and ST though…

      • cmrdporcupine a day ago

        I mostly jest. In the early 90s the prices of 68k Macs actually dropped into the very affordable range. The II series were great machines: priced well, stable, etc. The shift to PowerPC ruined the classic Mac, IMO.

        In that era I had a 486/50 running (early) Linux and my mother had a Mac LC II. I actually really enjoyed using that machine.

        • markus_zhang a day ago

          Just curious why do you think the shift to PPC ruined the classic Mac? I never owned a Mac before but I did buy an iBook G4 because I somehow got fascinated by the PPC machines.

          • cmrdporcupine 19 hours ago

            The PPC architecture is fine enough. The problem was their "operating system" was written as a 68k OS with no memory protection and a weird memory model generally, and for almost a decade they ran with 68k emulation in order to make it all work.

            And it crashed constantly. Very unreliable machines.

            They did crash here and there in the 68k days, but overall they worked pretty well. Albeit with cooperative multitasking, etc.

            But in the mid-90s, with System 7.6, it was like walking through a minefield. E.g., I helped admin an office with a bunch of them, and you couldn't run Netscape and FileMaker at the same time because they just wrote all over each other's memory and puked.

            System 8 and 9 improved things markedly but the reputation was still there.

            Meanwhile they had these grandiose OS rewrite projects that all failed until they ended up buying NeXT... and then spent 5 years turning NeXTstep into OS X.

            In retrospect Apple could have skipped the whole PPC era and done much better for themselves by just switching to x86 (and then ARM as they've done now) after a brief foray through ColdFire.

            Or just jumped straight to ARM instead -- they were an ARM pioneer with the Newton! -- rather than betting the farm on the IBM/Motorola PowerPC alliance, which ultimately ended badly, with power-hungry chips that couldn't keep up with x86.

            • markus_zhang 16 hours ago

              Thanks for sharing. I never used one before so don't know how good/bad it was. My iBook runs OS X so it is pretty good.

              It's a bit embarrassing as the 68k emulation was part of the reason that I got fascinated. But I just want to learn binary translation, not really use them, anyway.

              I think Apple in the early 90s threw things at the wall and hoped something stuck. Bad for consumers, a nightmare for admins, but good for the engineers whose throws landed.

              • cmrdporcupine 14 hours ago

                Early 90s Apple was a bit like Google today, maybe. Big and ineffective at actually delivering, but with a history of innovation and illustrious past and a lot of smart people working there.

                The problem with PowerPC was that Motorola folded and IBM didn't have any real long-term interest in the consumer PC CPU market.

                So they just fell further and further behind.

                • markus_zhang 14 hours ago

                  Interesting. I wonder if their hiring standards fell during that period (because many engineers may have left or refused to join a dying company). Same for Google in the near future.

    • mschaef a day ago

      > Yeah the era of "memory can outrun the CPU" was brief and glorious.

      I don't think I fully recognized at first what was happening when wait states, page mode DRAM, and caches started appearing in mainstream computers. :-)

      > Not sure about this alternate reality where Apple's 68000 machines were cheap :-) (I say this as an Atari ST owner).

      Yeah... I should have cast a broader net. The Atari ST machines were much better deals, IIRC. In any event, the DTACK Grounded PoV was that the 68000 was targeted at minicomputer-scale machines, so anything that fit on a desk at all was arguably going to be inexpensive. (Years later, I did embedded work on 68k-class machines intended to run in low-power environments; they had to be "intrinsically safe" for potentially flammable industrial-control settings. That architecture took a long path from 'minicomputer class' to where it eventually wound up.)

      The other thread this reminds me of is a bit later, Definicon was selling boards like the DSI-780. These were PC AT boards with an onboard 68020/68881 and local memory. Computationally intensive jobs could be offloaded to that board, which was supposedly like a VAX-11/780 on your desk. In some ways, it served a similar role to the DTACK attached processors, but at a slightly later point in time.

      Like the DTACK grounded products, the window of time in which these products had value was oh so short, relatively speaking.

asdefghyk a day ago

Images of the DTACK Grounded board and the Apple II IF card. https://au.pinterest.com/pin/294774738111346090/

My recollection is it was advertised in Byte magazine. The board cost about $600. I was interested in buying one at the time, around 1981, but never did, because it was always a "bit expensive"....

  • markus_zhang a day ago

    Man, $600 was probably a lot of dough in 1981... but then, PCs in that era were an expensive hobby. I was born in the early 80s in China and was very lucky to have a PC around the age of 6 -- a university "lent" my father an 8086 PC for mathematics work, which he used to develop something similar to LaTeX but for Chinese.

asdefghyk a day ago

My recollection is that there were one or more 68000 boards sold with the DTACK pin actually grounded.

cmrdporcupine a day ago

Classic.

Ran the DTACK BASIC on my Atari ST for a bit. Fast as hell, no nonsense.

Unfortunately it was too isolated from the rest of the ecosystem (it was its own OS and could not read or write standard TOS or PC-DOS formatted floppies).