mkl 3 days ago
  • unyttigfjelltol 3 days ago

    This should be the article link. As a non-X user the message thread in the original post doesn't work.

    • numpad0 3 days ago

      It's insane how Twitter lately weaponizes its "doesn't exist" and "cannot display" messages as a login wall and for impression control. The problem doesn't reproduce for first-class citizens with industrialized means of buzzing, so posters won't notice until the logged-out underclass gathers and speaks up outside the system. Crazy stuff.

      • galdosdi 3 days ago

        Yeah, it's so disrespectful, typical of Elon's whole personality.

        For example, if you try to load Twitter in Firefox private browsing, it will say "Error" and then, below that, "Firefox incognito is known to cause issues with X.com", as if to shift the blame away from themselves, when in actuality they are specifically detecting and blocking such browsers in order to deter private browsing.

    • jxub 3 days ago

      Apologies - it didn’t cross my mind!

basilgohar 3 days ago

I love this insider view into this interesting point in computing history, especially about AMD. However, I was a little put off by the glorification of nVidia's shady practices and lock-in policies as key to their current leading position. While technically true, I dislike "ends justify the means"-style thinking.

All this as the OP glorifies AMD's engineering and grit-based culture for driving through all those tough missteps and missed opportunities.

To expand on that, I really do feel AMD has a great engineering culture, but they keep falling into the same traps. They do not invest strongly enough in software support or vendor relationships. Neither of these necessitates the more evil monopolistic practices of vendor lock-in and proprietary, non-free (as in libre) software. If they can navigate that without turning evil, they'd be a company for the ages.

And I can't close without mad respect for Dr. Lisa Su and her admirable leadership, itself bookworthy. Also, quick fact: she and Jensen are cousins!

  • gymbeaux 3 days ago

    On the other hand, AMD was on the brink of bankruptcy and Lisa Su led them out of it and into a triple-digit share price. Most companies with that much debt and that little revenue would have gone bankrupt.

    Lest we forget, the Intel IPC advantage over comparable AMD CPUs was due to some shortcuts that exposed major vulnerabilities in Intel CPUs made from ~2011 to 2019. I'd be curious to see how a Spectre- and Meltdown-patched Intel CPU fares against its AMD competitor NOW. Some of the performance hits were brutal: 20%+ in some workloads.

    Nvidia was pushing AMD out of the GPU market back when GPUs were effectively only used for gaming and while GameWorks was predatory, you can’t really blame them for having the cooler-running, quieter, more energy-efficient GPUs going back to the Maxwell line (GTX 9x0). CUDA didn’t screw AMD until recently… but in 2014, people were picking Nvidia because the GPUs were considerably “better”. AMD had the best bang for buck back then, but you’d have more power consumption and heat output, and the drivers tended to be buggy. The bugs would be fixed, but it really sucked for people trying to play games on release day.

  • woooooo 3 days ago

    Nvidia was pushing CUDA forward for over a decade before it started getting serious commercial traction. It's not like they blocked anyone else from developing viable GPGPU tech, they were just the only ones pushing it.

    For like 8 years their drivers on Linux were a nightmare and AMD could have come in and done better.

    • shmerl 3 days ago

      > For like 8 years their drivers on Linux were a nightmare and AMD could have come in and done better.

      AMD eventually did while Nvidia's drivers remained a nightmare almost until these days. But sure, AMD could have done it sooner.

      • paulmd 3 days ago

        > AMD eventually did while Nvidia's drivers remained a nightmare

        and yet that trillion-dollar valuation of the last decade was built with customers almost entirely running on those "nightmare" Linux drivers, while AMD's Linux drivers crash running the sample app on supported hardware+OS, and nobody at AMD cared until finally a tech-bro with a loud enough platform shamed them into fixing it...

        ... and this is something like AMD's third crack at the apple, and the first three sets of drivers (one of which is literally a Vulkan-branded spec) are just as non-functional today as rocm was a year ago.

        (OpenCL, Fusion HSA/AMD APP, Vulkan Compute/SPIR-V... all still broken so badly that Octane called them out for being unable to successfully compile their renderer and for lack of vendor support, so badly that Blender pulled support after years of turbulent and poorly-performing attempts to work with AMD, etc)

        • shmerl 3 days ago

          Nvidia only cares about a specific market. I.e. it doesn't care about desktop users. That's what I was talking about. So despite their pools of cash, Nvidia is a trash company when it comes to Linux support.

          • wmf 3 days ago

            AFAIK a lot of Hollywood visual effects are done on Linux + Nvidia so they probably support that market.

            • shmerl 3 days ago

              Not really familiar with it, but hopefully they can get unstuck from Nvidia, especially on Linux. Only very recently have things started improving, it seems, and not even through Nvidia's own effort but through the outside community working on nova + nvk.

              • p_l 2 days ago

                Nvidia's Linux driver efforts were pretty much always driven by that market; it's not getting unstuck because it's supported well.

                Hell, Nvidia drivers might often be complained about, but for years I would take Nvidia because the crappiness was manageable, and close to nothing if you were in the target market (desktop workstations running X11 on only Nvidia GPUs; the only issue was if you were running the very latest kernel).

                • shmerl 2 days ago

                  Half-baked support that fixes major issues at the rate of once per decade can hardly be called good support. I would stay away from such garbage when possible.

                  Now that tools like Blender and the like are increasingly picking up Vulkan support, there is no reason for the above to use Nvidia anymore.

                  • p_l 5 hours ago

                    For the targeted use case, their drivers tended to work just fine.

                    It was when you went outside said use case that things started getting worse, and you had to wait a long time for fixes. Sometimes it was because changes in XFree/X.Org were effectively fixated on how some other vendors did things (cough Intel cough), or involved things that effectively nobody wanted to spend engineering effort to fix properly (like rebuilding the rendering path to handle hybrid graphics properly, when hybrid graphics came into the world years after a critical set of X.Org devs decided to stop any real development on X.Org...).

                    Vulkan Compute also is nowhere close to feature parity with CUDA, so not sure it would be picked up instead.

                    • shmerl 33 minutes ago

                      Well, Wayland support was a sore point for Nvidia for a very long time. And it's the prime example of why they don't see Linux as their normal support target. Otherwise they would have worked with upstream a long time ago, instead of just trying to start now.

    • CoolGuySteve 3 days ago

      AMD and Apple tried to push OpenCL, but its design (a C-like kernel compiled to the GPU with LLVM and managed by the Khronos consortium) tended to lag behind CUDA in absolute performance, since CUDA was able to track evolutions in GPU design more closely.

      Nowadays almost nobody cares about OpenCL.

      • jjoonathan 3 days ago

        The feature lag wasn't the problem; the bugs were the problem. The only reliable OpenCL implementation was the one from Nvidia, which meant it tended to drive people towards Nvidia rather than steal them away.

      • pjmlp 3 days ago

        Also, the reason behind Apple's split with Khronos apparently relates to how OpenCL was managed by them.

        • talldayo 3 days ago

          "Hey Khronos, can we tweak the OpenCL spec to be even more restrictive and higher-level, then rebrand it under our proprietary 'Metal' architecture so we can license it out to our competitors?"

          "...no, but you could expand on OpenCL or Vulkan compute if you wanted. There are other spec stakeholders, we can't give you carte-blanche control, Apple."

          "Why do you insist upon mismanaging the industry's APIs? Screw you guys!" <Beginning of mid 2010s "Khronos Drought" at Apple Computers>

    • luyu_wu 3 days ago

      The obvious issue with both your points is that Nvidia's competitors did do exactly that. AMD has had workable Linux drivers for many years now, and numerous alternatives to CUDA were pushed.

      • AlexandrB 3 days ago

        A common talking point is that CUDA is a formidable moat for Nvidia, but, as someone who has never done AI dev, I'm curious to understand what makes CUDA so sticky. From an outsider's perspective it looks like a re-run of DirectX vs. everything else, but AI is not like gaming, and end users often don't have to run the model themselves. So it seems like the network effects should be smaller than those for a graphics API.

        • badsectoracula 3 days ago

          I don't know how it is nowadays, but I remember trying CUDA back when the GeForce GTX 280 was still a high-end GPU. I didn't do anything fancy, I just tried to write a simple raytracer to get a feel for how it'd work.

          The experience was incredibly simple: write C like usual, but annotate a few C functions with some extra keywords and compile using a custom frontend/preprocessor/whatever-nvcc-was instead of gcc (I was on Linux, and BTW I heavily contest the notion that Nvidia drivers on Linux were a "nightmare"; they always worked just fine, with both performance and features comparable to their Windows counterparts, while ATI/AMD had buggy and broken drivers for years). Again, the experience was very simple; I even just copy/pasted a bunch of existing C code I had and it worked.

          Later I tried to use OpenCL, which was supposedly the open alternative. That one felt way more primitive and low level, like writing shaders without the shading bits.

          In a way, as you wrote, it was kinda like DirectX: that is, CUDA was like using OpenGL 1.1 with its convenient and straightforward C API, and OpenCL was like using DirectX 3 with its COM-infested execute buffer nonsense.

          After that I never really used CUDA (or OpenCL for that matter), but it gave me the impression that Nvidia put way more effort into developer experience.
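
          (For anyone who hasn't seen it: a minimal sketch of that single-source style, not the code from back then. It assumes a current CUDA toolkit and uses unified memory for brevity, which didn't exist in the GTX 280 days; the point is that the GPU code is just an annotated C function in the same file.)

            // vec_add.cu, compile with: nvcc vec_add.cu -o vec_add
            #include <cstdio>
            #include <cuda_runtime.h>

            // The only CUDA-specific bits: the __global__ keyword and the <<<...>>> launch.
            __global__ void add(const float* a, const float* b, float* c, int n) {
                int i = blockIdx.x * blockDim.x + threadIdx.x;
                if (i < n) c[i] = a[i] + b[i];
            }

            int main() {
                const int n = 1 << 20;
                float *a, *b, *c;
                // Unified memory keeps the host side looking like ordinary C.
                cudaMallocManaged(&a, n * sizeof(float));
                cudaMallocManaged(&b, n * sizeof(float));
                cudaMallocManaged(&c, n * sizeof(float));
                for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

                add<<<(n + 255) / 256, 256>>>(a, b, c, n);
                cudaDeviceSynchronize();

                printf("c[0] = %f\n", c[0]);  // expect 3.0
                cudaFree(a); cudaFree(b); cudaFree(c);
                return 0;
            }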

        • disgruntledphd2 3 days ago

          Nvidia have invested a lot in CUDA, and they have C & Fortran bindings for a lot of scientific stuff, apart from all the DL/Gen AI stuff that's super hot right now.

          Like, I started using CUDA (through frameworks) over ten years ago, and basically nobody has come up with anything competitive since then.

          • kkielhofner 3 days ago

            > Nvidia have invested a lot in CUDA,

            This is a significant understatement. For quite some time Jensen has been saying repeatedly that 30% of their R&D spend is on software. With the money-printing machine that is Nvidia, if that holds, they're going to continue to rocket ahead of competitors in terms of delivering actual solutions.

            The "What are you talking about? AMD/Intel runs torch just fine!" crowd clearly hasn't seen things like Riva, DeepStream, NeMo, Triton Inference Server/NIM, etc. Meanwhile AMD (ROCm) still struggles with flash attention...

            What these hardware-first (only?) companies like AMD don't seem to understand is that people buy solutions, not GPUs. It just so happens that GPUs are the best way to run these kinds of workloads, but if you don't have a holistic and exhaustive overall ecosystem, you end up at single-digit market share vs. Nvidia at ~90%.

            • mistrial9 3 days ago

              Chicken-and-egg arguments... good points and not untrue, but look elsewhere in this topic and see extensive anti-competitive behavior, questionable license practices, deceptive public statements and deceptive handling of binary blobs. Very much like Intel: excellent tech in certain places, very mob-like business behavior in other places.

              "What are you talking about? AMD/Intel runs torch just fine!" refers indirectly to the value of having competition in markets, rather than jumping on the (well-funded, slick) monopoly bandwagon.

        • pjmlp 3 days ago

          Tooling.

          Since CUDA 3.0, NVidia has embraced a polyglot stack, with C, C++ and Fortran at the center, and PTX for anyone else.

          Followed by changing CUDA memory model to map that of C++11.

          Khronos never cared for Fortran, and only designed SPIR, when it became obvious they were too late to the party.

          So not only does CUDA have first-class tooling for C, C++ and Fortran, with IDE integration in Visual Studio and Eclipse and a graphical GPU debugger with all the goodies of a modern debugger, it also welcomes any compiler toolchain that wants to target PTX.

          Java, Haskell, .NET, Julia, Python JITs... there are plenty to choose from, without going through a "compile to OpenCL C99" alternative.

          Finally, there is the myriad of libraries to choose from.

          CUDA is not only for AI, by the way.
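
          (To give a concrete flavor of the libraries point above: a minimal sketch assuming the Thrust library that ships with the CUDA toolkit. The GPU work is a stock parallel algorithm plus a small functor; no hand-written kernel and no separate kernel file.)

            // saxpy_thrust.cu, compile with: nvcc saxpy_thrust.cu -o saxpy_thrust
            #include <thrust/device_vector.h>
            #include <thrust/transform.h>
            #include <thrust/reduce.h>
            #include <cstdio>

            // A small functor; __host__ __device__ lets Thrust run it on the GPU.
            struct saxpy_op {
                float a;
                __host__ __device__ float operator()(float x, float y) const { return a * x + y; }
            };

            int main() {
                const int n = 1 << 20;
                thrust::device_vector<float> x(n, 1.0f);  // data lives on the GPU
                thrust::device_vector<float> y(n, 2.0f);

                // y = a*x + y, expressed as a library algorithm instead of a hand-written kernel.
                thrust::transform(x.begin(), x.end(), y.begin(), y.begin(), saxpy_op{3.0f});

                float sum = thrust::reduce(y.begin(), y.end(), 0.0f);
                std::printf("sum = %.1f (expected %.1f)\n", sum, 5.0f * n);
                return 0;
            }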

        • p_l 2 days ago

          The real moat of CUDA is that CUDA... works. Simply works out of the box, even on cheapest GPUs. Unless you want some specific high end stuff, everything will work on the cheapest GPU of given generation, with the same base tooling.

          And because of that, their OpenCL implementation also works better than others. So there's more tooling using it, not just from Nvidia, because it. just. works.

          Compare this with AMD, whose latest framework is a total mess of "will it work on this GPU?", sometimes needing custom wrangling to enable, etc., and it's effectively supported only on the most expensive compute-only cards.

        • deredede 3 days ago

          The difference is not just about APIs; CUDA has a single-source-file model that is dead easy to use, whereas last I checked every competitor still had an outdated manual loading process that adds significant friction.
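
          (For contrast, a rough sketch of what that manual-loading style looks like, shown here with CUDA's own low-level driver API since the OpenCL host flow has the same shape; "add.ptx" and the "add" kernel are hypothetical, built separately beforehand.)

            // driver_load.cu, compile with: nvcc driver_load.cu -lcuda -o driver_load
            #include <cuda.h>
            #include <cstdio>

            int main() {
                CUdevice dev; CUcontext ctx; CUmodule mod; CUfunction fn;

                cuInit(0);
                cuDeviceGet(&dev, 0);
                cuCtxCreate(&ctx, 0, dev);

                // The kernel lives in a separate, pre-built file instead of this source file.
                cuModuleLoad(&mod, "add.ptx");         // hypothetical pre-compiled kernel
                cuModuleGetFunction(&fn, mod, "add");  // looked up by string name

                CUdeviceptr a, b, c;
                int n = 1 << 20;
                cuMemAlloc(&a, n * sizeof(float));
                cuMemAlloc(&b, n * sizeof(float));
                cuMemAlloc(&c, n * sizeof(float));

                // Arguments are passed as an array of pointers, not a normal function call.
                void* args[] = { &a, &b, &c, &n };
                cuLaunchKernel(fn, (n + 255) / 256, 1, 1, 256, 1, 1, 0, nullptr, args, nullptr);
                cuCtxSynchronize();

                printf("launched\n");
                return 0;
            }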

          • zozbot234 3 days ago

            Doesn't SYCL also allow for a single-source-file model these days?

            • deredede 3 days ago

              It is supposed to, yes. I was never able to set it up (admittedly I have not tried in a couple of years since I am not working with GPUs anymore) so I don't know how well it holds up.

  • belter 3 days ago

    In the GPU area AMD lost, and will continue to lose, to Nvidia, because they don't seem to get a grip on software and drivers. And that does not bode well for their long-time CEO.

    • latchkey 3 days ago
      • belter 3 days ago

        Just the first link review you posted reinforces my argument:

        "...But we must now talk about the elephant in the room, and that is AMD’s software stack. While it is absolutely night and day from where it was when we tested MI210, ROCm is nowhere near where it needs to be to truly compete with CUDA..."

        • latchkey 3 days ago

          You're pointing at the sun and saying "see, it is bright!". Nobody is pretending that AMD does not need to fix their software stack.

          AMD did not really turn their attention to AI until about Oct of last year. Now that they have, it will take a bit of time to correct the course of the ship, but I know for certain that it is all hands on deck at this point. One sign of this is that we're seeing more frequent and substantial "night and day" improvements to ROCm.

          The lifecycle of hardware is years. MI300x is a substantial leap. MI325x is another one. The rest of the hardware roadmap (years out) is extremely impressive. Software is a much shorter lifecycle and can be iterated on more easily. Expect to continue to see improvements here over the coming years.

    • diamond559 3 days ago

      The more this is blindly repeated, the more you know it's BS.

  • jjoonathan 3 days ago

    She turned the company around and got it on the right path, but in interviews I get the feeling that she might also be responsible for the "Hardware 1st, 2nd, 3rd, 4th.... eh, maybe software can be 5th" culture and AMD's deep denial that it has a problem.

    https://news.ycombinator.com/item?id=40790924

    That was OK for the CPU turnaround, but on the GPU front it completely shut them out of the first rounds of the AI party and maybe a trillion in market cap.

    • bavell 3 days ago

      I'm hopeful and optimistic for AMD but if anything were to make me bearish on their prospects, it'd be this.

  • doix 3 days ago

    Yeah, I really feel like AMD is struggling with the software aspect. Even back when the GPU division was still ATI and AMD bought them, the ATI drivers were garbage compared to Nvidia's (from my PC gaming experience). After a few AMD and ATI cards, I just accepted the Nvidia tax, where my cards were more expensive and on paper worse, but in practice worked better.

    I'm really surprised AMD isn't throwing a whole bunch of money on emulating CUDA. If they could "just" make CUDA work on AMD cards, it feels like Nvidia's position would be severely weakened.

    Kind of like how Valve invested heavily into Proton and now gaming on Linux is pretty much fine.

    • luyu_wu 3 days ago

      I'm not sure emulating CUDA would be legal; look at ZLUDA as an example. It was originally funded by AMD but got cut, for what I presume would be legal reasons. ZLUDA does work amazingly well, though, in my experience!

    • mook 3 days ago

      AMD also doesn't understand that CUDA got big because it worked on cheap consumer cards; once things were working, people got interested in expensive specialized cards. AMD's stack is still focused on the high end only, but there's no ecosystem to support it.

      • rcleveng 3 days ago

        To me, this is the most important point and what AMD is missing in their current strategy. I can take an off-the-shelf, easy-to-get 4070 or 4080 and use it with CUDA to learn.

        AMD's strategy for people wanting to learn is basically no strategy.

        It's always been the software holding them back, and it still is. They need to invest in the ecosystem, not just the things that are easy to justify as a revenue driver.

    • pjmlp 3 days ago

      That is what ROCm and HIP were supposed to be, somehow, but even that isn't really CUDA, as in the polyglot programming environment with C, C++ and Fortran first, plus others, followed by Python JITs, libraries, an IDE, and a graphical GPU debugger.

  • ksec 2 days ago

    >However, I was a little put off by the glorification of nVidia's shady practices and lock-in policies as key to their current leading position.

    What were their shady practices and lock-in policies?

  • DEADMINCE 3 days ago

    > However, I was a little put off by the glorification of nVidia's shady practices and lock-in policies as key to their current leading position. While technically true, I dislike "ends justify the means"-style thinking.

    Personally, I have no issue with "ends justify the means"-style thinking as a blanket rule; often it's perfectly appropriate.

    I would argue it is in this case, where Nvidia was playing the game by the rules. If there is an issue with how they played, then the government should change the rules.

    The people in power in the US don't want that though.

AlexandrB 3 days ago

> SUPERIOR PRODUCTS LOSE TO SUPERIOR DISTRIBUTION LOCK-INS & GTM.

This takeaway was a little odd to me in the context of 2008. I had been an AMD stalwart in my PCs since about 2000 (Athlon Thunderbird), but IIRC in 2008 Intel had the better processor. Better single core performance, better performance/watt, and I think AMD processors tended to have stability issues around this time. I remember I built a PC in 2009 with a Core processor for these reasons.

Obviously this is a niche market (gaming PC) perspective. But I don't think it was so clear cut.

  • BirAdam 3 days ago

    Until the later Core 2 Quad CPUs, AMD’s stuff was “technologically” better in the multi core workloads. The problem with that is that multi core workloads were uncommon at the time. This is where the “AMD Fine Wine” meme originated. By the time people had moved on to better things, the greatness of AMD’s technologies became apparent.

    Personally, I’ve always liked Intel for stability reasons. Running Intel chipsets and CPUs, I’ve just had fewer issues. I’m an enthusiast, so I do spend more than I should on both Intel and AMD rather frequently… but now, I’m hungry for an Ampere system. My wallet is crying.

  • tails4e 3 days ago

    Agree. It took a truly superior product at a lower cost to make a dent in Intel's dominance in servers, all the while Intel tried their best to flex their lock-in muscle.

    That happened well after 2008, with the advent of Zen, chiplet-based tech and better perf/W.

    • treprinum 3 days ago

      Ryzen 1 was far from superior; performance-wise it was behind Intel, at around Haswell level, but it brought the first reasonably priced octa-core x64 to the masses.

difosfor 3 days ago

> I seriously wish Nvidia and AMD could merge now – a technology cross-licensing that takes advantages of each other’s fab capabilities is going to help a lot in bringing the cost of GPU cycles down much further!

Given Nvidia's track record I'd sooner imagine them just slacking off and overcharging more for lack of competition. I wish AMD would actually compete with them on GPUs (for graphics, not AI). Interestingly Intel seems to be trying to work up to that now.

  • paulmd 3 days ago

    the reason NVIDIA has a lead right now is largely because they didn't slack off during the Maxwell era and kept iterating even after the 22nm/20nm node fell through. AMD decided that hey, we can't really afford this right now, and NVIDIA can't shrink either, right? But NVIDIA slipped in a major architectural iteration that was basically a full node worth of efficiency gains, and that really has put them in the position they are in today.

    Being able to take a trailing-node strategy during the Turing/Ampere years, being able to run a full node behind RDNA1/2 and use dirt-cheap Samsung crap and last-gen TSMC 16FF/14FFN while still fighting AMD to a standstill on efficiency is entirely the result of AMD slacking off.

    AMD themselves have said they slacked off. "Lost focus" is the quote.

    https://www.youtube.com/watch?t=1956&v=590h3XIUfHg

gpderetta 3 days ago

> We did launch a “true” dual core, but nobody cared. By then Intel’s “fake” dual core already had AR/PR love.

Practicality beats purity 100% of the time. This echoes "Worse is better".

btouellette 3 days ago

Is he really trying to say that AMD had a superior product in the Core 2 Duo era and Intel was only dominating due to marketing? It's hard to take the rest of his opinions seriously when he starts with that take.

  • luyu_wu 3 days ago

    I'm not sure if you're familiar with CPU history, but this is roughly true. Intel's catch-up to multicore offerings was trippy and severely lagged behind AMD's. I think it's often forgotten that CPU leadership has fluctuated between different companies many times in the past!

    • btouellette 3 days ago

      I'm quite familiar, as I worked for Intel for over a decade as an engineer. It's absolutely true that leadership has fluctuated a lot, but the 2003-2010 era had fairly clear-cut leaders for each generation. AMD was the choice for just about everything through the Athlon 64 single-core era, but the Core 2 Duo run relegated them to superiority only in the very bottom end of the market for a long time.

      https://www.anandtech.com/print/2045/

      • adrian_b 3 days ago

        Core 2 as an individual core was significantly better than AMD's competing core (e.g. by being able to issue 4 simultaneous instructions vs. 3 instructions for AMD).

        Nevertheless, the integration of multiple cores into an Intel multiprocessor was very inefficient before Nehalem (i.e. the cores were competing for a shared bus, which prevented them from ever reaching their maximum aggregate throughput, unlike in the AMD multiprocessors, which had inherited the DEC Alpha structure, with separate memory links and peripheral interfaces and with an interconnection network between cores, like all CPUs use now).

        However this was noticeable at that time mostly in the server CPUs and much less in the consumer CPUs, as there were few multithreaded applications.

        Core 2 still lagged behind AMD's cores for various less mainstream applications, like computations with big integers.

        Only two generations later, after Core 2 and Penryn, with Nehalem (the first SKU at the end of 2008, but the important SKUs in 2009), did Intel become able to either match or exceed AMD's cores in all applications.

      • highfrequency 3 days ago

        Thanks for the color! From the article you linked, it looks like the Twitter thread is quite misleading in claiming that Intel simply slapped two cores together to achieve superiority over AMD. Your article notes a big process improvement (65nm vs 90nm) which allowed for 2x the transistors on a smaller die size along with faster clock and lower memory latency. Curious to get your take.

        • adrian_b 3 days ago

          Intel's 90 nm CMOS process was a disaster, at least in its variant for desktop or server CPUs, all of which had an unbelievably high leakage power consumption (the idle power consumption of a desktop could be more than a half of its peak power consumption).

          On the other hand, AMD's 90 nm CMOS process has been excellent.

          With its 65 nm process, Intel recovered its technological leadership, but that was not the most important factor of success, because AMD's 65 nm process was also OK and it became available within a few months of Intel's.

          AMD lost because they did not execute well on the design of their new "Barcelona" generation of CPUs (made also in 65 nm, like Core 2). While Intel succeeded in delivering Core 2 even earlier than their normal cadence for new CPU generations, AMD launched Barcelona only after several months of delays, and even then it was buggy. The bugs required microcode workarounds that made Barcelona slow in comparison with Core 2, and that started the decline of AMD, after a few years of huge superiority over Intel.

      • creativeSlumber 3 days ago

        Is it possible that your viewpoint is biased against AMD since you worked for Intel during that time?

        • btouellette 3 days ago

          The benchmarks for all these CPUs, which my personal viewpoint is based on, are all out there. Anandtech was my favorite source for this at the time due to relatively detailed testing and a clear understanding of the implications of architecture decisions. The complete history of their contemporaneous reviews is still online, and userbenchmark.com has independent data on these older CPUs as well, although obviously with less control over potential mitigating factors.

          AMD was struggling to release CPUs that were competitive against year-old Intel Core 2 Duos, and that remained the status quo through their Bulldozer architecture. Things started turning around with Ryzen, when a combination of architecture improvements and typical workloads taking more advantage of multicore flipped the script.

          The bits about "true" multicore are also sketchy considering Bulldozer was using shared L2, fetch/decode, and floating point hardware on each module and calling a module two "cores" for marketing purposes.

          https://www.anandtech.com/show/4955/the-bulldozer-review-amd...

        • keyringlight 3 days ago

          K7/K8 were great, and while the follow-on K10 Athlon II/Phenom/etc. were definitely not bad, they weren't great, and they were competing against Conroe/Core 2 onwards. That kind of tag-team trading of places highlights how (mostly) good the CPU market is now: both AMD and Intel are putting out some really nice products with enough variety that you can pick the most suitable one for you, but there's no default "just pick [company]".

        • wmf 3 days ago

          Nah, btouellette is correct. AMD only led for a few years around 2003-2005.

          • Delk 3 days ago

            AMD did become at least competitive in high end CPUs with the original Athlon or Athlon XP. Not sure whether they were faster than the Pentium 3 but they weren't trailing.

            So perhaps a bit more than a couple of years, but my impression is also that they fell behind on (single-thread) performance for a long time after that.

            I've also understood that in more ancient history AMD CPUs sometimes beat contemporary Intel parts in performance, although releasing their parts later than Intel. I'm not sure that's relevant to any remotely recent developments anymore though.

    • alphabeta2024 3 days ago

      The OP is right. Pentium D was a single generation in which Intel's offering was worse than the Athlon 64 X2. But Intel quickly shifted to the Core 2 Duo architecture and it was much better than AMD's.

      • adrian_b 3 days ago

        Since the introduction of Opteron at the beginning of 2003 until the introduction of Core 2 at the middle of 2006, the AMD CPUs were vastly superior to any kind of Pentium 4, not only to Pentium D.

        This was much more obvious in servers or workstations than in consumer devices, because the kinds of applications run by non-professionals at that time were much more sensitive to the high burst speeds offered by Pentium 4 with very high clock frequencies, than to the higher sustained performance of the AMD CPUs.

        In 2005, I had both a 3.2 GHz Pentium 4 (Northwood, 130 nm) and a 3.0 GHz Pentium D (Prescott, 90 nm). With either of them, compiling a complete Linux distribution from sources took almost 3 days of continuous, 24-hours-per-day work.

        After I bought an Athlon X2 of only 2.2 GHz, the time for performing the same task dropped to much less than a day. Even for some single-threaded tasks, which contained many operations that were inefficient on Pentium 4, like integer multiplications or certain kinds of floating-point operations, the 2.2 GHz AMD CPU was several times faster than the 3.2 GHz Pentium 4.

        At work, the domination of the AMD CPUs was even greater. Each server with Opteron CPUs that we bought was faster than several big racks with Sun or Fujitsu servers that were used before. Intel did not have anything remotely competitive. At the beginning of 2006, on my laptop with an AMD Turion I could run professional EDA/CAD programs much faster than on the big Sun servers normally used for such tasks. Intel had nothing like that (i.e. the 32-bit Intel CPUs could not use enough memory to even run such programs, so the question whether they could have run such programs fast enough was irrelevant).

        Of course, half a year later the competition between Intel and AMD looked completely different.

tambourine_man 3 days ago

I never worked at a large company and he was right there, but there are so many remarkable things in this thread that it's hard not to be surprised.

Not understanding the importance of GPUs in 2006, or of being first-to-market, while confusing OpenGL with OpenCL (twice), survival bias (BELIEVE IN YOUR VISION)…

andruby 3 days ago

It's unbelievable that INTC market cap is only 133B, AMD is only 274B and NVDA is 3,130B. That's 23x INTC and 11x AMD.

  • drexlspivey 3 days ago

    On the latest quarter, NVDA’s net profit was 3 times AMDs revenue

alberth 3 days ago

> I spent 6+yrs @ AMD engg in mid to late 2000s helping design the CPU/APU/GPUs that we see today.

Is that a fair statement to make, given ~20 years have passed?

  • stagger87 3 days ago

    No, and the content of the tweets certainly doesn't support his claim. As someone else pointed out, he's already tweeted this "story" in the past. It's all low-effort self-promotion for his brand.

  • JonChesterfield 3 days ago

    It seems a bit dubious, since there's the trainwreck of Bulldozer on x64 sitting between then and the Ryzen cores that managed to dethrone Intel. I'm pretty sure the GPUs were VLIW without direct access to system memory too. I think one could sketch a lineage from the PS4 processor through to El Capitan with a bunch of handwaving, but that also seems to be after that time period.

  • 1oooqooq a day ago

    He's not an engineer anymore. That's MBA math. It checks out.

Zambyte 3 days ago

> I seriously wish Nvidia and AMD could merge now – a technology cross-licensing that takes advantages of each other’s fab capabilities is going to help a lot in bringing the cost of GPU cycles down much further!

It's interesting that they see such a monopoly as something that would bring costs down. It seems more to me like competing with AMD does much more to keep Nvidia's costs down (if they can be described as "down") than combining resources would.

lotsofpulp 3 days ago

> a technology cross-licensing that takes advantages of each other’s fab capabilities is going to help a lot in bringing the cost of GPU cycles down much further!

What does this mean? I thought neither has any "fab" (manufacturing) facilities.

modeless 3 days ago

> In fact, AMD almost bought Nvidia but

Imagine the wealth destruction if they had merged way back then! I don't love the way mergers are regulated today but I do feel like preventing companies from growing too big through mergers is desirable.

nickpeterson 3 days ago

People keep yelling about Nvidia stock, but that feels like a huge bubble. AI disillusionment will hit and the stock will implode. Nvidia hasn't made any inroads on producing actual systems, just GPUs. Once Apple or Microsoft have a fast enough chip (TOPS-wise), nobody will care about Nvidia's lead except in the datacenter. Seems like a failing position to me.

  • eitally 3 days ago

    I think I'm not adequately understanding your comment. Nvidia has huge software teams building accelerators that optimize the application of their GPUs to all kinds of workloads. They also now offer their own "cloud" and have partnered with the hyperscalers for integration as well as GTM. Those same hyperscalers are also almost single-handedly driving Nvidia's growth the past few years. The cloud market is still growing and will continue to do so. Nvidia's consumer business almost doesn't matter anymore.

    • bob1029 3 days ago

      I think much of this boils down to perspectives some of us have regarding the value of in-house manufacturing capabilities relative to design time & software capabilities.

      I get arguments that maybe one fab is better than the other, but what about all of them combined? All of our modern chipmaking capability all at once.

      Nvidia has no factories. You can ship their output on a USB flash drive. Valuation: ~3.1T.

      Intel, TSMC and Samsung have all the factories. Every modern chip made on earth comes from this circle. Combined valuation: ~1.1T.

      This is simple napkin math for this arbitrary retail investor. I don't know when the music will stop but it absolutely will.

      • wormlord 3 days ago

        I agree with this sentiment overall, but we have to remember that for valuations, profit and growth are really all that matter, even if an industry is irreplaceable.

        I think I read this from Warren Buffett, but basically as of the early 2000s the airlines, in their entire history, had only managed to break even. If you had bought an airline company in 1940 and held it until 2000, you would have never profited from it. The business itself would be worth significantly more, but your only exit strategy would be to sell.

        I haven't looked at any of these company's balance sheets, but it might be that semiconductor fabbing is less profitable and has less room for growth. In the short term that's all that matters. The question is if Nvidia can hold on to its current growth and margins (I don't think it can).

      • matwood 3 days ago

        Are you saying a software company has less value for some reason? See MS. Or are you saying a hardware company that doesn't own its production has less value? See Apple.

        I think we've seen over many years at this point that there is huge value in the final product.

      • _zoltan_ 3 days ago

        Hard-depreciating assets (factories) plus selling chips is a far worse business than actually selling an ecosystem. NVDA is just getting started :-)

  • CoastalCoder 3 days ago

    I've worked on AI-hardware software stacks for several well known companies.

    It's impossible to overstate the advantages that CUDA, its documentation, toolchain, and Nsight software provide to outside developers.

    The closest thing I've seen to Nsight Systems is Intel's VTune. But that's just one piece in a much larger puzzle, and last I checked, VTune was only for Intel CPUs.

    AFAICT, Nvidia's software seriously reduces the ramp-up time for new developers to write kernels or apps that make good use of the available hardware.

    E.g., nsys-ui (like VTune) recognizes anomalous profile results, and makes solid suggestions for next steps. I don't know of other software that does this (well), although maybe I'm just uninformed.

  • Jlagreen 3 days ago

    This is wrong; please check what DGX is.

    DGX is a complete data center offering from Nvidia, where Nvidia supplies everything themselves:

    - CPU+GPU from Nvidia
    - Rack from Nvidia
    - Interconnects + networking from Nvidia
    - SW, from the OS to the application frameworks, from Nvidia

    The only thing Nvidia really needs partners for with DGX is memory (RAM + SSD).

    One reason Nvidia's margins are so high is that they provide the whole data center, while the competition has to split margins (AMD/Intel + SMCI/Dell + Broadcom/Arista + Cray/HPE).

  • redleader55 3 days ago

    Nvidia doesn't only produce GPUs. They have their hands in all sorts of interesting technologies, from high-speed, high-performance NICs to switches. The GPUs in the data centers don't just function as individual units; they are able to use the "backend" NICs of the server, in conjunction with Nvidia's NCCL library, to send data from GPU to GPU as close as possible to "zero-copy", with very good horizontal scalability. As you can imagine, the network fabric behind this is quite important, and Nvidia having one of the best InfiniBand switches helps maintain this monopoly.

    There are many companies working on alternatives at the moment, but it will be a while until Nvidia can be replaced.
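
    (For the curious, a minimal sketch of what that looks like on the programming side, assuming NCCL's standard single-process API; NCCL picks the NVLink/PCIe/InfiniBand transport underneath.)

      // allreduce_demo.cu, minimal NCCL all-reduce across the GPUs in one box.
      // Build (paths may vary): nvcc allreduce_demo.cu -lnccl -o allreduce_demo
      #include <nccl.h>
      #include <cuda_runtime.h>
      #include <cstdio>
      #include <vector>

      int main() {
          int ndev = 0;
          cudaGetDeviceCount(&ndev);
          if (ndev == 0) return 1;

          std::vector<ncclComm_t> comms(ndev);
          std::vector<float*> buf(ndev);
          std::vector<cudaStream_t> streams(ndev);
          const size_t count = 1 << 20;

          // One communicator per local GPU.
          ncclCommInitAll(comms.data(), ndev, nullptr);
          for (int i = 0; i < ndev; i++) {
              cudaSetDevice(i);
              cudaMalloc(&buf[i], count * sizeof(float));
              cudaMemset(buf[i], 0, count * sizeof(float));
              cudaStreamCreate(&streams[i]);
          }

          // Sum the buffer across all GPUs; the transport choice is NCCL's job.
          ncclGroupStart();
          for (int i = 0; i < ndev; i++)
              ncclAllReduce(buf[i], buf[i], count, ncclFloat, ncclSum, comms[i], streams[i]);
          ncclGroupEnd();

          for (int i = 0; i < ndev; i++) {
              cudaSetDevice(i);
              cudaStreamSynchronize(streams[i]);
              cudaFree(buf[i]);
              ncclCommDestroy(comms[i]);
          }
          printf("all-reduce issued on %d GPU(s)\n", ndev);
          return 0;
      }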

  • htrp 3 days ago

    >Once Apple or Microsoft have a fast enough chip (TOPs wise), nobody will care about nvidia lead except in the datacenter.

    That's gonna take a while.

  • imtringued 3 days ago

    Google has powerful TPUs. The real question is, why isn't Meta building their own?

    • wmf 3 days ago

      Meta is on their Nth generation.

theandrewbailey 3 days ago

> We didn’t want a GPU company so much that the internal joke was AMD+ATI=DAMIT.

I remember reading that on places like the Register, but they kept the second A, so DAAMIT.

chollida1 3 days ago

Minor curiosity point... Does anyone know why "engg" has two g's here?

I'm sure it means engineering, but I've never seen that abbreviation. He mentioned he's from India; is that where this comes from, or is it just an individual quirk?

OliverGuy 3 days ago

Why is AMD green on that graph and Nvidia red.......

  • amlib 3 days ago

    AMD used to be green and nVidia... idk, maybe because they are more like dealing with the devil nowadays :)

    • garaetjjte 3 days ago

      AMD was green, ATI red, so... yellow?

fulafel 2 days ago

> We wanted to merge GPU+CPU into an APU but it took years of trust & and process-building to get them to collaborate. Maybe if we had Slack, but we only had MSFT Sharepoint

I wonder how many companies had this problem.

carlsborg 3 days ago

Back in 2015, AMD was trading at $2.40 and Nvidia at about 50 cents (accounting for stock splits). 1000 USD invested then would be ~$70,000 and ~$256,000 respectively today.

  • elzbardico 3 days ago

    Let's be frank. Nobody invests in the market nowadays; the correct verb is BET.

    • lotsofpulp 3 days ago

      I have been reading this comment since early 2000s. If I was old enough, I probably would have heard this comment being made in the late 1990s. As well as the 1980s. And probably before that too.

      • polymatter 3 days ago

        That doesn't make it less true!

        It's a bet because it's risky: "capital is at risk", "the value of investments can go down as well as up", etc. As opposed to a savings account, which is far, far less risky, enough so that it's not really a bet.

        • prewett 3 days ago

          Pedantically, any storage of money is a bet, because it could change in value. However, to the Buffett-style investor, you think about whether you want to buy the entire company, even if you can only afford one billionth of it. You look at a reasonable projection of earnings growth--and don't buy companies that are unpredictable (like early stage tech companies). You try to buy at a discount ("margin of safety") in case you are wrong in some fashion. And so forth.

          So for example, Coca-Cola (KO) is pretty predictable. Absent any major blunders by management, KO is going to grow roughly with the economy, and it's going to put out 3% a year in dividends. So the fair market price of KO is reasonably determinable, and you wait until you can buy it at or below its fair price.

          This is usually contrasted against technical traders, momentum traders, etc., who are not investing in the fundamentals of the business and assuming the price will follow good fundamentals, but rather they are betting on how the price will change.

          So "investing" is seen as buying fundamentals and "betting" (or "gambling") is seen as buying on expected price changes.

        • quesera 3 days ago

          No, it's just wordplay.

          Capital is always at risk in financial investments.

          If there is a semantic difference, I'd say you "invest" when you have a historical expectation of future positive returns, and you "bet" when you're taking a contrarian approach or just going with a gut feeling when data isn't available or known.

          Anecdotally, and personally, I've had better luck with "bets" than "investments". But they're fundamentally the same thing.

        • lotsofpulp 3 days ago

          That use of bet would make it a meaningless comment.

          Presumably, elzbardico’s use of “bet” meant something akin to betting in a casino or lottery, where the goal is to get high from the rush of sudden, big, improbable wins.

    • ant6n 3 days ago

      I INVESTED maybe 5% of my portfolio in AMD at around $2.50; I think I sold it at 20x or so.

      I should've BET 50% of my portfolio.

      • J_Shelby_J 3 days ago

        Same. I wish I had held. At the time it was such an obvious play with Zen, but I never foresaw it running for a decade of growth.

sublinear 3 days ago

> We were always engineering-led and there was a lot of hubris...

So, long story short is that most engineers, especially ones as fanboyish as this, are wildly out of place in decision making and can't see the forest for the trees?

It doesn't seem that surprising.

  • aleph_minus_one 3 days ago

    > So, long story short is that most engineers, especially ones as fanboyish as this, are wildly out of place in decision making and can't see the forest for the trees?

    My experience is rather that people who are passionate about engineering simply have a very different "taste" in hardware and buying decisions than other groups. So they see the forest insanely well, but they see very different paths through this forest than other people (say analysts or the general population) do.

Apreche 3 days ago

I predicted years ago they would make a CPU and you would be able to buy an all-Nvidia PC. I think the reason that hasn't happened is the failed purchase of Arm. And looking at the market dominance of Nvidia, it seems the regulators were right to block that acquisition.

  • JonChesterfield 3 days ago

    There was an ARM CPU from Nvidia branded "Tegra" from circa 2010. I remember a laptop (or maybe a mini-PC-style thing) based on it, and the power consumption being shameful, but my recollections are hazy.

    Vaguely interesting side note: Yandex found that from poor search terms very easily, and Google abjectly failed to. I hope Google is tracking how frequently people use their engine to find Yandex, while remembering Bing being mostly used to find Google, and maybe the death of Yahoo.

    • einsteinx2 2 days ago

      The Nintendo Switch uses a Tegra X1 chip.

  • alphabeta2024 3 days ago

    Supercomputers are now Nvidia-only with Grace Hopper chips.

_zoltan_ 3 days ago

Since we're talking about nvidia... :)

Is there anybody here who has access to a B200 NVL72 with working external NVLink switches who wants to share non-marketing impressions?

  • wmf 3 days ago

    Blackwell hasn't been released yet. I'm a little skeptical of the NVL racks since I've seen no success stories so far.

    • _zoltan_ 3 days ago

      I mean, none are shipping yet (at least not at volume, as a product), so no success stories makes sense?

      • wmf 3 days ago

        Hopper NVL and SuperPods should be shipping but we've heard nothing.

paulmd 3 days ago

> We did launch a “true” dual core, but nobody cared. By then Intel’s “fake” dual core already had AR/PR love. We then started working on a “true” quad core, but AGAIN, Intel just slapped 2 dual cores together & called it a quad-core. How did we miss that playbook?!

it is wild the way AMD engineers can't stop themselves from throwing stones, even with 20 years of distance and even when their entire product strategy in 2024 now rides on gluing together these cores.

people forget that Intel saying AMD was gluing together a bunch of cores comes after years of AMD fans whining that Intel was gluing together a bunch of cores - that was always an insult to Intel users, that Pentium D wasn't a real chip, that Core 2 Quad wasn't a real chip (not like Quadfather, that's a real quad-core platform!). And you see that play out here: this guy is still salty that Intel was the first to glue together some chips in 2002 or whatever!

and the first time AMD did it, they rightfully took some heat for doing it... especially since Naples was a dreadful product. Rome was a completely different league; Naples really was glued-together garbage in comparison to Rome or to a monolithic chip. You can argue that (like DLSS 1.0) maybe there was a vision or approach there that people were missing, but people were correct that Naples was a dogshit product that suffered from its glued-together nature. Even consumer Ryzen was a real mixed bag; vendors basically took one look at Naples and decided to give AMD 2 more years to cook. People were still so wound up about it that they sent death threats to GamersNexus over the "i7 in production, i5 in gaming" verdict, which frankly was already quite generous given the performance.

frankly I find it very instructive to go back and read through some of the article titles and excerpts on SemiAccurate, because it is just unthinkable how blindly tribal things were, but this shit is how people thought 10 years ago. Pentium D is bad, because it's glued together! Core 2 Quad is bad because it's glued together! And that from the actual engineers who have the perspective and the understanding to know what they're looking at and judge the merits, with 20 years of retrospect and distance! If you instead look at what the discourse of this time was like...

https://www.semiaccurate.com/tag/nvidia/page/6/

"NVIDIA plays games with GM204"

"how much will a GM204 card cost you!?"

"Why mantle API will outlive DX12 [as a private playground for API development outside the need for standardization with MS or Khronos]"

"GP100 shows that NVIDIA is over four years behind AMD in advanced packaging"

"NVIDIA profits are up in a fragile way".

like why are AMD people like this? Inside the company and out. It's childish. None of the other brands' engineers are out clowning on Twitter (Frank Azor? Chris Hook? etc.), none of the other fans are sending death threats when their brand's product isn't good. Like, you wanna make a $10 bet over it???