lazypenguin 2 days ago

Linus response here seems relevant to this context: https://lore.kernel.org/rust-for-linux/CAHk-=wgLbz1Bm8QhmJ4d...

  • thesuperbigfrog 2 days ago

    Linus's reply is perfect in tone and hopefully will settle this issue.

    He is forceful in making his points, but respectful in the way he addressed Christoph's concerns.

    This gives me great hope that the Linux maintainer community and contributors using Rust will be able to continue working together, find more common ground, and have more success.

    • j16sdiz a day ago

      The response addressed Christoph's concerns _in word_.

      According to the policy, the Rust folks should fix the Rust bindings when C changes break them. The C maintainers don't need to care about Rust at all.

      In practice, though, I would expect this needs lots of coordination. A PR with C-only changes that breaks the whole build (because the Rust bindings are broken) is unlikely to be merged into mainline.

      Linus can reiterate his policy, but the issue can't be resolved unless some Rust developers keep up their persistent work and build up their reputation.

      • Tomte a day ago

        > rust folks should fix the rust binding when C changes breaks the binding

        I have never understood how that could work long-term. How do you release a kernel where some parts are broken? Either you wait for the Rust people to fix their side or you drop the C changes. Or your users suddenly find their driver doesn't work anymore after a kernel update.

        As a preliminary measure while there isn't a substantial amount of Rust code yet, sure. But the fears of some maintainers that the policy will change to "you either learn Rust and fix things or your code can be held up until someone else helps you out" are well-founded, IMO.

        • sanxiyn a day ago

          Are you familiar with the Linux kernel development process? Features can be merged only during the two-week merge window. After the merge window closes, only fixes are merged for eight weeks. The Rust bindings can be fixed in that time. I don't see any problems.

          • tremon 21 hours ago

            That's a gross simplification of the development process. Yes, new features are mostly merged in that two-week window -- but you're now talking about the Linux release management process more than its development.

            Before features are merged to Linus' release branch, pretty much all changes are published and merged to linux-next first. It is exactly here that build issues and conflicts are first detected and worked out, giving maintainers early visibility into changes that are happening outside their subsystem. Problems with the rust bindings will probably show up here, and the Rust developers will have ample time to fix/realign their code before the merge window even starts. And it's not uncommon for larger features (e.g. when they require coordination across subsystems) to remain in linux-next for more than one cycle.

          • Tomte a day ago

            And if no Rust developer has time or interest in those eight weeks? I don't claim that it can never work (or that it cannot work in the common case), but as a hard rule it seems untenable.

            • josefx a day ago

              > And if no Rust developer has time or interest in those eight weeks?

              What if Linus decided to go on a two month long vacation in the middle of the merge window?

              > I don't claim that it can never work (or it cannot work in the common case), but as a hard rule it seems untenable.

              There are quite a few Rust developers already involved; if they cannot coordinate so that at least some are available during a release-critical two-month period, then none of them should be part of any professional project.

            • Lutger a day ago

              I'm not familiar with kernel development, but what's the difference anyway with C code? If you change the interface of some part, any users of it will be broken, Rust or not. It will require coordination anyway.

              Is it customary for maintainers to fix _all_ usage of their code themselves? That doesn't seem scalable.

              • cozzyd a day ago

                Yes, that is the custom, and it is a key advantage of getting drivers in tree. I believe the changes are often applied automatically with a tool like Coccinelle.
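
                For what it's worth, a Coccinelle semantic patch for the "extra argument" case might look roughly like this (the function and constant names here are made up for illustration):

                ```
                @@
                expression dev, flags;
                @@
                - register_widget(dev, flags)
                + register_widget(dev, flags, GFP_KERNEL)
                ```

                Coccinelle then rewrites every matching call site in the tree mechanically, which is why this kind of API change can be applied to hundreds of drivers at once.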

              • swiftcoder a day ago

                Keep in mind that actual breaking changes are by design incredibly rare in a project like the Linux kernel. If you have a decade's worth of device drivers depending on your kernel subsystem's API, you don't get to break them; you have to introduce a new version instead.

                • rcxdude a day ago

                  I think it's more a degree of how much effort it is to adjust to the new interface. If it's just 'added a new parameter to a function and there's an obvious default for existing code', then it'll (potentially mechanically) be applied to all the users. If it's 'completely changed around the abstraction and you need to think carefully about how to port your driver to the new interface', then that's something where there needs to be at least some longer-term migration plan, if only because there's not likely one person who can actually understand all the driver code and make the change.

                  (I do have experience with this causing regressions: someone updates a set of drivers to a new API, and because of the differences and lack of a good way to test, breaks some detail of the driver)

                • cozzyd a day ago

                  This isn't true; internal APIs change all the time (e.g. adding extra arguments). Try running out-of-tree drivers on bleeding-edge kernels to see for yourself.

                  • tialaramex a day ago

                    Of course, for trivial mechanical changes like adding an argument, the Rust binding changes are also trivial. If you've just spent half an hour walking through driver code for hardware you've never heard of, changing stuff like

                    quaff(something, 5, Q_DOOP) ... into ... quaff(something, 5, 0, Q_DEFAULT | (Q_DOOP << 4))

                    Then it's not beyond the wits of a C programmer to realise that the Rust binding

                    quaff(var1, n, maybe_doop) ... can be ... quaff(var1, n, 0, Q_DEFAULT | (maybe_doop << 4))

                    Probably the Rust maintainer will be horrified and emit a patch to do something more idiomatic for binding your new API, but there's an excellent chance that meanwhile your minimal patch builds and works, since now it has the right number and type of arguments.

                    • mananaysiempre 17 hours ago

                      > If you've just spent half an hour walking through driver code for hardware you've never heard of changing stuff [...].

                      Isn’t the point of Coccinelle that you don’t have to spend time walking through (C) driver code you’ve never heard of?

                      • tialaramex 6 hours ago

                        I have never used Coccinelle, but yes, sort of. However, you're on the hook for the patch you submit. Coccinelle isn't a person, so if you blindly send out a patch Coccinelle generated, without even eyeballing it, you should expect some risk of thrown tomatoes if (unknown to you) it utterly broke some clever code using your previous API in a way you hadn't anticipated, in a driver you don't run.

            • sanxiyn a day ago

              If so, the kernel is released with broken Rust. That is the policy, and I am flabbergasted why everyone is going "that policy must not be literal".

              • Tomte a day ago

                Because if in a few years I have a device whose driver is written in Rust, a new kernel version might have simply dropped or broken my device driver, and I cannot use my device anymore. But sure, if R4L wants to stay a second-class citizen forever, it can still be acceptable.

                • adgjlsfhk1 a day ago

                  this isn't policy forever. it's policy for now. if r4l succeeds, the policy will change.

                • prmoustache a day ago

                  It is only a problem if you compile the kernel directly from the source tree instead of using the packages provided by your Linux distribution.

                • mschuster91 a day ago

                  > Because if in a few years I have a device whose driver is written in Rust, a new kernel version might have simply dropped or broken my device driver, and I cannot use my device anymore.

                  At least for Debian, all you need to do if you hit such a case is to simply go and choose the old kernel in the Grub screen. You don't even need to deal with installing an older package and dealing with version conflicts or other pains of downgrading.

                  • account42 a day ago

                    I hope you're not seriously suggesting this as a reasonable workflow.

                    • mschuster91 a day ago

                      For my server or laptop at home, sure. Why not. For servers in commercial fleets you should have staged rollouts as a policy anyway so if you do it right you shouldn't get hit.

                • darthrupert a day ago

                  Distros should be your firewall against that sort of thing. Just don't use a distro with a non-existent kernel upgrade process.

        • kelnos a day ago

          I think the way you do this is to set things up so that no bits written in Rust are built by default, and make sure that the build system is set up such that Rust bindings for C code are only built when there's Rust code enabled that requires them.

          Then sure, some people who download a kernel release might enable a Rust driver, and that makes the build fail. But until Rust is considered a first-class, fully-supported language in the kernel, that's fine.

          In practice, though, I would expect that the Rust maintainers would fix those sorts of things up before an actual release is cut, after the 2-week merge window, during the period when only fixes are accepted. Maybe not every single time, but most of the time; if no one is available to fix a particular bit of breakage, then it's broken for that release. And that's fine too, even if it might be annoying to some users.
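
          As a sketch, the Kconfig gating looks something like this (the driver symbol is hypothetical; `depends on RUST` is the actual mechanism used in-tree):

          ```
          config SAMPLE_RUST_DRIVER
                  tristate "Hypothetical driver written in Rust"
                  depends on RUST
                  help
                    Cannot be enabled when CONFIG_RUST is off, so with RUST=n
                    neither this driver nor the bindings it needs get built.
          ```

          With CONFIG_RUST=n the whole subtree is invisible to the build, which is what makes "broken Rust" a non-event for default configs.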

          • renox a day ago

            > I think the way you do this is to set things up so that no bits written in Rust are built by default, and make sure that the build system is set up such that Rust bindings for C code are only built when there's Rust code enabled that requires them.

            Which is currently the only way possible, and it will stay that way for a long time, because remember that clang supports fewer targets than gcc, and gcc cannot compile Rust.

            Once gcc can /reliably/ compile Rust, then and only then could Rust be "upgraded" to a first-class citizen in Linux. The "C maintainers don't want to learn Rust" issue will still be here of course, but by then there will already be many years of having a mixed code base.

          • Tomte a day ago

            I agree with all you say, but by long-term I really mean when we've arrived here:

            > But until Rust is considered a first-class, fully-supported language in the kernel, that's fine

            A first-class language whose kernel parts may always break does seem unreasonable. I still think the policy will have to change by that point.

        • tw04 a day ago

          Because nothing is forcing a distro to adopt a kernel that has items that are broken. Not a lot of folks out there are manually compiling and deploying standalone kernels to production systems.

          C can break Rust, and Debian/Ubuntu/Red Hat/SUSE/etc. can wait for it to be fixed before pushing a new kernel to end users.

        • NewJazz a day ago

          You can merge it into your branch as, e.g., the DMA maintainer; then the Rust folks can pull your changes and fix the bindings. Maybe you as maintainer could give them a heads up and a quick description of the error.

        • account42 a day ago

          Yes, Rust as something optional doesn't really make sense long term. Either it will continue to only be used in niche drivers (in which case why bother?) or eventually you need to build Rust code to have a usable kernel for common hardware. Any promises to the contrary need to be backed up with more than "trust me bro".

      • sanxiyn a day ago

        Why wouldn't it be merged? No Rust code is built unless CONFIG_RUST is on, and it is off by default. It won't be on by default for a long time.

        • robinei a day ago

          That's the theory. However, isn't it likely that as things like the new Nova NVIDIA driver are written in Rust, the things that depend on Rust become so important that shipping with it disabled is unrealistic, even without a policy change? (I don't think this is bad.)

          • danieldk a day ago

            Rust for Linux is currently an experiment. If a larger number of widely-used drivers get written in Rust and developers prefer writing them in Rust over C, then I guess it's time to declare the experiment a success and flip the switch?

        • _blk a day ago

          How many KB of RAM is enough for everyone?

    • nxobject 2 days ago

      Linus is one of the few people who can forcefully argue the case for moderation, and I've recognized some of the lines I've used to shift really contentious meetings back into place. There's the "shot-and-chaser" technique: (a) this is what needs to happen now for the conversation...

      "I respect you technically, and I like working with you[...] there needs to be people who just stand up to me and tell me I'm full of shit[...] But now I'm calling you out on YOURS."

      ...and (b) this is me recognizing that me taking charge of a conversation is a different thing than me taking control of your decisions:

      "And no, I don't actually think it needs to be all that black-and-white."

      (Of course Linus has changed over time for the better, he's recognized that, and I've learned a lot with him and have made amends with old colleagues.)

      • kelnos a day ago

        I really liked this reply from Torvalds. I've seen a lot of his older rants, and while I respect his technical achievements, it really turned me off on the guy himself. I was skeptical of his come-to-jesus moment back in 2018 (or whenever it was), but these days it's great to read his measured responses when there's controversy.

        It's really cool to see someone temper their language and tone, but still keep their tell-it-like-it-is attitude. I probably wouldn't feel good if I were Christoph Hellwig reading that reply, but I also wouldn't feel like someone had personally attacked me and screamed at me far out of proportion to what I'd done.

  • SeanAnderson 2 days ago

    The whole "I respect you technically, and I like working with you." in the middle of being firm and typing in caps is such a vibe shift from the Linus of a decade ago. We love to see it!

    • atq2119 2 days ago

      My impression is that this has always been part of the core of his character, but he had to learn to put it into writing.

      Contrast this to people who are good at producing the appearance of an upstanding character when it suits them, but being quite vindictive and poisonous behind closed doors when it doesn't.

      • Ballas a day ago

        Yeah, from my limited view point it really looks like Linus is a genuine person. He says what he thinks and there are no hidden agendas. That is very refreshing in current times.

    • account42 a day ago

      [flagged]

      • dxdm a day ago

        In my eyes, this sentence by Linus is as straightforward as it gets. But if you think it's corporate lie-speak, I wonder which words you'd interpret as genuine - because there has to be a way to get something like that across if you really mean it, or we're all doomed. :)

  • ksec 2 days ago

    I have always thought Linus may not like Rust, or at least is not pro-Rust, and that the only reason Rust is marching into the kernel is that most of his close lieutenants are extremely pro-Rust. So there is this Rust experiment.

    But looking at all the recent responses, it seems Rusted Linux is inevitable. He is pro-Rust.

    • dralley a day ago

      There was no reason to ever think otherwise. The experiment wouldn't have happened if he didn't want to try it and give it a certain amount of backing. He's been pretty vocal about his motivations.

      https://www.youtube.com/watch?v=OvuEYtkOH88&t=6m07s

      • Symmetry a day ago

        He certainly didn't have any trouble keeping C++ out.

        • __s a day ago

          Also in that thread is Greg KH: https://lore.kernel.org/rust-for-linux/2025021954-flaccid-pu...

          > C++ isn't going to give us any of that any decade soon, and the C++ language committee issues seem to be pointing out that everyone better be abandoning that language as soon as possible if they wish to have any codebase that can be maintained for any length of time.

        • miohtama a day ago

          The benefits of C vs. Rust are much more impactful than C vs. C++.

          • Symmetry a day ago

            Oh certainly. And many fewer potentially dangerous complex corner cases than C++ brings.

    • the_duke a day ago

      I'm pretty sure there is significant pressure from corporate sponsors in the Linux foundation to make Rust happen. That includes Google, Microsoft, AWS, ...

      • ksec a day ago

        That may actually make a little more sense.

    • tonyhart7 a day ago

      "Rusted Linux is inevitable" for a good reason: Rust is objectively a good language, or at least better than C (Rust was designed to fix flaws that many languages have).

      • smidgeon a day ago

        The real reason being the longing for the 5 hour kernel compile of yore.

        • dc443 a day ago

          Well he DOES have a threadripper now.

          • account42 a day ago

            And I'm sure the one thread used by the rust build will be blazing fast.

    • billfruit 2 days ago

      But why though? What about legacy systems, which may not have a rust toolchain? What about new architectures that may come up in the future?

      • kelnos a day ago

        Well there are a few ways to deal with this.

        - Systems not supported by Rust can use older kernels. They can also -- at least for a while -- probably still use current kernel versions without enabling any Rust code. (And presumably no one is going to be writing any platform-specific code or drivers for a platform that doesn't have a Rust toolchain.)

        - It will be a long time before building the kernel will actually require Rust. In that time, GCC's Rust frontend may become a viable alternative for building Linux+Rust. And any arch supported by GCC should be more-or-less easily targetable by that frontend.

        - The final bit is just "tough shit". Keep up, or get left behind. That could be considered a shame, but that's life. Linux has dropped arch support in the past, and I'm sure it will do so in the future. But again, they can still use old kernels.

        As for new architectures in the future, if they're popular enough to become a first-class citizen of the Linux kernel, they'll likely be popular enough for someone to write a LLVM backend for it and the glue in rustc to enable it. And if not, well... "tough shit".

      • barkingcat a day ago

        Thinking pragmatically, the legacy systems where there is no current rust toolchain most likely do not need the drivers and components that are being written in rust.

        Unless you somehow want to run Apple M1 GPU drivers on a device that has no rust toolchain ... erm...

        or you want to run a new experimental filesystem on a device that has no rust toolchain support?

        The answer to the "new and emerging platforms" question is pretty much the same as before: sponsor someone to write the toolchain support. We've seen new platforms before, so why shouldn't it follow the same pathway? Usually the C compiler is donated by the company or community that is investing in the new platform (for example, RISC-V compiler support for gcc and llvm is reaching maturity in both, and the work is sponsored by the developer community, various non-profit[1][2] and for-profit members of the ecosystem, as well as the academic community).

        Realistically speaking, it's very hard to come up with examples of the hypothetical.

        [1] https://github.com/lowRISC/riscv-llvm

        [2] https://lists.llvm.org/pipermail/llvm-dev/2016-August/103748...

      • remexre 2 days ago

        I suspect gcc-rs will be in good working order for a few years before any kernel subsystems require a Rust compiler to build; if the legacy system can't run a recent GCC, why does it need a much-newer kernel? (e.g., how would it cope with the kernel requiring an additional GCC extension, bumping the minimum standard version of C, etc.)

        I honestly suspect new architectures will be supported in LLVM before GCC nowadays; most companies are far more comfortable working with a non-GPL toolchain, and IMHO LLVM's internals are better-documented (though I've never added a new target).

      • lmm 2 days ago

        > What about legacy systems, which may not have a rust toolchain?

        Linux's attitude has always been either you keep up or you get dropped - see the lack of any stable driver API and the ruthless pruning of unmaintained drivers.

        > What about new architectures that may come up in the future?

        Who's to say they won't have a Rust compiler? Who's to say they will have a C one?

        • cwillu a day ago

          > Linux's attitude has always been either you keep up or you get dropped

          Gonna need a citation on that one. Drivers are removed when they don't have users anymore, and a user piping up is enough to keep the driver in the tree:

          For example:

             > As suggested by both Greg and Jakub, let's remove the ones that look
             > are most likely to have no users left and also get in the way of the
             > wext cleanup. If anyone is still using any of these, we can revert the
             > driver removal individually.
          
          https://lore.kernel.org/lkml/20231030071922.233080-1-glaubit...

          Or the x32 platform removal proposal, which didn't happen after some users showed up:

             > > > I'm seriously considering sending a patch to remove x32 support from
             > > > upstream Linux.  Here are some problems with it:
             > >
             > > Apparently the main real use case is for extreme benchmarking. It's
             > > the only use-case where the complexity of maintaining a whole
             > > development environment and distro is worth it, it seems. Apparently a
             > > number of Spec submissions have been done with the x32 model.
             > >
             > > I'm not opposed to trying to sunset the support, but let's see who complains..
             >
             > I'm just a single user. I do rely on it though, FWIW.
             > […snipped further discussion]
          
          https://lore.kernel.org/lkml/CAPmeqMrVqJm4sqVgSLqJnmaVC5iakj...
        • rat87 a day ago

          Linux also can't be built by just any minimal C compiler for an obscure arch; it requires many GCC extensions. It's only because LLVM added them that it can also be compiled with LLVM.
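
          For instance, the kernel's min()/max() helpers lean on GCC extensions like typeof and statement expressions, which any candidate compiler has to implement. A simplified sketch (this is a reduction of the kernel macro, not a verbatim copy):

          ```c
          #include <assert.h>

          /* Simplified take on the kernel's min() macro. It relies on two GCC
           * extensions: typeof (so each argument is evaluated exactly once,
           * into a variable of the right type) and statement expressions,
           * i.e. the ({ ... }) block that yields a value. */
          #define min(x, y) ({            \
                  typeof(x) _x = (x);     \
                  typeof(y) _y = (y);     \
                  _x < _y ? _x : _y; })

          int main(void)
          {
                  assert(min(3, 5) == 3);       /* works for integers */
                  assert(min(2.5, 1.5) == 1.5); /* and for doubles */
                  return 0;
          }
          ```

          Both gcc and clang accept this; a strictly ISO-C compiler does not, which is the point being made above.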

      • danieldk a day ago

        Curious: what widely-used (Linux) legacy systems do not have a Rust toolchain?

        In the end the question is whether you want to hold back progress for 99.9% of the users because there are still 200 people running Linux on an Amiga with m68k. I am pretty sure that the number of Linux on Apple Silicon users outnumbers m68k and some other legacy systems by at least an order of magnitude (if not more). (There are currently close to 50000 counted installs. [1])

        [1] https://stats.asahilinux.org

        • snvzz 15 hours ago

          There are enough of them that some (e.g. me) actually read this comment.

      • rcxdude a day ago

        I think that'll become a question if/when rust starts to move closer to core parts of the kernel, as opposed to platform-specific driver code. It's already been considered for filesystems which could in theory run on those systems, and the project seems to be OK with the idea that it's just not supported on those platforms. But that's likely a long way off, after there's a significant body of optional rust code in the kernel, and the landscape may already be quite different at that point (both in terms of if those systems are still maintained, and in terms of the kind of targets rust can support, especially if the gcc backend matures)

      • yxhuvud a day ago

        You don't get to run legacy systems with rust based drivers. You were not going to do that anyhow, so what is the issue, really?

      • evidencetamper 2 days ago

        Those are the tradeoffs, and it seems to me that Linux doesn't have to run in everything under the Sun as Doom ports do, and there might be other kernels that are better suited to such cases.

      • j-krieger a day ago

        You can compile Rust for Win98. They'll be fine.

      • gulbanana 2 days ago

        The legacy systems are not very important. The new ones will be supported.

      • tonyhart7 a day ago

        "But why though? What about legacy systems" -- they're called legacy for a reason, right?

        I'm sorry, you can't hinder kernel development just because some random guy/corpo can't use your shit on an obscure system. How can that logic apply to everything?

        If your shit is legacy, then use a legacy kernel.

      • jfbfkdnxbdkdb a day ago

        Uhhhhh IIRC rust uses llvm under the hood so ... Change the back end and you are good?

        • rcxdude a day ago

          There are some platforms that Linux supports which LLVM does not (and GCC does). It takes quite a lot of effort to make a decent LLVM backend, and these older systems tend to have relatively few maintainers, so there may not be the resources to make it happen.

          • Pet_Ant a day ago

            > There is quite a lot of effort in making a decent LLVM backend, and these older systems tend to have relatively few maintainers

            Well, it also takes effort to be held back by outdated tools. Also, the LLVM backend doesn't have to be top-notch, just runnable. If they want to run legacy hardware, they should be okay with running a legacy kernel or taking the performance hit of a weaker LLVM backend.

            Realistically, at version 16[1], LLVM supports IA-32, x86-64, ARM, Qualcomm Hexagon, LoongArch, M68K, MIPS, PowerPC, SPARC, z/Architecture, XCore, and others.

            In the past it had support for Cell and Alpha, but I'm sure the old code could be revived if needed. So how many users are affected here? Let's not forget that Linux dropped Itanium support, and I'm sure someone is still running that somewhere.

            Looking through this list [2], what I see missing is Elbrus, PA-RISC, OpenRISC, and SuperH. So pretty niche stuff.

            [1] https://en.wikipedia.org/wiki/LLVM#Backends

            [2] https://en.wikipedia.org/wiki/List_of_Linux-supported_comput...

          • ddtaylor a day ago

            Aren't those already situations we use cross compilers for?

            • Filligree a day ago

              A cross compiler is just a compiler backend for machine X running on machine Y. You still need the backend.

  • foota a day ago

    I don't know why he didn't write this email 3 weeks ago.

    • KingMob a day ago

      He wrote that he was hoping the email thread would improve the situation without his involvement, but that turned out not to be the case.

      • rcxdude a day ago

        It didn't seem super likely that this would be the case, because a lot of the contention was around what Linus specifically thought about it.

    • darthrupert a day ago

      Isn't it obvious? He thought about it.

  • AdmiralAsshat a day ago

    Boy that response would've been helpful like a week ago, before several key Rust maintainers resigned in protest due to Linus's radio silence on the matter.

    • geodel a day ago

      Oh several resigned. I thought all of them.

  • yolovoe 2 days ago

    Huh, thanks. Really good to know where Linus stands here. Seems to me like Linus is completely okay with the introduction of Rust to the kernel and will not allow maintainers to block its adoption.

    Really good sign. Makes me hopeful about the future of this increasingly large kernel

  • kragen a day ago

    This is indeed an excellent response and will hopefully settle the issues. Aside from the ones already settled by Linus's previous email, such as whether social media brigading campaigns are a valid part of the kernel development process.

  • chris_wot 2 days ago

    Thank god for common sense.

  • kennysoona a day ago

    Honestly I was waiting for a reply from Linux like this to put Hellwig in his place.

    > The fact is, the pull request you objected to DID NOT TOUCH THE DMA LAYER AT ALL.

    > It was literally just another user of it, in a completely separate subdirectory, that didn't change the code you maintain in _any_ way, shape, or form.

    > I find it distressing that you are complaining about new users of your code, and then you keep bringing up these kinds of complete garbage arguments.

    Finally. If this had come sooner, maybe we wouldn't have lost talented contributors to the kernel.

    • kennysoona a day ago

      Ah I can't believe I misspelled Linus as Linux, seems like it should happen often enough but honestly I think I rarely make that typo.

      • sophacles a day ago

        I've made that mistake, and the inverse, often enough that I try to make sure to check I've written the correct word... and I still mess it up. Between the words being similar and the 'x' being right next to the 's' on US keyboard, it's bound to happen.

        On the flip side -- when I (and I suspect many others) read Linux where Linus should be written, I rarely even notice and never really care, because I've been there.

        All this is a long winded way of saying: don't sweat it :) .

    • kombine a day ago

      > Finally. If this had come sooner, maybe we wouldn't have lost talented contributors to the kernel.

      I feel that the departure of the lead R4L developer was a compromise deliberately made so that Hellwig doesn't feel like a complete loser. That sounds bad, of course.

      • fdrs a day ago

        No R4L lead left because of the current situation. Marcan was the lead of Asahi Linux, not R4L. Wedson (who was one of the leads of R4L) left some time ago, before all of this, and his problem was not with Hellwig (or, at least, that was not the last straw).

        edit: whitespace

        • iknowstuff 5 minutes ago

          Hellwig had a spat with Asahi Lina back then as well.

      • dralley a day ago

        Marcan quitting wasn't a compromise; the resignation of a maintainer would never be used that way. Dude was just burnt out. I don't blame him at all; hopefully some time away from the situation does him some good.

dfawcus 3 days ago

The impression I get from simply reading these various discussions, is that some folks are not convinced that the pain from accepting Rust is worth the gain.

Possibly also that a significant portion of the suggested gain may be achievable via other means.

i.e. bounds checking and some simple (RAII-like) allocation/freeing simplifications may be possible without Rust, and those are (per the various papers arguing for Rust / memory safety elsewhere) the larger proportion of the safety bugs that Rust catches.

Possibly just making clang the required compiler, and adopting these extension may give an easier bang-for-buck: https://clang.llvm.org/docs/BoundsSafety.html

Over and above that, there seem to be various complaints about the readability and aesthetics of Rust code, and a desire not to be subjected to such.

  • viraptor 3 days ago

    > Possibly also that a significant portion of the suggested gain may be achievable via other means.

    Things like that have been said many times, even before Rust came around. You can do static analysis, you can put in asserts, you can use this restricted C dialect, you can...

    But this never gets wider usage. Even if the tools are there, people are going to ignore them. https://en.wikipedia.org/wiki/Cyclone_(programming_language) started 23 years ago...

    It took us decades to get to non executable stack and W^X and there are still occasional issues with that.

  • 0x457 3 days ago

    I think it's because C devs often think that they never make a mistake, so they see Rust as bringing no value.

    I had an argument about Rust with a FreeBSD developer who had the same "I never make a mistake" attitude. I made a PR to his project that fixed bugs that wouldn't have been possible in Rust to begin with. Not out of pettiness, but because his library was crashing my application. In fact, he tried to blame my Rust wrapper for it when I raised an issue.

    • EasyMark a day ago

      I have definitely done such things out of pettiness. Sometimes people just attract your attention as deserving of an attempt to humble them. I hope people will humble me as well when my vociferousness outstrips my talent. It's good to be sent directly back home every now and then.

      • dc443 a day ago

        What I don't get is why people gravitate toward trying to show off how many symbols they can manipulate in their brain without screwing something up.

        It's a computer. It does what it was instructed to do, all 50 million or so of them. To think you as a puny human have complete and utter mastery over it is pure folly every single time.

        As time goes on I become more convinced that the way to make progress in computing and software is not just better languages. Those are very much appreciated, since language has a strong impact on how you even think about problems. But it's more about tooling, and how we can add abstractions that leverage the computer we already have to alleviate the eye-gouging complexity of trying to manage it all by predicting how it will behave with our pitiful neuron sacs.

        • 0x457 19 hours ago

          Don't forget about naming variables like it's a punchcard and every character matters.

  • mustache_kimono 3 days ago

    > The impression I get from simply reading these various discussions, is that some folks are not convinced that the pain from accepting Rust is worth the gain.

    Read the above email. Greg KH is pretty certain it is worth the gain.

    > Possibly also that a significant portion of the suggested gain may be achievable via other means.

    I think this is a valid POV, if someone shows up and does the work. And I don't mean 3 years ago. I mean -- now is as good a time as any to fix C code, right? If you have some big fixes, it's not like the market won't reward you for them.

    It's very, very tempting to think there is some other putatively simpler solution on the horizon, but we haven't seen one.

    > Over and above that, there seem to be various complaints about the readability and aesthetics of Rust code, and a desire not to be subjected to such.

    No accounting for taste, but I don't think C is beautiful! Rust feels very understandable and explicit to my eye, whereas C feels very implicit and sometimes inscrutable.

    • whstl a day ago

      > Read the above email. Greg KH is pretty certain it is worth the gain.

      I don't think GP or anyone is under the impression that Greg KH thinks otherwise. He's not the "some folks" referred here.

      • mustache_kimono a day ago

        > I don't think GP or anyone is under the impression that Greg KH thinks otherwise. He's not the "some folks" referred here.

        Glad for your keen insights.

  • kelnos a day ago

    > The impression I get from simply reading these various discussions, is that some folks are not convinced that the pain from accepting Rust is worth the gain. [..] Possibly also that a significant portion of the suggested gain may be achievable via other means.

    Sure, but opinions are always going to differ on stuff like this. Decision-making for the Linux kernel does not require unanimous consent, and that's a good thing. Certainly this Rust push hasn't been handled perfectly, by any means, but I think they at least have a decent plan in place to make sure maintainers who don't want to touch Rust don't have to, and those who do can have a say in how the Rust side of their subsystems look.

    I agree with the people who don't believe you can get Rust-like guarantees using C or C++. C is just never going to give you that, ever, by design. C++ maybe will, someday, years or decades from now, but you'll always have the problem of defining your "safe subset" and ensuring that everyone sticks to it. Rust is of course not a silver bullet, but it has some properties that mean you just can't write certain kinds of bugs in safe Rust and get the compiler to accept it. That's incredibly useful, and you can't get that from C or C++ today, and possibly not ever.

    Yes, there are tools that exist for C to do formal verification, but for whatever reason, no one wants to use them. A tool that people don't want to use might as well not exist.

    But ultimately my or your opinion on what C and C++ can or can't deliver is irrelevant. If people like Torvalds and Kroah-Hartman think Rust is a better bet than C/C++-based options, then that's what matters.

  • mimd 3 days ago

    If you look at the CVE lists, about 70-80% of all C memory bugs are related to OOB reads and writes. Additionally, like Rust, -fbounds-safety can remove redundant checks when it can determine the bounds. My question is how likely it is to be adopted in the kernel (my guess: quite likely).

    I will need to read their conversations more to see if it's the underlying fear, but formalization makes refactoring hard and code brittle (e.g. having to start from scratch on a formal proof after substantially changing a subsystem). One of the key benefits of C and the kernel has been their malleability to new hardware and requirements.

    • whytevuhuni 3 days ago

      > My question is how likely can it be adopted in the kernel (likely high).

      My guess is, it cannot. The way -fbounds-safety works, as far as I understand, is that it aborts the program in case of an out-of-bounds read or write. This is similar to a Rust panic.

      Aborting or panicking the kernel is absolutely not a better alternative to simply allowing the read/write to happen, even if it results in a memory vulnerability.

      Turning people's computers off whenever a driver stumbles on a bug is not acceptable. Most people cannot debug a kernel panic, and won't even have a way to see it.

      Rust can side-step this with its `.get()` (which returns an Option, which can be converted to an error value), and with iterators, which often bypass the need for indexing in the first place.

      Unfortunately, Rust can still panic in case of a normal indexing operation that does OOB access; my guess is that the index operation will quickly be fixed to be completely disallowed in the kernel as soon as the first such bug hits production servers and desktop PCs.

      Alternatively, it might be changed to always do buf[i % buf.len()], so that it gives the wrong answer but stays within bounds (making it similar to other logic errors, as opposed to a memory corruption error).
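
      To make the alternatives concrete, here is a minimal sketch (the buffer and index are made up) of checked access via `.get()`, iterators, and the wrapped-index fallback:

      ```rust
      fn main() {
          let buf = [10, 20, 30];
          let i = 5; // deliberately out of bounds

          // 1. Checked access: returns an Option instead of panicking,
          //    so the caller can turn it into an error value.
          match buf.get(i) {
              Some(v) => println!("value: {v}"),
              None => println!("index {i} is out of bounds"),
          }

          // 2. Iterators avoid explicit indexing entirely.
          let sum: i32 = buf.iter().sum();
          println!("sum: {sum}");

          // 3. Wrapping the index always stays in bounds, but may be
          //    logically wrong: 5 % 3 == 2, so this silently reads buf[2].
          let wrapped = buf[i % buf.len()];
          println!("wrapped: {wrapped}");
      }
      ```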

      • mimd 3 days ago

        Yes, panicking in kernels is bad. I've followed the whole R4L fight about working around it.

        https://github.com/apple-oss-distributions/xnu/blob/main/doc...

        https://github.com/apple-oss-distributions/xnu/blob/main/doc...

        Upstream fbounds in xnu has options for controlling if it panics or is just a telemetry event. They are in a kernel situation and have the exact same considerations on trying to keep the kernel alive.

        • whytevuhuni 3 days ago

          Ah, thank you. If it can just do the equivalent of WARN_ON_ONCE(…) and continue, and the check wouldn’t be slow enough to make people disable it, then yeah, that sounds really good.

      • uecker 2 days ago

        For GCC I have a patch (maybe 10 lines of code) that emits a warning whenever the compiler inserts a trap. You could use a sanitizer, i.e. bounds checking or signed overflow, add code that turns the warning into an error, and so ensure that your code does not have a signed overflow or OOB.

        • estebank a day ago

          That sounds like a useful patch. Why didn't you upstream it?

          • uecker 9 hours ago

            I submitted it upstream but it was not accepted. There was a request to add a string argument that can be printed with the warning.

        • saagarjha 2 days ago

          Sanitizers don’t ship to production.

          • uecker a day ago

            The use case I described is not for production.

      • chlorion 2 days ago

        So out of bounds access leading to data loss and possible security vulnerability is better than crashing the kernel? That doesn't make sense to me.

        • NBJack 2 days ago

          One of those things might take your server/application/data out. The other is guaranteed to.

          • int_19h a day ago

            One of those things might allow attacker to get access to data they should not have access to or to run arbitrary code on your server. The other does not.

          • fwip 2 days ago

            For many use cases, blowing up loudly is strongly preferable to silently doing the wrong thing. Especially in the presence of hostile actors, who are trying to use your out-of-bounds error for their own gain.

            • samus a day ago

              For many other use cases it is not. Imagine a smartphone randomly turning itself off. Nobody can possibly debug this.

  • throwawaymaths 2 days ago

    the problem is that Rust sucks the air out of the programming ecosystem because its proponents throw down the safety hammer, and research on other safe alternatives is slow. we do have an alternative low level memory safe language (Ada) but for whatever reason that's a nonstarter... there's no compelling reason that rust has to be the only way to achieve memory safety (much less in the OS domain where for example you don't have malloc/free so rust's default heap allocation can't be trivially used).

    it might do to wait until some other memory safe alternative appears.
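
    On the earlier point that Rust's default heap allocation can't be trivially used in an OS: std's default allocators do abort on failure, but a fallible style is expressible in stable Rust via `try_reserve`, which is roughly the shape kernel code wants (the actual Rust-for-Linux allocation APIs differ; this is just a sketch):

    ```rust
    use std::collections::TryReserveError;

    // Fallible allocation: instead of aborting on out-of-memory, surface
    // the failure as a Result the caller must handle, kernel-style.
    fn make_buffer(n: usize) -> Result<Vec<u8>, TryReserveError> {
        let mut buf = Vec::new();
        buf.try_reserve_exact(n)?; // may fail without aborting the process
        buf.resize(n, 0);
        Ok(buf)
    }

    fn main() {
        match make_buffer(4096) {
            Ok(buf) => println!("allocated {} bytes", buf.len()),
            Err(e) => println!("allocation failed: {e}"),
        }
    }
    ```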

    • dralley 2 days ago

      Linus doesn't like Ada much, and the talent pool is FAR smaller and also FAR older on average. The compelling reason to use Rust over other languages is precisely that it hit escape velocity where others failed to do so, and it did that partially by being accessible to less senior programmers.

      And I don't understand how you can go from opining that Rust shouldn't be the only other option, to opining that they should have waited before supporting Rust. That doesn't make sense unless you just have a particular animus towards Rust.

      • throwawaymaths 2 days ago

        yeah i do! rust does a lot of things right but protocols and proc macros are awful, as is raii.

        • kelnos a day ago

          I mean, that's just your opinion. I agree that proc macros are awful. I'm not sure what "protocols" are in reference to Rust. And as for RAII, I get that it can be contentious at times, but I generally appreciate its existence.

          But our opinions on this are irrelevant, as it turns out, unless you're actually Linus Torvalds hiding behind that throwaway account.

          • throwawaymaths a day ago

            sorry, traits. i also program in elixir, where it's 'protocol/impl', not 'trait/impl'

    • kelnos a day ago

      > there's no compelling reason that rust has to be the only way to achieve memory safety

      I don't think anyone is saying that Rust is the only way to achieve that. It is a way to achieve it, and it's a way that enough people are interested in working on in the context of the Linux kernel.

      Ada just doesn't have enough developer momentum and community around it to be suitable here. And even if it did, you still have to pick one of the available choices. Much of that decision certainly is based on technical merits, but there's still enough weight put toward personal preference and more "squishy" measures. And that's fine! We're humans, and we don't make decisions solely based on logic.

      > it might do to wait until some other memory safe alternative appears.

      Perhaps, but maybe people recognize that it's already late to start making something as critical as the Linux kernel more safe from memory safety bugs, and waiting longer will only exacerbate the problem. Sometimes you need to work with what you have today, not what you hope materializes in the future.

    • lmm a day ago

      > research on other safe alternatives is slow

      It's slow because the potential benefits are slim and the costs of doing that research are high. The simple reality is that there just isn't enough funding going into that research to make it happen faster.

      > there's no compelling reason that rust has to be the only way to achieve memory safety

      The compelling reason is that it's the only way that has worked, that has reached a critical mass of talent and tooling availability that makes it suitable for use in Linux. There is no good Rust alternative waiting in the wings, not even in the kind of early-hype state that Rust was in 15 years ago (Zig's safety properties are too weak), and we shouldn't let an imaginary better future stop us from making improvements in the present.

      > it might do to wait until some other memory safe alternative appears.

      That would mean waiting at least 10 years, and how many avoidable CVEs would you be subjecting every Linux user to in the meantime?

      • throwawaymaths 19 hours ago

        > The compelling reason is that it's the only way that has worked

        because it's hard enough that people don't try. and then they settle for rust. this is what i mean by "rust sucks the air out of the room".

        however, its clearly not impossible, for example this authors incomplete example:

        https://github.com/ityonemo/clr

        > That would mean waiting at least 10 years,

        what if it's not ten years, what if it could be six months? is it worth paying all the other downstream costs of rust?

        you're risking getting trapped in a local minimum.

        • lmm 15 hours ago

          > because it's hard enough that people don't try. and then they settle for rust. this is what i mean by "rust sucks the air out of the room".

          I think it's the opposite. Rust made memory safety without garbage collection happen (without an unusably long list of caveats like Ada or D) and showed that it was possible, there's far more interest in it now post-Rust (e.g. Linear Haskell, Zig's very existence, the C++ efforts with safety profiles etc.) than pre-Rust. In a world without Rust I don't think we'd be seeing more and better memory-safe non-GC languages, we'd just see that area not being worked on at all.

          > however, its clearly not impossible, for example this authors incomplete example:

          Incomplete examples are exactly what I'd expect to see if it was impossible. That kind of bolt-on checker is exactly the sort of thing people have tried for decades to make work for C, that has consistently failed. And even if that project was "complete", the hard part isn't the language spec, it's getting a critical mass of programmers and tooling.

          > what if it's not ten years, what if it could be six months?

          If the better post-Rust project hasn't appeared in the past 15 years, why should we believe it will suddenly appear in the next six months? And given that it's taken Rust ~15 years to go from being a promising project to being adopted in the kernel, even if there was a project now that was as promising as the Rust of 15 years ago, why should we think the kernel would be willing to adopt it so much more quickly?

          And even if that did happen, how big is the potential benefit? I think most fans of Rust or Zig or any other language in this space would agree that the difference between C and any of them is much bigger than the difference between these languages.

          > you're risking getting trapped in a local minimum.

          It's a risk, sure. I think it's much smaller than the risk of staying with C forever because you were waiting for some vaporware better language to come along.

        • unionpivo 8 hours ago

          Even if you released such a solution today, it would take months or years to build up knowledge, toolchains, and best practices, and then to have trained developers able to use it.

          > you're risking getting trapped in a local minimum.

          Or you are risking years of searching for perfect when you already have good enough.

    • eviks 2 days ago

      > for whatever reason that's a nonstarter... there's no compelling reason

      Before rejecting a reason you at least have to know what it is!

      • throwawaymaths 2 days ago

        ok... what's the compelling reason why rust's strategy has to be the only way to achieve memory safety?

        i think some people would argue RAII but you could trivially just make all deacquisition steps an explicit keyword that must take place in a valid program, and have something (possibly the compiler, possibly not) check that they're there.

        • solidsnack9000 a day ago

          I don't think a good conversation can be had if we start by arguing about whether or not "rust's strategy has to be the only way to achieve memory safety".

          There are other ways to achieve memory safety. Java's strategy is definitely a valid one; it's just not as suitable for systems programming. The strength of Rust's approach ultimately stems from its basis in affine types -- it is a general purpose and relatively rigorous (though not perfect, see https://blog.yoshuawuyts.com/linearity-and-control/) approach to managing resources.

          One implication of this is that a point you raised in a message above this one, that "rust's default heap allocation can't be trivially used", actually doesn't connect. All variables in Rust -- stack allocated, allocated on the heap, allocated using a custom allocator like the one in Postgres extensions -- benefit from affine typing.

          • throwawaymaths a day ago

            My point about "strategy" is not theoretical, it's implementation. why does your lifetime typing have to be in the compiler? it could be a part of a static checking tool, and get out of the way of routine development, and guarantee safety on release branches via CI for example.

            also you could have affine types without RAII. without macros, etc. etc.

            theres a very wide space of options that are theoretically equivalent to what rust does that are worth exploring for devex reasons.

            • solidsnack9000 a day ago

              First, let me say that you're bringing up some points that are orthogonal to "rust's strategy" for memory safety. Macros are not part of that strategy, and neither are many other ergonomic curiosities of Rust, and you are right to point out that those could be different without changing the core value proposition of Rust. There is plenty to say about those things, but I think it is better to focus on the points you raise about static analysis to start with.

              Type systems are a form of static analysis tool, that is true; and in principle, they could be substituted by other such tools. Python has MyPy, for example, which provides a static analysis layer. Coverity has long been used on C and C++ projects. However, such tools can not "get out of the way of routine development" -- if they are going to check correctness of the program, they have to check the program; and routine development has to respond to those checks. Otherwise, how do you know, from commit to commit, that the code is sound?

              The alternative is, as other posters have noted, that people don't run the static analysis tool; or run it rarely; both are antipatterns that create more problems relative to an incremental, granular approach to correctness.

              Regarding macros and many other ergonomic features of Rust, those are orthogonal to affine types, that is true; but to the best of my knowledge, Rust is the only language with tightly integrated affine types that is also moderately widely used, moderately productive, has a reasonable build system, package infrastructure and documentation story.

              So when you say "theres a very wide space of options that are theoretically equivalent to what rust does that are worth exploring for devex reasons.", what are those? And how theoretical are they?

              It's probably true, for example, that dependently typed languages could be even better from a static safety standpoint; but it's not clear that we can tell a credible story of improving memory safety in the kernel (or mail servers, database servers, or other large projects) with those languages this year or next year or even five years from now. It is also hard to say what the "devex" story will be, because there is comparatively little to say about the ecosystem for such nascent technologies.

              • throwawaymaths 13 hours ago

                there are highly successful projects out there that, for example, turn on valgrind and asan only in test or dev builds.

                > how do you know, from commit to commit, that the code is sound?

                these days it's easy to turn on full checks for every commit at origin; a pull request can in principle be rejected if any commit fails a test, and rewriting git history by squashing (annoying but not impossible) can get you past an intermediate failure.

                • solidsnack9000 10 hours ago

                  But how is this "out of the way of routine development"?

                  It seems like, at least part of the time, you're discussing distinct use cases -- for example, the quick scripts you mention (https://news.ycombinator.com/item?id=43132877) -- some of which don't require the same level of attention as systems programming.

                  At other times, it seems like you're arguing it would be easier to develop a verified system if you only had to run the equivalent of Rust's borrow checker once in awhile -- on push or on release -- but given that all the code will eventually have to pass that bar, what are you gaining by delaying the check?

            • samus a day ago

              Static analysis has the big disadvantage that it can and will be ignored.

              • throwawaymaths 19 hours ago

                thats fine. you don't need to run static analysis on a quick program that you write yourself which, say, downloads a file off the internet and processes it, where you're the only consumer.

                or an hpc workload for a physics simulation that gets run once on 400,000 cores, where if it doesn't crash on your test run it probably won't at scale.

                if you're writing an OS, you will turn it on. in fact, even the rust ecosystem suggests this as a strategy, for example with Miri.

                • solidsnack9000 18 hours ago

                  Are you going to write a "quick program" in C, though? That is what we are comparing to, when we consider kernel development.

                  I wouldn't argue that Rust is a good replacement for Makefiles, shell build scripts, Python scripts...

                  An amazing thing about Rust, though, is that you actually can write many "quick programs" -- application level programs -- and it's a reasonably good experience.

                  • throwawaymaths 13 hours ago

                    > Are you going to write a "quick program" in C, though?

                    of course not, for kernel development. and in those cases, you WILL statically analyze.

                    • solidsnack9000 10 hours ago

                      But then what is the disagreement here, with regard to Rust and kernel development?

                • steveklabnik 19 hours ago

                  (Miri is not static analysis)

                  • throwawaymaths 19 hours ago

                    that's beside the point. it's a unit outside of the compiler that exists to give you extra safety checks.

                    • steveklabnik 19 hours ago

                      Yes, I do agree that it doesn't change the shape of things, I was just trying to clarify a little detail, not say that you're incorrect. I have my own feelings about this but they're not super straightforward.

              • adamrezich a day ago

                How so? Because somebody forgot to run it before publishing a kernel release?

                • samus a day ago

                  Because they can and will be ignored on a large scale unless the false positive rate is pleasantly low. And more importantly there is a large amount of existing code that simply doesn't yet pass.

            • kelnos a day ago

              How do you know that those other options haven't been explored, and rejected?

              And remember that your gripes with Rust aren't everyone's gripes. Some of the things you hate about Rust can be things that other people love about Rust.

              To me, I want all that stuff in the compiler. I don't want to have to run extra linters and validators and other crap to ensure I've done the right thing. I've found myself so much more productive in languages where the compiler succeeding means that everything that can (reasonably) be done to ensure correctness according to that language's guarantees has been checked and has passed.

              Put another way, if lifetime checking was an external tool, and rustc would happily output binaries that violate lifetime rules, then you could not actually say that Rust is a memory-safe language. "Memory-safe if I do all this other stuff after the compiler tells me it's ok" is not memory-safe.

              But sure, maybe you aren't persuaded by what I've said above. So what? Neither of us are Linux kernel maintainers, and what we think about this doesn't matter.

              • throwawaymaths 13 hours ago

                you're arbitrarily drawing the line of what counts as memory safe. i could say rust is memory unsafe because it allows you to write code in an unsafe block. or you could lose memory safety if you use any sort of ~ECS system, or functionally lose memory "safe"ty if you turn a pointer lookup into an index into an array (a common strategy for performance, if not to trick the borrow checker).

                what you really should care about is: is your code memory safe, not is your language memory safe.

                and this is what is so annoying about rust evangelists. To rust evangelists it's not about the code being memory safe (for example, you bet your ass seL4 is memory safe, even if the code is in C).

                • kobebrookskC3 3 hours ago

                  you wanna verify all your c just for memory safety? i bet if you actually tried to verify c for memory safety, you would come screaming back to rust.

                  and also seL4 is about 10k lines of code, designed around verification, sequential, and already a landmark achievement of verification. linux is like 3 orders of magnitude more code, not designed around verification, and concurrent.

        • EasyMark a day ago

          sometimes things just become the thing to use from momentum. I've personally never been that picky about languages. I code in whatever they pay me to code in. I still code most of my personal projects in c++ and python though.

          • throwawaymaths a day ago

            sounds like a recipe for stockholm syndrome. don't settle! demand more from your programming languages. losing sleep at 2am because you can't figure out a bug in prod is not worth it!

            • baq a day ago

              Not OP, but: If they aren’t paying for fixing bugs at 2am I’m not fixing bugs at 2am. Simple :)

    • caspper69 2 days ago

      After all the Ada threads last week, I read the PDF at AdaCore's site (the Ada for Java/C++ Programmers version), and there were a lot of surprises.

      A few that I found: logical operators do not short-circuit (so both sides of an or will execute even if the left side is true); it has two types of subprograms (procedures and functions; the former returns no value while the latter returns a value); and you can't fall through on the Ada equivalent of a switch statement (case..when).

      There are a few other oddities in there; no multiple inheritance (but it offers interfaces, so this type of design could just use composition).

      I only perused the SPARK pdf (sorry, the first was 75 pages; I wasn't reading another 150), but it seemed to have several restrictions on working with bare memory.

      On the plus side, Ada has explicit invariants that must be true on function entry & exit (they can be violated within), pre- and post-conditions for subprograms, which can catch problems during the editing phase, and it offers sum types and product types.

      Another downside is it's wordy. I won't go so far as to say verbose, but compared to a language like Rust, or even the C-like languages, there's not much shorthand.

      It has a lot of the features we consider modern, but it doesn't look modern.

      • jcmoyer 2 days ago

        > logical operators do not short-circuit (so both sides of an or will execute even if the left side is true)

        There are two syntaxes: `and` which doesn't short circuit, and `and then` which does. Ditto for `or` and `or else`.

        • bonzini a day ago

          Interestingly Rust uses the same convention for some methods: Option has "and_then", "or_else", and also a distinction between "unwrap_or" and "unwrap_or_else".
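
          A small illustration of those lazy variants (the values are made up):

          ```rust
          fn main() {
              let missing: Option<i32> = None;

              // `and_then` chains another fallible step; `or_else` computes a
              // fallback lazily, mirroring Ada's `and then` / `or else` naming.
              assert_eq!(Some(2i32).and_then(|n| n.checked_mul(10)), Some(20));
              assert_eq!(missing.or_else(|| Some(7)), Some(7));

              // `unwrap_or` takes an eager default; `unwrap_or_else` only runs
              // its closure if the Option is actually None.
              assert_eq!(missing.unwrap_or(0), 0);
              assert_eq!(missing.unwrap_or_else(|| 1 + 1), 2);

              println!("ok");
          }
          ```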

        • int_19h a day ago

          Coincidentally, this is the same as C and C++: you have & and && and then you have | and ||. We think of & and | as something that's only useful for bit twiddling, but when you apply them to boolean values, the semantics are exactly that of a non-short-circuiting boolean operator.
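
          Rust, incidentally, keeps the same pair on `bool`: `&` evaluates both operands, while `&&` short-circuits. A small sketch (the `noisy` helper is illustrative):

          ```rust
          // Records whether its body ran, so we can observe evaluation.
          fn noisy(ran: &mut bool) -> bool {
              *ran = true;
              true
          }

          fn main() {
              let mut ran = false;
              // `&&` short-circuits: with a false left side, the right side never runs.
              let _ = false && noisy(&mut ran);
              assert!(!ran);

              // `&` on bool is a plain bitwise AND: both sides always evaluate.
              let _ = false & noisy(&mut ran);
              assert!(ran);

              println!("ok");
          }
          ```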

        • caspper69 a day ago

          Thanks for the heads up.

      • int_19h a day ago

        > you can't fall through on the Ada equivalent of a switch statement (case..when).

        C is actually more of an odd one here, and the fallthrough semantics is basically a side effect of it being a glorified computed goto (with "case" being literally labels, hence making things like Duff's device a possibility). Coincidentally, this is why it's called "switch", too - the name goes back all the way to the corresponding Algol-60 construct.

  • qalmakka a day ago

    > the readability and aesthetics of Rust code

    I've been writing C/C++ code for the last 16 years and I think a lot of mental gymnastics is required in order to call C "more readable" than Rust. C syntax is only "logical" and "readable" because people have been writing it for the last 60 years; most of it is literally random hacks made under old constraints ({ instead of [ because they thought arrays would be more common than blocks, types in front of variables because C is just B with types, wonky pointer syntax, ...). It's like claiming that English spelling is "rational" and "obvious" just because it's the only language you know, IMHO.

    Rust sure has more features, but it is also way more regular and less quirky. And it has real macros, instead of insane text replacement; every C project over 10k lines I've worked on has ALWAYS had some insane macro magic. The Linux kernel itself is full of function-like macros that do all sorts of magic because C has no other way to run code at compile time at all.

  • petesergeant a day ago

    > The impression I get from simply reading these various discussions, is that some folks are not convinced that the pain from accepting Rust is worth the gain.

    You're correct that there is an honest-to-god split of opinion among smart people who can't find a consensus. So it's time for Linus to step up and mandate: "discussion done, we are doing x". No serious organization of humans can survive without a way to break a deadlock, and it seems long past time this discussion should have wrapped up with Linus making a decree (or whatever alternative voting mechanism they want to use).

mmooss a day ago

Regarding the Linux development process: How do Linux maintainers / contributors have time to read these long threads of long posts? Just this one discussion looks like it would take hours to read and these are busy developers.

How does it work? Are there only a few threads that they read? Which ones?

  • meibo a day ago

    Not sure if you've had this experience yet, but the one thing I've learned about being an involved maintainer of a sizeable open source project is that it's mostly about communicating.

    You'll be talking to a lot of people and making sure that everyone is on the same page, and that's what's going on here, hopefully. If you just shut up and write code all day, you probably aren't gonna get there and there will be conflict, especially if other people are touching your systems and aren't expecting your changes.

    • buzzardbait a day ago

      In the 20 years that I've been working on sizeable closed source projects, it's also mostly about communicating. Even if the team is small, it's mostly about communicating. Occasionally some developers don't want to communicate, and prefer to shut up and write code all day, like you said. That usually creates more conflict due to different expectations, regardless of how brilliant you are.

      • theshrike79 a day ago

        And if the team is remote and distributed (like the Linux kernel team has been pretty much always), communication and documentation is even more important.

        There is no "silent information" being distributed by random conversations around the office. If something is not explicitly written down, it did not happen and doesn't exist.

  • froh a day ago

    tooling and practice.

    First you use a tool designed around following mailing lists: a text-based mail reader. These represent the threads in a compact form, allow you to collapse threads and have them resurface only when new content shows up. They also allow pattern-based tagging and highlighting of content "relevant to you": senders of interest, direct mentions of your name/email address, ... and minor UX niceties like hiding the duplicated subject in responses (Re: yadda <- we know that, it's at the top of the thread already).

    Such tool ergonomics allow you to focus on what's relevant to you.

    Hint: Outlook doesn't cut it.

    And then, with the right tool, you practice: you learn to skim the thread view like you maybe once learned to skim the newspaper for relevant content.

    And with the right tool and practice in place, you can readily skim mailing lists during the day when you feel like it, and easily catch up after vacation.

  • baq a day ago

    Writing code in large teams is maybe 20% of time spent working, guesstimating on average. There are great engineers who write absolutely nothing mergeable for weeks.

  • mburns a day ago

    This hacker news post has more comments than the mailing list thread that inspired it. A roughly comparable amount of text. It’s a lot, but certainly doable.

    That + having a couple decades to refine your email client setup goes a long way.

  • kelnos a day ago

    I imagine this works just like it works for anyone: they prioritize what's important to them, and if they don't get to the things lower on their priority list, that's just life.

    I don't think it would be necessary for most kernel developers to read that entire email thread. I feel like I could get through the entire thing in a half hour by ruthlessly skimming and skipping replies that don't tell me anything I care about, and only reading in full and in detail the handful or two of emails that really interest me.

    And as a sibling says, a huge part of software development, especially when you're working with a large community of distributed developers, is communication. I expect most maintainers spend the majority of their time on communication, and less on writing code. And a lot of the contributors who write a lot of kernel code probably don't care too much about a lot of the organizational/policy-type discussion that goes on.

  • jrootabega a day ago

    One possibility is that they only use a small amount of time, mental effort, and context size to go over all of the messages at a relatively shallow level. If there is anything that lets them send the ball back into somebody else's court without fully digesting a message or thread, they will go for it. That other person will then be responsible for the effort of replying at all, thinking about the subject matter, accounting for other peoples' messages, and composing the reply message itself. They also probably further minimize reading intellectual subthreads, and instead keep practical, concrete items at the top of their stack.

    Overall, this means that they will sometimes err on the side of being deaf or dismissive.

  • yxhuvud a day ago

    First of all, this is what? A month or two of posts? Spreading the reading time out over that makes the cost almost go away. You can do it while drinking coffee or whatever, and when reading in better formats (say, in your inbox), you will see what a mail is about and skip it if you are not interested in that particular tangent.

    But also, don't expect this kind of flame war to be a regular thing. Most discussions are a lot smaller and involve few people.

    • mmooss 20 hours ago

      > First of all, this is what? A month or two of posts?

      It's 3 days of posts, according to the dates in the outline structure at the bottom.

typ 2 days ago

In my honest opinion, it's not a good idea to mix two programming languages side by side in the same monolithic codebase. It would be less problematic if they were used for different purposes or layers, like frontend and backend. But we know that still creates unpleasant friction when you have to work on both sides yourself, and it creates technical AND communication friction if the C devs and Rust devs work separately. As someone who works with embedded systems at times, I can imagine the pain of having to set up two toolchains (with vastly different build-infra beasts like GNU Make and Cargo) and the prolonged build times of CI and edit-compile-run debugging cycles, given the notoriously slow compile times of the Rust/LLVM compiler.

  • dralley 2 days ago

    >It would be less problematic if used for different purposes or layers, like frontend and backend.

    Good news! At the present moment, Rust is only being used for drivers. Who knows if that will change eventually, but it's already the case that the use case is contained.

  • cozzyd 2 days ago

    Fortunately the rust inside Linux doesn't use cargo and uses the normal kernel build system.

  • kelnos a day ago

    Greg K-H's email acknowledges that mixed-language projects are difficult to deal with. But he makes a good mitigating point: they are all Linux kernel maintainers and developers, and they all already work on very hard things. They can handle this.

    • account42 a day ago

      Sounds like hubris. If you are already working at the human limit, you definitely don't want to add any additional complexity.

      • astrobe_ a day ago

        It reminds me of the "broken window theory" [1], in the sense that when two windows are broken, breaking a third one seems not to matter (I of course don't suggest that Rust programmers are criminals; I have no proof yet ;-). It is a trap one can easily fall into, e.g. "this method is already huge, adding a couple of lines to it doesn't make a difference".

        [1] https://en.wikipedia.org/wiki/Broken_windows_theory

        • vacuity a day ago

          You are right, but this is being weighed against the advantages of adding Rust. I daresay no one would agree to more work for no perceived benefit. If you want to contest this tradeoff, that's a different tack.

  • hgs3 a day ago

    > It would be less problematic if used for different purposes or layers, like frontend and backend.

    Wouldn't a microkernel architecture shine here? Drivers could, presumably, reside in their own projects and therefore be written in any language: Rust, Zig, Nim, D, whatever.

  • saidinesh5 2 days ago

    The Rust in kernel doesn't use Cargo, does it? (Genuine question - someone do confirm)

    That being said, it depends on how well the two languages integrate with each other - I think.

    Some of the best programming experiences I've had so far were using Qt C++ with QML for the UI. The separation of concerns was so good, and QML was really well suited for what it was designed for - representing the UI state graph and scripting interactions etc. - and it had a specific role to fill.

    Rust in the kernel - does it have any specific places where it would fit well?

    • AlotOfReading a day ago

      Yes, cargo is involved. R4L currently works by invoking kbuild to determine the CFLAGS, then passes them to bindgen to generate the rust kernel bindings. It then invokes cargo under the hood, which uses the bindings and the crate to generate a static lib that the rest of the kernel build system can deal with.

  • eviks 2 days ago

    > the prolonged build time of CI and edit-compile-run debugging cycles

    Does Linux kernel development have hot reload on the C side as a comparison?

    • aaronmdjones a day ago

      It used to, until Oracle bought it out. It is not usable for changes to the ABI though; only kernel functions. The use case was hot-patching a running kernel to fix a security vulnerability in e.g. a device driver, but it could be used to modify almost any function.

      https://en.wikipedia.org/wiki/Ksplice

  • Jean-Papoulos a day ago

    Sure it's hard. But the security rewards are worth it.

AshamedCaptain 2 days ago

I do not understand how this is supposed to work in practice. If there are "Rust bindings" then the kernel cannot have a freely evolving internal ABI, and the project is doomed to effectively split into the "C" core side and the "Rust" side which is more client oriented. Maybe it will be a net win for Linux for finally stabilizing the internal APIs, and even open the door to other languages and out-of-tree modules. On the other hand, if there are no "Rust bindings" then Rust brings very little to the table.

  • mustache_kimono 2 days ago

    > I do not understand how this is supposed to work in practice. If there are "Rust bindings" then the kernel cannot have a freely evolving internal ABI...

    Perhaps I misunderstand your argument, but it sounds like: "Why have interfaces at all?"

    The Rust bindings aren't guaranteed to be stable, just as the internal APIs aren't guaranteed to be stable.

  • dralley 2 days ago

    ABI is irrelevant. Only external APIs/ABIs are frozen, kernel-internal APIs have always been allowed to change from release to release. And Rust is only used for kernel-internal code like drivers. There's no stable driver API for linux.

    • vlovich123 2 days ago

      External kernel APIs/ABIs are not frozen unless by external you only mean user space (eg externally loaded kernel modules try to keep up with dkms but source level changes require updates to the module source, often having to maintain multiple versions in one codebase with ifdef’s to select different kernel versions)

      • dralley 2 days ago

        Userspace, yes.

  • TransAtlToonz 2 days ago

    I don't understand why rust bindings imply a freezing (or chilling) of the ABI—surely rust is bound by roughly the same constraints C is, being fully ABI-compatible in terms of consuming and being consumed. Is this commentary on how Rust is essentially, inherently more committed to backwards compatibility, or is this commentary on the fact that two languages will necessarily bring constraints that retard the ability to make breaking changes?

    • AshamedCaptain 2 days ago

      Obviously the latter, which is already the point of contention that has started this entire discussion.

      • TransAtlToonz 2 days ago

        Can you explain why you think this? I don't understand the reasoning and it's certainly not "obvious". There's certainly no technical reason implying this, so is this just resistance to learning rust? C'mon, kernel developers can surely learn new tricks. This just seems like a defeatist attitude.

        EDIT: The process overhead seems straightforwardly worth it—rust can largely preserve semantics, offers the potential to increase confidence in code, and can encourage a new generation of contribution with a faster ramp-up to writing quality code. Notably nowhere here is a guarantee of better code quality, but presumably the existing quality-guaranteeing processes can translate fine to a roughly equivalently-capable language that offers more compile-time mechanisms for quality guarantees.

        • AshamedCaptain 2 days ago

          You phrased it rather well -- "increased constraints will retard the ability to make breaking changes". You are adding a second layer of abstraction that brings very little generalization but still doubles the mental load; there's no way it doesn't put significant additional pressure on making breaking changes. The natural reaction is that there will be less such breaking changes and interfaces will ossify. One can even argue this is what has already happened here.

          In addition, depending on the skill of the "binding writer", the second set of interfaces may simply be actually easier to use (and generally true, since the rust bindings are actually designed instead of evolved organically). This is yet another mental barrier. There may not even be a point to evolving one interface, or the other. Which just further contributes to splitting the project into two worlds.

          • kelnos a day ago

            > The natural reaction is that there will be less such breaking changes and interfaces will ossify. One can even argue this is what has already happened here.

            I don't think I'd agree with that. Current kernel policy is that the C interfaces can evolve and change in whatever way they need to, and if that breaks Rust code, that's fine. Certainly some subsystem maintainers will want to be involved in helping fix that Rust code, or help provide direction on how the Rust side should evolve, but that's not required, and C maintainers can pick and choose when they do that, if at all.

            Obviously if Rust is to become a first-class, fully-supported part of the kernel, that policy will eventually change. And yes, that will slow down changes to C interfaces. But I think suggesting that interfaces will ossify is an overreaction. The rate of change can slow to a still-acceptable level without stopping completely.

            And frankly I think that when this time comes, maintainers who want to ignore Rust completely will be few and far between, and might be faced with a choice to either get on board or step down. That's difficult and uncomfortable, to be sure, but I think it's reasonable, if it comes to pass.

          • TransAtlToonz 2 days ago

            > You are adding a second layer of abstraction that brings very little generalization

            Presumably, this is an investment in replacing code written in C. There's no way around abstraction or overhead in such a venture.

            > there's no way it doesn't significant put additional pressure when making breaking changes

            This is the cost of investment.

            > The natural reaction is that there will be less such breaking changes and interfaces will ossify.

            A) "fewer", not "less". Breaking changes are countable.

            B) A slower velocity of changes does not imply ossification. Furthermore, I'm not sure this is true—the benefits of formal verification of constraints surrounding memory-safety seems as if it would naturally lead to long-term higher velocity. Finally, I can't speak to the benefits of a freely-breakable kernel interface (I've never had to maintain a kernel for clients myself, thank god) but again, this seems like a worthwhile short-term investment for long-term gain.

            > In addition, depending on the skill of the "binding writer" (and generally, since the rust bindings are actually designed instead of evolving organically), the second set of interfaces may simply be actually easier to use. There may not even be a point to evolving one interface, or the other. Which just further contributes to splitting the project.

            Sure, this is possible. I present two questions, then: 1) what is lost with lesser popularity of the C interface with allegedly less stability, and 2) is the stability, popularity, and confidence in the new interface worth it? I think it might be, but I have no clue how to reason about the politics of the Linux ABI.

            I have never written stable kernel code, so I don't have confident guidance myself. But I can say that if you put a kernel developer in front of me of genius ability, I would still trust and be more willing to engage with rust code. I cannot conceive of a C programmer skilled enough they would not benefit from the additional tooling and magnification of ability. There seems to be some attitude that if C is abandoned, something vital is lost. I submit that what is lost may not be of technical, but rather cultural (or, eek, egoist), value. Surely we can compensate for this if it is true.

            EDIT, follow-up: if an unstable, less-used interface is desirable, surely this could be solved in the long term with two rust bindings.

            EDIT2: in response to an aunt comment, I am surely abusing the term "ABI". I'm using it as a loose term for compatibility of interfaces at a linker-object level.

            • dralley 2 days ago

              >Presumably, this is an investment in replacing code written in C. There's no way around abstraction or overhead in such a venture.

              Nobody is proposing replacing code right now. Maybe that will happen eventually, but it's off limits for now.

              R4L is about new drivers. Not even kernel subsystems, just drivers, and only new ones. IIRC there is a rule against having duplicate drivers for the same hardware. I suppose it's possible to rewrite a driver in-place, but I doubt anyone plans to do that.

              • surajrmal a day ago

                There is a binder driver rewrite in rust. Companies who care are certainly rewriting drivers. If there is pushback in upstreaming them that will cause a lot of noise.

              • TransAtlToonz a day ago

                [flagged]

                • kelnos a day ago

                  > Why not? That's the really juicy part of the pitch.

                  For now, it's because for logistical and coordination reasons, Rust code is allowed to be broken by changes to C code. If subsystems (especially important ones) get rewritten in Rust, that policy cannot hold.

                  > yes i get there are linux vets we need to be tender with. This shouldn't obstruct what gets committed.

                  Not sure why you believe that. We're not all robots. People need to work together, and pissing people off is not a way to facilitate that.

                  > if this is what linux conflict resolution looks like, how the hell did the community get anything done for the last thirty years?

                  Given that they've gotten a ton done in 30 years, I would suggest that either a) your understanding of their conflict-resolution process is wrong, or b) your assertion that this conflict-resolution process doesn't work is wrong.

                  I would suggest you re-check your assumptions.

                  > You quarter-assed this reply so I'm sure your next one's gonna be a banger.

                  Please don't do this here. There's no reason to act like this, and it's not constructive, productive, interesting, or useful.

            • biorach a day ago

              > A) "fewer", not "less". Breaking changes are countable.

              this just makes you look pedantic and passive aggressive

  • wffurr a day ago

    From what I have read, the intent seems to be that a C maintainer can make changes that break the Rust build. It’s then up to the Rust binding maintainer to fix the Rust build, if the C maintainer does not want to deal with Rust.

    The C maintainer might also take patches to the C code from the Rust maintainer if they are suitable.

    This puts a lot of work on the Rust maintainers to keep the Rust build working and requires that they have sufficient testing and CI to keep on top of failures. Time will tell if that burden is sustainable.

    • kevincox a day ago

      > Time will tell if that burden is sustainable.

      Most likely this burden will also change over time. Early in the experiment it makes sense to put most of the burden on the experimenters and avoid it from "infecting" the whole project.

      But if the experiment is successful then it makes sense to spread the workload in the way that minimizes overall effort.

      • estebank a day ago

        It took me a while to understand the conflict until this dawned on me. It doesn't matter how many assurances the R4L team gives that they are on the hook for keeping up with breaking changes during the experiment; some maintainers were dismissive of the project altogether, because if the project is successful, then they have to care. It wasn't until recently that I realized we are all operating on different definitions of success. If your definition of success is "it proves that it's possible to get it working", the project succeeded ages ago, which means that you're running out of time to stop the project if you really don't want to ever have to care about it. But that's not the definition of a successful experiment, because otherwise it would already have been declared one. One potential definition of success is "all of the tooling necessary is there, it's reliable, the code quality is higher than what was there before, and the number of defects in the new code is statistically lower". If that is the goal, then the point where maintainers must fix bindings as part of refactors is pushed further into the future. But that success goal also implies that everything is in place to be minimally disruptive to maintainers already.

        If it were me, I would have started building the relationships with the R4L team now, "acting as-if" Rust is here to stay and part of the critical path, involving them when refactors happen but without the pressure to wait for them before landing C changes. That way you can actually exercise the workflow, get real experience of what the pain might be, and work on improving that workflow before it becomes an issue. Arguably, that is part of the scope of the experiment!

        The fear that everyone from R4L might get up and leave from one day to the next, leaving maintainers with Rust code they don't understand, is the same problem as current subsystem maintainers getting up and leaving from one day to the next, leaving no-one to maintain their code. The way to protect against that is to grow the teams, have a steady pipeline of new blood (by fostering an environment that welcomes new blood and encourages them to stick around), and have copious amounts of documentation.

  • saagarjha 2 days ago

    You can rewrite a lot of stuff in Rust and offer C bindings to it.

    • lucasyvas 2 days ago

      Rust is better used from the inside out. It’s just more controversial.

mustache_kimono 3 days ago

    But for new code / drivers, writing them in Rust where these types of bugs just can't happen (or happen much much less) is a win for all of us, why wouldn't we do this? -- greg k-h
  • uecker 3 days ago

    The question is to what extent this is true - given that Rust programmers also make stupid mistakes (e.g. https://rustsec.org/advisories/RUSTSEC-2023-0080.html) that look exactly like C bugs. Not that I think Rust does not have advantages in terms of safety, but probably not as much as some people seem to believe when making such arguments. The other question is at what cost it comes.

    • oconnor663 2 days ago

      Granted, there are plenty of people who don't understand these issues very well who think "Rust = no bugs". Of course they're wrong. But that said, this CVE is an interesting example of just how high the bar is that Rust sets for correctness/security. The bug is that, if you pass 18446744073709551616 as the width argument to this array transpose function, you get undefined behavior. It's not clear whether any application has ever actually done this in practice; the CVE is only about how it's possible to do this. In most C libraries, on the other hand, UB for outrageous size/index parameters would be totally normal, not even a bug, much less a CVE. If an application screwed it up, maybe you'd open a CVE against the application.

      • uecker a day ago

        Many exploits work because an attacker tweaks the circumstances to some unlikely situation.

    • saghm 2 days ago

      I'd argue that he addresses this with the two paragraphs immediately preceding the one quoted above:

      > As someone who has seen almost EVERY kernel bugfix and security issue for the past 15+ years (well hopefully all of them end up in the stable trees, we do miss some at times when maintainers/developers forget to mark them as bugfixes), and who sees EVERY kernel CVE issued, I think I can speak on this topic.

      > The majority of bugs (quantity, not quality/severity) we have are due to the stupid little corner cases in C that are totally gone in Rust. Things like simple overwrites of memory (not that rust can catch all of these by far), error path cleanups, forgetting to check error values, and use-after-free mistakes. That's why I'm wanting to see Rust get into the kernel, these types of issues just go away, allowing developers and maintainers more time to focus on the REAL bugs that happen (i.e. logic issues, race conditions, etc.)

      > I'm all for moving our C codebase toward making these types of problems impossible to hit, the work that Kees and Gustavo and others are doing here is wonderful and totally needed, we have 30 million lines of C code that isn't going anywhere any year soon. That's a worthy effort and is not going to stop and should not stop no matter what.

      > But for new code / drivers, writing them in rust where these types of bugs just can't happen (or happen much much less) is a win for all of us, why wouldn't we do this?

    • tptacek 2 days ago

      This is a false tradeoff. The big win for Rust in the kernel is for new code. Bug density and impact is highest in newer code (it may, according to recent research, actually decay exponentially). There's no serious suggestion that existing code get forklifted out for new Rust code, only that the project create a streamlined affordance for getting new drivers into the kernel in Rust rather than C.

    • kelnos a day ago

      Rust doesn't claim to protect you from integer overflow bugs, so I'm not sure what you're trying to prove by linking to that security advisory.

      But it does protect against memory leaks, use-after-free, and illegal memory access. C does not.

      > The other question is at what cost it comes.

      I think I trust the kernel developers to decide for themselves if that cost is worth it. They seem to have determined it is, or at least worth it enough to keep the experiment running for now.

      Greg K-H even brings this up directly in the linked email, pointing out that he has seen a lot of bugs and security issues in the kernel (all of them that have been found, when it comes to security issues), and knows how many of them are just not possible to write in (safe?) Rust, and believes that any pain due to adopting Rust is far outweighed by these benefits.

      • fc417fc802 a day ago

        > But it does protect against ... illegal memory access

        To be clear, the linked CVE is an example of illegal memory access as a result of integer overflow. Of course, the buggy code involves an unsafe block so ... everything working as advertised. It's certainly a much higher bar for safety and correctness than C ever set.

    • saagarjha 2 days ago

      Are these people in the room with us right now? Come on, man. This is a horrible argument to make. Rust has these problems happen exceptionally rarely, in clearly marked places, and when they get fixed they strengthen all the code that relies on them. In C you have these bugs happen every hundred lines of code. It's not even worth comparing. This is the programming equivalent of bringing up shark attacks versus car crashes.

      • uecker a day ago

        Sorry, that is not obvious to me. I agree that Rust has an advantage. Still, to me it seems there is a chain of arguments where each link contains a bit of exaggeration: improving safety in Linux kernel code is super extremely important, memory safety is the most important aspect, Rust gives you basically full memory safety, etc. Each such statement is true to some extent but exaggerated, in my opinion. At the same time, alternatives for improving safety in C code are downplayed and C is presented as hopelessly bad. So when I take all these aspects into account, I find the full story not as convincing anymore.

    • int_19h a day ago

      If I understand correctly, this particular issue that you've linked to can only trigger a buffer overflow because the implementation of transpose() is written in unsafe Rust.

      • uecker a day ago

        Yes. So what? That doesn't count then?

    • pajko 3 days ago
      • steveklabnik 3 days ago

        Some of these CVEs only exist because Rust takes security seriously. There was a filesystem bug: https://blog.rust-lang.org/2022/01/20/cve-2022-21658.html

        This impacted C++'s standard library as well, but since the standard says it's undefined behavior, they said "not a bug" and didn't file CVEs.

        Nobody believes that Rust programs will have zero bugs or zero security vulnerabilities. It's that it can significantly reduce them.

        • BlackFly a day ago

          To me, this attitude of the rust community is another benefit of rust: there is a general commitment that idiomatic rust code handles and exposes when things can go wrong.

      • merb 2 days ago

        Just skimming the first few entries:

        - most are UB in binding code between Rust and language X

        - if it's not binding code, the severity is often below 5, which most often isn't a bug that will affect you

        - exceptions are code with heavy async usage and user-input handling (which Rust never advertised to fix, and which is common in all languages, even ones with GC)

voidr 2 days ago

We should have seen this post before Hector Martin got so fed up that he decided to resign (to be fair, he probably had other issues as well that contributed).

I was very confused by the lack of an actual response from Linus, he only said that social media brigading is bad, but he didn't give clarity on what would be the way forward on that DMA issue.

I have worked in a similar situation and it was the worst experience of my work life. Being stonewalled is incredibly painful and having weak ambiguous leadership enhances that pain.

If I were a R4L developer, I would stop contributing until Linus codifies the rules around Rust that all maintainers would have to adhere to because it's incredibly frustrating to put a lot of effort into something and to be shut down with no technical justification.

  • dralley 2 days ago

    Clarity was apparently provided privately. However, I have to say that a public statement would have been better. I can only imagine how demoralizing it is for the R4L contributors to watch their work being trashed in public while the leadership is only willing to give reassurances privately. Not to mention it's bad for recruitment.

    • preisschild a day ago

      > Clarity was apparently provided privately

      Only to Hedwig if I understood correctly

  • zamalek 2 days ago

    You know, the complaint is that R4L would add undue load to existing maintainers (at least that's about the only coherent technical thing I've gathered from Christoph's emails). What also adds undue load to existing maintainers is causing their peers to quit. Hector Martin is a talented individual and the loss of him will surely be felt.

marhee a day ago

> The majority of bugs (quantity, not quality/severity) we have are due to the stupid little corner cases in C .... Things like simple overwrites of memory (not that rust can catch all of these by far), error path cleanups, forgetting to check error values, and use-after-free mistakes.

What's the reach here of linters/address san/valgrind?

Or a linter written specifically for the Linux kernel? Require (error-path) tests? It feels excessive to bring in another language if these are the main arguments. Are there any other arguments for using Rust?

And even without any extra tools to guard against common mistakes, how much effort do those bug fixes take anyway? Is it an order of magnitude larger than the cognitive load of learning a (not so easy!) language and context-switching continuously between them?

  • raverbashing a day ago

    You can't valgrind kernel space

    Linters might be helpful, but I don't remember there being good free ones

    The problem here is simple: C is "too simple" for its own good and it puts undue cognitive burden on developers

    And those who reply with "skill issue" are the first to lose a finger on it

    • marhee a day ago

      > You can't valgrind kernel space

      > Linters might be helpful, but I don't remember there being good free ones

      I should have Googled:

      https://www.kernel.org/doc/html/latest/dev-tools/

      So many tools here. Hard to believe these cannot come close to what Rust provides (if you put in the effort).

      • chippiewill a day ago

        Rust forces you to encode much more explicitly information about lifetimes and ownership into the code. Tools can only do so much without additional information.
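
        A hypothetical sketch of what that extra information looks like (the `Buffer` type and `first` method here are invented for illustration, not kernel code): the lifetime `'a` in the signature states, in the interface itself, that the returned reference must not outlive the buffer it points into, which is exactly the kind of fact a C analysis tool would have to infer or be told out-of-band.

```rust
// Hypothetical illustration (not real kernel bindings): the lifetime 'a ties
// the returned reference to the borrow of `self`, so the compiler can reject
// any caller that lets the reference outlive the buffer.
struct Buffer {
    data: Vec<u8>,
}

impl Buffer {
    fn first<'a>(&'a self) -> Option<&'a u8> {
        self.data.first()
    }
}

fn main() {
    let buf = Buffer { data: vec![7, 8, 9] };
    let first = buf.first();
    // drop(buf); // compile error: `first` still borrows `buf`
    assert_eq!(first, Some(&7));
    println!("ok");
}
```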

      • Avamander a day ago

        Those solve a bunch of problems and can avoid a lot of issues if used properly, but it's time-consuming and cumbersome to say the least. Nowhere near as seamless or holistic as what Rust has to offer.

        • marhee a day ago

          > but it's time-consuming and cumbersome to say the least

          Only when writing code (and not even then: only when doing final or intermediate checks on written code). When reading the code you don't have to use the tools. Code is read a lot more than it is written. So if tools are used, the burden falls only on the writer of the code. If Rust is used, the burden of learning Rust falls on both the writers and readers of the code.

          • estebank a day ago

            The same information that is useful for compilers (like rustc) and linters to reason about code, is also useful for humans to reason about code.

          • Filligree a day ago

            I find reading Rust much easier than writing it. It’s not actually that complicated a language; the complexity is in solving for its lifetime rules, which you only do when coding.

sim7c00 a day ago

It's really all opinions about what is better or worse, but I do respect the sentiment that there is some boundary: on one side of it, Rust makes a lot of sense, and on the other side, Rust does not work at all (managing global mutable resources). It weirds me out a bit that there are even such discussions going on in projects like this. It seems obvious, and proven at this point, that if you program within some large codebase or ecosystem, you are not the only voice, and you need to learn to collaborate with people who hold different views from yours and make it work.

I really don't like Rust, hence instead of wanting to contribute to projects which will inadvertently lead to more and more Rust code being brought in, I start my own projects, where I can be the only voice of reason and have my joys of making things segfault :>... It's quite simple. If like me you are stubborn and inflexible, you are a lone wolf. Accept it and move on to be happy :) rather than trying to piss against the wind of change.

  • ddtaylor a day ago

    That's true. I often want to just make something cool and I don't want someone turning it into a research project basically because they like compilers.

    • tormeh 18 hours ago

      > Research project

      As if everything created after 1979 is a research project.

WalterBright 2 days ago

These days, the bugs I generate in my own code are rarely programming errors. They're misunderstandings of the problem I am trying to solve, or misunderstandings of how to fit it into the rest of the (very complex) code.

For example, I cannot even recall the last time I had a double-free bug, though I used to do it often enough.

The emphasis for me is on a language that makes it easy to express algorithms.

  • jcranmer a day ago

    > For example, I cannot even recall the last time I had a double-free bug

    Honestly, it's not the double-frees I worry about, since even in a language like C where you have no aids to avoid it, the natural structure of programs tends to give good guidance on who is supposed to free an object (and if it's unclear, risking a memory leak is the safer alternative).

    It's the use-after-free I worry about, because this can come about when you have a data structure that hands out a pointer to an element that becomes invalid by some concurrent but unrelated modification to that data structure. That's where having the compiler bonk me on the head for my stupidity is really useful.
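
    A minimal Rust sketch of that scenario (an assumed example, not from the thread): holding a reference into a collection while the collection is modified is precisely what the borrow checker rejects.

```rust
// Minimal sketch of the parent's scenario: a reference handed out by a data
// structure would be invalidated by a later modification. In C the bad version
// compiles and may read freed memory; in Rust the commented-out line is a
// compile-time error (E0502).
fn main() {
    let mut v = vec![1, 2, 3];

    let first = &v[0]; // hand out a reference into the structure
    // v.push(4);      // ERROR: cannot borrow `v` as mutable while `first` is in use
    println!("first = {}", first);

    v.push(4); // fine here: the borrow through `first` has already ended
    assert_eq!(v.len(), 4);
    println!("ok");
}
```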

  • yazaddaruvala a day ago

    +1 I’ve really enjoyed using more declarative languages in recent years.

    At work I’ve been helping push “use SQL with the best practices we learned from C++ and Java development” and it’s been working well.

    It’s identical to your point. We no longer need to care about pointers. We need to care about defining the algorithms and parallel processing (multi-threaded and/or multi-node).

    Fun fact: even porting optimized C++ to SQL has resulted in performance improvements.

wewewedxfgdf 2 days ago

Community and people are the main issue.

If the people who work on the kernel now don't like that direction then that's a big problem.

The Linux leadership don't seem very focused on the people issues.

Where is the evidence that there is buy in from the actual people doing kernel development now?

Or is it just Linus and Greg as commanders saying "thou shalt".

  • dralley 2 days ago

    Plenty of Linux maintainers are either fully or partially on board with using Rust in drivers. Don't overindex on the opinions of two or three of them that are vocally opposed / skeptical.

    Christoph is a special case because his subsystem (DMA) is essentially required for the vast majority of useful device drivers that one might want to write. Whereas other subsystems are allowed to go at their own pace, being completely blocked on DMA access by the veto of one salty maintainer would effectively doom the whole R4L project. So whereas normally Linus would be more willing to avoid stepping on any maintainer's toes, he kind of has to here.

    • EasyMark a day ago

      I guess I simply don't understand why he's biased against Rust folks using his API as long as they aren't mucking about on his lawn. Why does he care? If the API and calling conventions are adhered to, it makes absolutely no difference to him or the hardware that it's running on. I don't understand his objections. If I write a C library or network service, I don't care if the person using it is using Rust, C, Ada, or COBOL...

      • chippiewill a day ago

        To play devil's advocate:

        1. He has a philosophical objection to a multi-lingual kernel, because it adds complexity, and it's not unreasonable to expect that to spread.

        2. It's fair enough to say it doesn't impact him now. But realistically, if Rust is a success and goes beyond an experiment, then at some point (e.g. in a decade) it will become untenable for subsystem maintainers to break the Rust bindings with changes and let someone else fix them before releases. I fully expect that there will be very important drivers written in Rust in the future, and it will be too disruptive to have the Rust build break on a regular basis just because Hellwig doesn't want to deal with it every time the DMA APIs are changed.

        So unsurprisingly Hellwig is reacting now, at the point when he can exert the most control, to avoid eventually being forced to either do some Rust work himself or step aside and let someone else do it.

        However this isn't realistically good enough. Linus already called the play when he merged the initial Rust stuff, the experiment gets to go on. The time to disagree and commit was back then.

  • mustache_kimono 2 days ago

    > Where is the evidence that there is buy in from the actual people doing kernel development now?

    Are the people doing the work not good enough? See the maintainers list -- Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich, etc., etc...

    Who else exactly do you want to buy in?

    > If the people who work on the kernel now don't like that direction then that's a big problem.

    I think if you really want to lead/fight a counter-revolution, it will come down to effort. If you don't like Rust for Linux (for what could be a completely legitimate reason), then you need to show how it is wrongheaded.

    Like -- reverse engineer an M1 GPU or some other driver, and show how it can be done better with existing tooling.

    What I don't think you get to do is wait and do nothing and complain.

  • bdhcuidbebe 2 days ago

    Another ”people perspective” point is the aging demographic of the kernel devs and the need to engage a new generation of devs. Betting on a modern language like Rust might just be what's needed on that note. And, according to Torvalds, they have the folks willing to do the work today.

    • surajrmal a day ago

      Is that a job for kernel folks to address or companies who hire people to work on the Linux kernel?

  • biorach a day ago

    > Where is the evidence that there is buy in from the actual people doing kernel development now?

    https://lwn.net/Articles/1007921/

    > To crudely summarize: the majority of responses thought that the inclusion of Rust in the Linux kernel was a good thing; the vast majority thought that it was inevitable at this point, whether or not they approved.

MalseMattie 3 days ago

This statement was sorely needed for this discussion to move forward. Hopefully the last section fills the needed parties with resolve

infogulch 3 days ago

The actual project is "let's modernize the internal kernel API surface", and "how tolerable is it to write against this API in Rust" is just the best metric at hand to measure the progress.

This is the correct frame for RFL proponents. You're welcome.

lousken 3 days ago

I wonder how Microsoft implements Rust in their kernel.

As for this issue, it's just the nature of any project: people will come and go regardless, so why not let those C developers leave and keep the Rust folks instead? At some point you have to steer the ship, and there will always be a group of people unhappy about the course

  • jeroenhd a day ago

    From what I can tell, Microsoft seems to have the advantage that a lot of in-kernel interfaces are documented and relatively stable. Linux guarantees that the userland APIs don't change, but when a kernel component changes you're out of luck. Windows seems much more focused on internal consistency and stability. Probably in part because a lot of proprietary software uses a lot of internal APIs not meant for public consumption and there's nothing Microsoft can do to stop that, really.

    In a way, these Rust bindings are somewhat stabilizing the Linux API as well, by putting more of the expectations and implications from documentation into compiler-validated code. However, this does imply certain changes are sure to break any Rust driver code one might encounter, and it may take Rust devs a while to redesign the interfaces to maintain compatibility. It's hardly a full replacement for a stable API.
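
    As a hypothetical sketch of what "putting documentation into compiler-validated code" can mean (the `Handle` type here is invented, not an actual kernel binding): a C API whose comment says "the caller must release the handle" can be wrapped so the rule is enforced by `Drop`.

```rust
use std::cell::Cell;
use std::rc::Rc;

// Hypothetical illustration, not the real kernel bindings: the documented rule
// "every acquired handle must be released" is enforced by Drop, so forgetting
// the release is no longer something you can write.
struct Handle {
    released: Rc<Cell<bool>>,
}

impl Drop for Handle {
    fn drop(&mut self) {
        // In a real binding this would call the C-side release function.
        self.released.set(true);
    }
}

fn main() {
    let flag = Rc::new(Cell::new(false));
    {
        let _h = Handle { released: flag.clone() };
        assert!(!flag.get()); // still held inside the scope
    } // `drop` runs here automatically
    assert!(flag.get()); // released without any explicit call
    println!("ok");
}
```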

    At the moment, there aren't enough Rust developers to take over kernel maintenance. Those Rust developers would also need to accept giant code trees from companies updating their drivers, so you need experts in both.

    With the increasing amount of criticism languages like C are receiving online, now that we have plainly better tooling, I think the number of new C developers will diminish over the coming years, but it may still take decades for the balance to shift.

  • EasyMark a day ago

    Or they can be adults and work it out. Sometimes you just have to put the kids in different sandboxes and keep them apart; that's why we have APIs and calling conventions.

  • bigfatkitten 2 days ago

    Alternatively, there's nothing preventing the Rust folks building their own kernel from the ground up.

    • dralley 2 days ago

      There are multiple kernels written in Rust already. Writing another one wouldn't be interesting.

      The point of R4L is that people want to write drivers for Linux in Rust. The corporate sponsors that are involved are also interested in writing drivers for Linux in Rust. Sure, Google could rebase Android on top of RedoxOS or Fuchsia and Red Hat could spend a decade writing a Linux Subsystem for RedoxOS, but neither wants to do those things. They want to write drivers, for Linux, in Rust.

      Telling them to write a new kernel is a bit like telling them they should go write a new package manager. It's a completely different thing from what they actually care about.

      • patrick451 2 days ago

        [flagged]

        • dralley 2 days ago

          Both Linus and Greg KH were actively supportive of the project and remain so. Several of the R4L developers were long-term Linux devs long before the project started (e.g. Dave Airlie). There are lots of current maintainers who aren't directly involved with R4L that still have a positive and optimistic outlook about it long-term. Just because a handful of maintainers are vocally in opposition does not mean that is the representative opinion.

          This is such an absurd, content-free argument, which is not surprising given how you closed it.

        • tcfhgj 2 days ago

          That's a good reputation to have, for most people

        • vlakreeh a day ago

          Ah yes, woke, the word used to describe something disliked.

          That's a misrepresentation of what's actually going on in the R4L project. Volunteers are enabling support for it within the kernel to allow for rust drivers in a way that explicitly does not require existing maintainers to change how they maintain their parts of the kernel. Maintaining rust support and the APIs consumed by Rust is the job of R4L and doesn't require any work from the existing maintainers who are allowed to make changes to their C that breaks Rust where the Rust will then be adjusted accordingly.

    • baq a day ago

      The kernel is not a problem. Drivers are. If it wasn’t for drivers we’d all be rolling our own custom kernels.

kelnos a day ago

It's really disappointing to me to see a lot of the negative reactions and comments here. I know it's popular and in vogue now to hate on Rust, but:

Influential people who have worked on the ins and outs of the Linux kernel for years and decades believe that adopting Rust (or at least keeping the Rust experiment going) is worth the pain it will cause.

That's really all that matters. I see people commenting here about how they think RAII isn't suitable for kernel code, or how keeping C and Rust interfaces in sync will slow down important refactoring and changes, or how they think it's unacceptable that some random tiny-usage architecture that Rust/LLVM doesn't support will be left behind, or... whatever.

So what! I'm not a Linux kernel developer or maintainer, and I suspect most (if not all) of the people griping here aren't either. What does it matter to you if Linux adopts Rust? Your life will not be impacted in any way. All that matters is what the maintainers think. They think this is a worthwhile way to spend their time. The people putting in the work get to decide.

p_ing 3 days ago

> the C++ language committee issues seem to be pointing out that everyone better be abandoning that language as soon as possible

What is he referring to?

  • jcranmer 3 days ago

    I can think of several issues with the C++ committee that people can reasonably point to (some of them mutually contradictory even!), but I have no idea which of them is being referred to. It's possible he's referring to profiles, which is one of those cases where there's mutually contradictory criticisms that can be leveled against it so I have no idea in that case if he thinks they're a good or a bad thing.

    Personally, the biggest issue that gives me fear for C++'s future is that the committee seems to have more or less stopped listening to implementer feedback and concerns.

  • saagarjha 2 days ago

    Presumably the endless documents they keep coming out with explaining how profiles will solve memory safety. Or whatever

  • jodrellblank 2 days ago

    https://izzys.casa/2024/11/on-safe-cxx/ is a long and opinionated drama and swear-filled read on the topic. snips from it:

    > "many people reading this might be familiar with the addition of the very powerful #embed preprocessor directive that was added to C. This is literally years of work brought about by one person, and that is JeanHeyd Meneide. JeanHeyd is a good friend and also the current editor of the C standard. And #embed started off as the std::embed proposal. Man, if only everyone in the world knew what the C++ committee did to fucking shut that shit down..."

    > ... "Herb [Sutter] ... spun up a Study Group, SG15, at the recommendation of GDR to handling “tooling” in the C++ ecosystem. This of course, paved the way for modules to get absolutely fucking steamrolled into the standard while allowing SG15 to act as a buffer preventing any change to modules lest they be devoid of Bjarne [Stroustrup] and Gaby [Gabriel Dos Reis]’s vision. Every single paper that came out of SG15 during this time was completely ignored."

    > "Gaby [Gabriel Dos Reis] is effectively Bjarne’s protégé. ... when it came to modules Gaby had to “prove himself” by getting modules into the language. Usually, the standard requires some kind of proof of implementation. This is because of the absolute disaster that was export template, a feature that no compiler that could generate code ever implemented. Thus, proof of modules workability needed to be given. Here’s where I bring in my personal conspiracy theory. The only instance of modules being used prior to their inclusion in the standard was a single email to the C++ mailing lists (please recall the amount of work the committee demanded from JeanHeyd for std::embed) where Gaby claimed that the Microsoft Edge team was using the C++ Modules TS via a small script that ran NMake and was “solving their problem perfectly”." ... the face she made when I asked [a Microsoft Employee] about Gaby’s statement signaled to me that the team was not happy. Shortly after modules were confirmed for C++20, the Microsoft Edge team announced they were throwing their entire codebase into the goddamn garbage and just forking Chromium... Gaby Dos Reis fucking lied, but at least Bjarne got what he wanted. ... This isn’t the first time Gaby has lied regarding modules, obviously...."

    > ... "This [different] paper is just frankly insulting to anyone who has done the work to make safer C++ syntax, going on to call (or at least allude to) Sean Baxter’s proposal an “ad hoc collection of features”. Yet another case of Gaby’s vagueries where he can feign ignorance. As if profiles themselves are not ad hoc attributes, that have the exact same problem that Bjarne and others argue against, specifically that of the virality of features. The C++ committee has had 8 years (8 long fucking years) to worry about memory safety in C++, and they’ve ignored it. Sean Baxter’s implementation for both lifetime and concurrency safety tracking has been done entirely in his Circle compiler [which] is a clean room, from the ground up, implementation of a C++ compiler. If you can name anyone who has written a standards conforming C++ compiler frontend and parser and then added metaprogramming and Rust’s lifetime annotation features to it, I will not believe you until you show them to me. Baxter’s proposal, P3390 for Safe C++ has a very large run down on the various features available to us..."

    > "Bjarne has been going off the wall for a while now regarding memory safety. Personally I think NASA moving to Rust hurt him the most. He loves to show that image of the Mars rover in his talks. One of the earliest outbursts he’s had regarding memory safety is a very common thing I’ve seen which is getting very mad that the definition a group is using is not the definition he would use and therefore the whole thing is a goddamn waste of time."

    > "You can also look at how Bjarne and others talk about Rust despite clearly having never used it. And in specifically in Bjarne’s case he hasn’t even used anything outside of Visual Studio! It’s all he uses. He doesn’t even know what a good package manager would look like, because he doesn’t fucking care. He doesn’t care about how asinine of an experience that wrangling dependencies feels like, because he doesn’t have to. He has never written any actual production code. It is all research code at best, it is all C++, he does not know any other language."

    > "Orson Scott Card didn't write Ender's Game [link] -> Ender's Game is an apologia for Hitler"

    > "this isn’t a one off situation. It isn’t simply just Bjarne who does this. John Lakos of Bloomberg has also done this historically, getting caught recording conversations during the closing plenary meeting for the Kona 2019 meeting because he didn’t get his way with contracts. Ville is another, historically insulting members and contributors alike (at one point suggesting that the response to a rejected paper should be “fuck you, and your proposal”), and I’m sure there are others, but I’m not about to run down a list of names and start diagnosing people like I’m a prominent tumblr or deviantart user in 2017."

    > "the new proposed (but not yet approved) Boost website. This is located at boost.io and I’m not going to turn that into a clickable link, and that’s because this proposed website brings with it a new logo. This logo features a Nazi dog whistle. The Nazi SS lightning bolts. Here’s a side by side of the image with and without the bolts being drawn over (Please recall that Jon Kalb, who went out of his way to initially defend Arthur O’Dwyer, serves on the C++ Alliance Board)."

    > "Arthur O’Dwyer has learnt to keeps his hands to himself, he does not pay attention to or notice boundaries and really only focuses on his personal agenda. To quote a DM sent to me by a C++ community member about Arthur’s behavior “We are all NPCs to him”. He certainly doesn’t give a shit. He’s been creating sockpuppets, and using proxies to get his changes into the LLVM and Clang project. Very normal behavior by the way."

    > "This is the state C++ is in, though as I’ve said plenty of times in this post, don’t get it twisted. Bjarne ain’t no Lord of Cinder. We’re stuck in a cycle of people joining the committee to try to improve the language, burning out and leaving, or staying and becoming part of the cycle of people who burn out the ones who leave."

    • TinkersW a day ago

      It is unfortunate that it is written in such an unhinged way, as there are probably some valid points mixed in with the insanity.

    • bowsamic a day ago

      I feel like this one rant has done untold damage to the credibility of those who have some reason to criticise C++

    • LAC-Tech a day ago

      I have no dogs in C++ internal politics, I haven't written C++ for years.

      But the author of that post clearly has some fairly serious mental problems.

      • 112233 a day ago

        I do not know what prompted you to write this remark, but I can assure you that working a long time on a C++ codebase and trying to keep up with the changes in the language can indeed result in lasting mental damage.

        • LAC-Tech a day ago

          Mainly the Beautiful Mind stuff about the Boost logo being a dog whistle for the SS, and also "Cosmopolitan" clearly being a reference to Stalin's "Rootless Cosmopolitanism".

          Also it's very disjointed, long, and incoherent. Classic schizo post.

          • 112233 a day ago

            That logo thing is super fishy. According to https://lists.boost.org/Archives/boost/2024/07/257143.php It cost $12000 to design professionally, yet it was dropped from the site after staying there only two months.

            If you look at the USPTO b/w image, it immediately invokes the Schutzstaffel if you have ever seen their insignia, and only later do you kinda maybe see it is a "B"

            For a paid professional logo design, not being aware of, like, one of the most widely known evil logos after swastika, I mean, ok.

            Plus the whole "we want to own your logo trademark please" with regards to an opensource project, what even actually is going on there

            • LAC-Tech a day ago

              So the conspiracy is they paid $12,000 for a temporary logo that had a hidden SS insignia, for 2 months?

              If you were a neo-nazi, do you think that's how you'd spend $12,000? Like is that the best bang for your buck, to maybe catch all those impressionable young men browsing the C++ boost library website, and subliminally bring them to your cause with your dashing B logo with a hidden mangled half of the SS insignia? Your local neo-nazi group would find a new treasurer immediately if you pulled a stunt like that!

              Anyway, if this shocks you, wait until you find it out the windows logo has a hidden swastika.

              • estebank a day ago

                The thing about dogwhistles is that they are designed to communicate to like-minded people in a way that can be explained away, while also making anyone attuned to them sound crazy if they try to point them out. Remember the white-power/ok symbol? The milk emoji? Pepe the frog? Marble statues?

                • LAC-Tech 20 hours ago

                  I can tell you right now that, based on what you're saying, you'd almost definitely consider me to be a Nazi.

                  I don't consider myself to be a Nazi, I'm nowhere near the historical definition of a Nazi, or even its modern reinterpretation. But I am 100% sure that given maybe 2 or 3 more messages, you'll call me one.

                  So we can end it here. I've outed myself, through various "dog whistles", that I am in fact a "nazi". And therefore there's no need to reply to me.

                  I accept being put on a list. Real name is in my profile.

                  • estebank 15 hours ago

                    > I can tell you right now that, based on what you're saying, you'd almost definitely consider me to be a Nazi.

                    You're reading an awful lot into what I wrote.

                    > But I am 100% sure that given maybe 2 or 3 more messages, you'll call me one.

                    > I've outed myself, through various "dog whistles", that I am in fact a "nazi". And therefore there's no need to reply to me.

                    I don't even know what to answer to this. I have no idea where this is coming from.

                    > I accept being put on a list. Real name is in my profile.

                    Who's putting you on what list?

                    I honestly have no idea what this reply has to do with anything, unless you are arguing that dog whistles don't exist? If so, there are a few "modern Nazis" that disagree with you.

                    • LAC-Tech 14 hours ago

                      Maybe I can be more clear

                      - marble statues are cool

                      - pepe the frog is right coded, but has been used in all sorts of contexts

                      - milk emoji.. I don't even get this one. wasn't it a software developer meme on X?

                      - It's 100% OK to be white, and I will die on this hill. Anyone who tells me that how I was born is not OK can die in a fire.

                      We're now 2 replies deep. Would you just like to denounce me as a "nazi" and get this over with? We both know that's where this is going.

              • 112233 a day ago

                I am sorry if my "something fishy" didn't come across clearly enough. It does not have to be a nazi conspiracy for something to make no sense and look really suspicious.

                I mean, hey, I paid for and registered all rights to this emblem, would you so kindly put it on every thing you are making? I promise not to charge you for it.

                Whatever is that even? Were they dying because of lack of registered logo?

                And no, first thing that comes to mind when seeing windows logo is not "wow nazi symbol".

                As I tried saying previously, this logo thing looks supremely fishy.

                To imply that the only motive behind it all was putting SS insignia up is taking argument to an absurd extreme. Do you argue that one's ideology cannot influence such things as logo design, unless it is the sole purpose behind it?

          • adamrezich a day ago

            That's why I kind of appreciate it though—I miss when this style of posting was more commonplace. But it's disingenuous to pretend that it's not extremely unhinged through and through.

      • buzzardbait 19 hours ago

        My initial reaction to this comment: "Wow, what a judgmental anonymous keyboard warrior. It couldn't possibly be that bad." (clicks the link)

        My reaction 2 minutes later: "Oh..."

vs4vijay 21 hours ago

This might be a silly question, but why don't we have something like PR gate pipelines that ensure a change passes before being picked up by a maintainer?

npalli 2 days ago

Prediction 2030: Linus Retires and C++ accepted as the primary language for writing the kernel.

Inadvertently, Rust makes working with C++ acceptable.

  • fooker a day ago

    You might be onto something.

    Android already uses a C++ hardware abstraction layer on top of Linux for writing drivers.

    It's a matter of politics to get something like this into the kernel.

chris_wot 3 days ago

"Rust also gives us the ability to define our in-kernel apis in ways that make them almost impossible to get wrong when using them. We have way too many difficult/tricky apis that require way too much maintainer review just to "ensure that you got this right" that is a combination of both how our apis have evolved over the years"

Funny, that's not Theodore T'so's position. The Rust guys tried to ask about interface semantics and he yelled at them:

https://www.youtube.com/watch?v=WiPp9YEBV0Q&t=1529s

  • tptacek 2 days ago

    I watched like 2 minutes of this and I don't understand what this is supposed to be saying about the current debate. There's a guy lecturing the audience about how there are 30 filesystems in the kernel and not all of them are going to be instantaneously converted to Rust. But gregkh and kees aren't suggesting that any of them be converted to Rust!

    • dralley 2 days ago

      It's only relevant to the current debate in the sense that that event was the trigger for Wedson (the first and OG R4L project contributor) to quit, which was only a few months ago, so it's a fresh wound marinating in the background while essentially the same drama unfolds all over again.

daft_pink 2 days ago

It’s hard. Most people agree it should have memory safety, but also I’m not looking to become a full scale maintainer either.

darksaints 3 days ago

I've been using Linux since 2005, and I've loved it in almost every circumstance. But the drama over the last couple of years surrounding Rust in the kernel has really soured me on it, and I'm now very pessimistic about its future. Beyond the emotional outbursts of various personalities, though, I don't think the problem is about which side is "right": both sides have extremely valid points. I don't think the problem is actually solvable, because managing a 40M+ SLoC codebase is barely tenable in general, and super duper untenable for something that we rely on for security while running in ring 0.

My best hope is for replacement. I think we've finally hit the ceiling of where monolithic kernels can take us. The Linux kernel will continue to make extremely slow progress while it deals with internal politics fighting against an architecture that can only get bigger and less secure over time.

But what could be the replacement? There's a handful of fairly mature microkernels out there, each with extremely immature userspaces. There doesn't seem to be any concerted efforts behind any of them. I have a lot of hope for SeL4, but progress there seems to be slow mostly because the security model has poor ergonomics. I'd love to see some sort of breakout here.

  • dralley 2 days ago

    Like 75% of those lines of code are in drivers or architecture-specific code (code that only runs for x86 or ARM or SPARC or POWER etc.)

    The amount of kernel code actually executing on any given machine at any given point in time is more likely to be around 9-12 million lines than anywhere near 40 million.

    And a replacement kernel won't eliminate the need for hardware drivers for a very wide range of hardware. Again, that's where the line count ramps up.

    • darksaints 21 hours ago

      Yes, of course. But apart from the (current) disadvantage that those drivers don't exist yet, those are all positives in favor of microkernel architectures. All of the massive SLOC codebases run in usermode and with full process isolation, require no specific language compatibility and can be written in any language, do not require upstreaming, and do not require extensive security evaluations from highly capable maintainers who have their focus scattered across 40m lines of code.

    • scns a day ago

      The amdgpu driver alone was over 5 million lines of code in 2023.

      • samus 21 hours ago

        Most of these are header files. I suspect most of their contents are constants and blobs autogenerated with some tool by AMD.

  • colonial 2 days ago

    Not a kernel guy, but - what's stopping a microkernel from emulating the Linux userspace? I know Microsoft had some success implementing the Linux ABI with WSL v1.0.

    I suppose the main objection to that is accepting some degree of lock-in with the existing userspace (systemd, FHS...) over exploring new ideas for userspace at the same time.

    • anp 2 days ago

      FWIW Fuchsia has a not-quite-a-microkernel and has been building a Linux binary compatibility layer: https://fuchsia.dev/fuchsia-src/concepts/starnix?hl=en.

      (disclaimer: I work on Fuchsia, on Starnix specifically)

      EDIT: for extra HN karma and related to the topic of the posted email thread, Starnix (Fuchsia's Linux compat layer) is written in Rust. It does run on top of a kernel written in C++ but Zircon is much smaller than Linux.

      • nechuchelo 2 days ago

        Nice to hear fuchsia is still being worked on. I was a bit concerned given there were no new changelogs published for half a year.

      • zamadatix 2 days ago

        Fascinating area to work in! I've had a few curiosity things come to mind before:

        What's the driving use case for Starnix? Well, obviously "run Linux apps on Fuchsia" like the RFC for it says... but "very specific apps as part of a specific use case which might be timeboxed" or "any app for the foreseeable future"?

        How complete in app support do you currently consider it compared to something like WSL1?

        What are your thoughts about why WSL2 went the opposite direction?

        Thanks!

        • anp 2 days ago

          > Fascinating area to work in!

          I agree! Lots of fun stuff to do.

          > What's the driving use case for Starnix?

          The Starnix code is open source like the rest of Fuchsia and anyone is obviously free to read it and form their own opinions about where it's useful or where it's headed, but as a mere corporate employee I can't comment on direction/strategy :(.

          > How complete in app support do you currently consider it compared to something like WSL1?

          I'm only familiar with WSL1 as an occasional user so I can't really say for sure.

          We run (and pass) a lot of tests compiled for Linux from the Linux Test Project, gVisor's compatibility test suite, and some other sources. There are still a lot of those tests that we don't yet pass :).

          > What are your thoughts about why WSL2 went the opposite direction?

          I don't know much about the history there. I've heard Nth-hand rumors that MS had a product strategy shift from Windows Phone Android compat (a relatively focused use case where edge cases might be acceptable) to trying to court developers (a broad use case where varying from their deployment environment might cause problems). I have no idea whether those rumors are accurate.

          I've also heard that it was hard to make Linux programs perform well on top of NTFS, and that virtualized ext4 actually worked better for Linux workloads where fs performance mattered at all. Something something dirent cache for stat()? Some of this is discussed on the WSL1 vs WSL2 web page[0].

          [0] https://learn.microsoft.com/en-us/windows/wsl/compare-versio...

        • mistercheph 2 days ago

          My money's on fuchsia developing as a potential android replacement, or at least replacement for the linux kernel, keeping the android userspace. Maybe something something chromeos unified computer phone tablet experience.

  • cardanome a day ago

    The rust drama is completely overblown considering rust is still years away from being a viable replacement. Sure it makes sense to start experimenting and maybe write a few drivers in rust but many features are still only available in nightly rust.

    I suspect many rust devs tend to be on the younger side, while the old C guard sees Linux development in terms of decades. Change takes time.

    Monolithic kernels are fine. The theoretical architectural advantages of a microkernel design are mostly not worth the higher complexity and worse performance.

    If you wanted to get out of the current local optimum you would have to think outside of the unix design.

    The main threat to Linux is the Linux Foundation, which is controlled by big-tech monopolists like Microsoft and spends only a small fraction on actual kernel development. It is embrace, extend, extinguish all over again, but people think Microsoft are the good guys now.

    • chippiewill a day ago

      > but many features are still only available in nightly rust.

      Nope, the features are all in stable releases (since last spring, in fact). However, some of them are still marked unstable/experimental and have to be opted into (so could, in theory, still have breaking changes). They're entirely features specific to kernel development, needed only in the Rust bindings layer to provide safe abstractions in a kernel environment.

  • bigfatkitten 3 days ago

    > I have a lot of hope for SeL4, but progress there seems to be slow mostly because the security model has poor ergonomics.

    seL4 has its place, but that place is not as a Linux replacement.

    Modern general purpose computers (both their hardware, and their userspace ecosystems) have too much unverifiable complexity for a formally verified microkernel to be really worthwhile.

    • yencabulator 2 days ago

      Oh don't worry, seL4 isn't formally proven on any multicore computer anyway.

      And the seL4 core architecture is fundamentally "one single big lock" and won't scale at all to modern machines. The intended design is that each core runs its own kernel with no coordination (multikernel a la Barrelfish) -- none of which is implemented.

      So as far as any computer with >4 cores is concerned, seL4 is not relevant at this time, and if you wish for that to happen your choice is really either funding the seL4 people or getting someone else to make a different microkernel (with hopefully a lot less CAmkES "all the world is C" mess).

      • vacuity 2 days ago

        Barrelfish! My dream project is developing a multikernel with seL4's focus on assurance. I want to go even further than seL4's minimalism, particularly with regards to the scheduler. I thiiiiink it doesn't have to be bad for performance. But I've not materialized anything and so I am just delusional. And yes, I am thinking of doing it in Rust. For all of Rust's shortcomings, especially for kernel development, I think it has a lot of promise. I also have the already-loves-Rust cognitive bias. Not trying to somehow achieve seL4's massive verification effort. (Will gasp AI faciliate it? Not likely.) I am sad that Barrelfish hasn't gotten more attention. We need more OS research.

        • yencabulator 2 days ago

          You didn't mention capabilities, otherwise very much yes. Though I have to say I think I'd vote EROS or Theseus harder than Barrelfish; Barrelfish is "just" a multikernel. All I've materialized is a bunch of notes and bookmarks.

          Kani et al are very interesting, but can't handle large codebases yet. I'm trying to write Rust in very compartmentalized, sans-IO etc, way to have small libraries that are fuzzable and more amenable to Kani verification.

          • vacuity a day ago

            Capabilities are implied; seL4 did it (everyone cool does it) and Barrelfish designed their capability system off of seL4's. When I say multikernel, that is the main innovation of Barrelfish to take; the kernel design otherwise is more like seL4 or perhaps EROS. Barrelfish also has some interesting takes on other OS services that I want to use, but that's not kernel design. I assume Theseus is [0]. It requires the whole system to buy in to one language and model, so I consider it untenable for a general-purpose OS. I think they're planning to use WASM to increase flexibility, but eh.

            > I'm trying to write Rust in very compartmentalized, sans-IO etc, way to have small libraries that are fuzzable and more amenable to Kani verification.

            Good design even if Kani isn't used in the end.

            [0] https://www.usenix.org/conference/osdi20/presentation/boos

            • yencabulator a day ago

              Yeah, also https://github.com/theseus-os/Theseus

              One could also run virtual machines for end user workloads under a Theseus design. (The other meaning, not bytecode interpreter.) That sounds like a nice way to real world applicability, to me. History has shown reimplementing Linux syscalls is not realistic (gVisor, WSL1).

    • darksaints 2 days ago

      I agree that SeL4 won't replace Linux anytime soon, but I beg to differ on the benefits of a microkernel, formally verified or not.

      Any ordinary well-designed microkernel gives you a huge benefit: process isolation of core services and drivers. That means that even in the case of an insecure and unverified driver, you still have reasonable expectations of security. There was an analysis of Linux CVEs a while back, and the vast majority of critical Linux CVEs to that date would either be eliminated or mitigated below critical level just by using a basic microkernel architecture (not even a verified microkernel). Only 4% would have remained critical.

      https://microkerneldude.org/2018/08/23/microkernels-really-d...

      The benefit of a verified microkernel like SeL4 is merely an incremental one over a basic microkernel like L4, capable of capturing that last 4% and further mitigating others. You get more reliable guarantees regarding process isolation, but architecturally it's not much different from L4. There's a little bit of clunkiness in writing userspace drivers for SeL4 that you wouldn't have for L4. That's what the LionsOS project is aiming to fix.

      • zozbot234 2 days ago

        Process isolation of drivers is just not very useful when the driver is interfacing with a device that has full access to system memory. Which is the case for many devices today unless you use IOMMU to prevent this.

        • darksaints 2 days ago

          The SeL4 microkernel specification assumes the use of a memory management unit, which is required by default.

          https://docs.sel4.systems/projects/sel4/frequently-asked-que...

          • comex 2 days ago

            IOMMU, not regular (CPU) MMU. The FAQ _does_ address this, but it's under "What about DMA?". In short: drivers have to be trusted for now, except that there's experimental support for x86 VT-d (which is a type of IOMMU).

            • vacuity 2 days ago

              I've been developing a notion that, in modern times, a microkernel is not the sole root of trust. It is just the most privileged component that glues other essential components. Without it, things fall apart, but we still need quality in the "business logic" components (everything else, from this view). So a user should deploy a trusted microkernel with trusted means of download and whatever opsec, and similarly for other crucial components like drivers.

              This is all essential trust anyways. The leaps and bounds we've achieved through hardware engineering have the burden that they aren't credible for security. You can use IOMMU, but perhaps I won't. Integrated co-development of hardware and software is ideal, but generally there is an adversarial relationship, and we must reflect that in the software. Trust and security are not yes/no questions. We have to keep pushing boundaries. seL4 is a good start; let's make more from it.

            • darksaints a day ago

              Ah yes, thanks for the correction.

          • nine_k 2 days ago

            Not the memory management unit (MMU), but the I/O memory management unit (IOMMU). That is, can a device start a DMA transfer from/to anywhere in physical RAM? Does this access have to pass through virtual address translation? For stuff like GPUs and even NICs, the performance implications can be noticeable.

            • vacuity 2 days ago

              Yeah, not a big fan either. I also saw some suggestion that current implementations of IOMMUs aren't highly secure. Performance is always the big opponent to security. I wrote about my thoughts on not necessarily relying on IOMMUs adjacent to this reply: https://news.ycombinator.com/item?id=43122900.

      • vacuity 2 days ago

        Your view is not espoused enough. Thank you for this comment. I'm not suggesting we just go and use seL4 myself, but it's a strong foundation that shows we don't have to be so cynical about the potential of microkernels.

    • EasyMark a day ago

      I mean, why does it have to be formally verified? It seems to me the performance tradeoff of microkernels can be worth it to have drivers and other traditionally kernel-level code that doesn't bring down the system and can just be restarted in case of failure. Probably not something that will work for all hardware, but I would bet the majority would be fine with it.

      • darksaints a day ago

        At this point, even an unverified kernel would be a huge step up in terms of security and reliability.

        And the performance disadvantages of a microkernel are all overblown, if not outright false [1]. Sure, you have to make twice as many syscalls as with a monolithic kernel, but you can do it with much better caching behavior, due to the significantly smaller size. The SeL4 kernel is small enough to fit entirely in many modern processors' L2 cache. It's entirely possible (some chip designers have hinted as much) that with enough adoption they could prioritize having dedicated caches for the OS kernel...something that could never be possible with any monolithic kernel.

        [1] https://trustworthy.systems/publications/theses_public/23/Pa...

      • kennysoona a day ago

        > I mean why does it have to be formally verified.

        Because we can and the security advantages are worth it.

  • timeon 2 days ago

    > But what could be the replacement? There's a handful of fairly mature microkernels out there

    Redox[0] has the advantage that no one will want to rewrite it in Rust.

    [0]: https://redox-os.org/

  • raspyberr 2 days ago

    GNU Mach! GNU Mach! GNU Mach! GNU Mach! GNU Mach! GNU Mach!

chris_wot 3 days ago

"he C++ language committee issues seem to be pointing out that everyone better be abandoning that language as soon as possible if they wish to have any codebase that can be maintained for any length of time."

I'd love to know where he got this impression. The new C++ features go a long way to helping make the language easier, and safer, to use.

  • ultimaweapon a day ago

    Of course modern C++ is safer, but you can still shoot yourself in the foot. When writing C++ you still have to think about memory safety everywhere; in Rust you largely don't. The only time you need to think about memory safety in Rust is when using the unsafe keyword, and that code can be isolated into a dedicated function.

    Most C++ developers may not understand what I mean; you need to be proficient in Rust to see it. When I was still using C++ as my primary language, I had the same feelings about Rust as other C++ developers. Once you get comfortable with Rust, you will see that it is superior to C++ and you won't want to use C++ anymore.
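    A minimal sketch of that isolation (the names here are made up for illustration; this is not kernel code): the one unsafe operation hides behind a safe function, and every caller stays in safe Rust.

```rust
// Safe wrapper: callers never write `unsafe` themselves.
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        None
    } else {
        // SAFETY: `bytes` is non-empty, so index 0 is in bounds.
        Some(unsafe { *bytes.get_unchecked(0) })
    }
}

fn main() {
    assert_eq!(first_byte(b"kernel"), Some(b'k'));
    assert_eq!(first_byte(b""), None);
    println!("ok");
}
```

    If `first_byte` is correct, no amount of safe calling code can reintroduce the memory error; that is the review-effort saving being claimed.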

  • chippiewill a day ago

    C++ _has_ been getting safer and safer to write. However:

    1. The dangerous footguns haven't gone away.

    2. There are certain safety problems that simply can't be solved in C++ unless you accept that the ABI will be broken and the language won't be backwards compatible.

    Circle (https://www.circle-lang.org/site/index.html) and Carbon (https://docs.carbon-lang.dev/) were both started to address this fundamental issue that C++ can't be fully fixed and made safe like Rust without at least some breaking changes.

    This article goes into more depth: https://herecomesthemoon.net/2024/11/two-factions-of-cpp/

    In the case of the Linux kernel, a lot of the newer features C++ has delivered aren't _that_ useful for improving safety, because kernel space has special requirements that rule many of them out. I think Greg is specifically alluding to the "Safety Profiles" approach that the C++ committee looks likely to go with to address the big safety issues C++ hasn't yet addressed; that's not going to land any time soon, and it still won't be as comprehensive as Rust.

Surac a day ago

Perhaps someone can point me to a link where I can get information on WHY it is so hard to call C from Rust, or to call into Rust code from C. I don't follow the discussion because I don't understand the issue.

  • pornel a day ago

    It's not hard to just call C. Rust supports C ABI and there's tooling for converting between C headers and Rust interfaces.

    The challenging part is making a higher-level "safe" Rust API around the C API. Safe in the sense that it fully uses Rust's type system, lifetimes, destructors, etc. to uphold the safety guarantees that Rust gives and make it hard to misuse the API.

    But the objections about Rust in the kernel weren't really about the difficulty of writing the Rust code, but more broadly about having Rust there at all.
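    Both layers can be sketched in a few lines. Here libc's `strlen` stands in for a kernel C API (the wrapper and its name `c_string_len` are made up for illustration):

```rust
use std::ffi::CStr;
use std::os::raw::c_char;

// The raw binding: just a declaration of the C symbol's ABI.
// Calling it is `unsafe` because the compiler can't check C's contract.
extern "C" {
    fn strlen(s: *const c_char) -> usize;
}

// The safe wrapper: a &CStr already guarantees a valid, NUL-terminated
// string, which is exactly the invariant strlen requires.
fn c_string_len(s: &CStr) -> usize {
    // SAFETY: `s.as_ptr()` is non-null and NUL-terminated by CStr's contract.
    unsafe { strlen(s.as_ptr()) }
}

fn main() {
    let s = CStr::from_bytes_with_nul(b"hello\0").unwrap();
    assert_eq!(c_string_len(s), 5);
    println!("ok");
}
```

    The hard work is in proving the `SAFETY` comment: here `&CStr` happens to encode exactly the invariant `strlen` needs, but for most kernel APIs someone first has to pin down what the invariants even are.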

  • _kb a day ago

    FFI is inherently unsafe. That interfacing means wrapping the C API in a safe interface based on some set of invariants. If they don't hold, then you're in undefined behaviour territory. See https://doc.rust-lang.org/nomicon/ffi.html for a fairly in-depth rundown.

nailer 2 days ago

Pasting the entire thing so people on mobile can read it (at least on iPhone, readability mode doesn't work here):

As someone who has seen almost EVERY kernel bugfix and security issue for the past 15+ years (well hopefully all of them end up in the stable trees, we do miss some at times when maintainers/developers forget to mark them as bugfixes), and who sees EVERY kernel CVE issued, I think I can speak on this topic.

The majority of bugs (quantity, not quality/severity) we have are due to the stupid little corner cases in C that are totally gone in Rust. Things like simple overwrites of memory (not that rust can catch all of these by far), error path cleanups, forgetting to check error values, and use-after-free mistakes. That's why I'm wanting to see Rust get into the kernel, these types of issues just go away, allowing developers and maintainers more time to focus on the REAL bugs that happen (i.e. logic issues, race conditions, etc.)

I'm all for moving our C codebase toward making these types of problems impossible to hit, the work that Kees and Gustavo and others are doing here is wonderful and totally needed, we have 30 million lines of C code that isn't going anywhere any year soon. That's a worthy effort and is not going to stop and should not stop no matter what.

But for new code / drivers, writing them in rust where these types of bugs just can't happen (or happen much much less) is a win for all of us, why wouldn't we do this? C++ isn't going to give us any of that any decade soon, and the C++ language committee issues seem to be pointing out that everyone better be abandoning that language as soon as possible if they wish to have any codebase that can be maintained for any length of time.

Rust also gives us the ability to define our in-kernel apis in ways that make them almost impossible to get wrong when using them. We have way too many difficult/tricky apis that require way too much maintainer review just to "ensure that you got this right" that is a combination of both how our apis have evolved over the years (how many different ways can you use a 'struct cdev' in a safe way?) and how C doesn't allow us to express apis in a way that makes them easier/safer to use. Forcing us maintainers of these apis to rethink them is a GOOD thing, as it is causing us to clean them up for EVERYONE, C users included already, making Linux better overall.

And yes, the Rust bindings look like magic to me in places, someone with very little Rust experience, but I'm willing to learn and work with the developers who have stepped up to help out here. To not want to learn and change based on new evidence (see my point about reading every kernel bug we have.)

Rust isn't a "silver bullet" that will solve all of our problems, but it sure will help in a huge number of places, so for new stuff going forward, why wouldn't we want that?

Linux is a tool that everyone else uses to solve their problems, and here we have developers that are saying "hey, our problem is that we want to write code for our hardware that just can't have all of these types of bugs automatically".

Why would we ignore that?

Yes, I understand our overworked maintainer problem (being one of these people myself), but here we have people actually doing the work!

Yes, mixed language codebases are rough, and hard to maintain, but we are kernel developers dammit, we've been maintaining and strengthening Linux for longer than anyone ever thought was going to be possible. We've turned our development model into a well-oiled engineering marvel creating something that no one else has ever been able to accomplish. Adding another language really shouldn't be a problem, we've handled much worse things in the past and we shouldn't give up now on wanting to ensure that our project succeeds for the next 20+ years. We've got to keep pushing forward when confronted with new good ideas, and embrace the people offering to join us in actually doing the work to help make sure that we all succeed together.

thanks,

greg k-h

raverbashing a day ago

> The majority of bugs (quantity, not quality/severity) we have are due to the stupid little corner cases in C that are totally gone in Rust. Things like simple overwrites of memory (not that rust can catch all of these by far), error path cleanups, forgetting to check error values, and use-after-free mistakes. That's why I'm wanting to see Rust get into the kernel, these types of issues just go away, allowing developers and maintainers more time to focus on the REAL bugs that happen (i.e. logic issues, race conditions, etc.)

C committee, are you listening? Hello? Hello? Bueller?

(Unfortunately, if they are listening it is to make more changes on how compilers should take "creative licenses" in making developers shoot themselves in the foot)

  • TuxSH a day ago

    > error path cleanups, forgetting to check error values, and use-after-free mistakes

    C++ (ideally C++17 or 20, to have all the boilerplate-reducing tools) gives you the means to handle all of that, even in a freestanding environment.

    It's just that it's not enforced (flexibility is a good thing for evergreen/personal projects, less so for corporate codebases), and that the C++ committee seems to have weird priorities from what I've read (#embed drama, modules are a failure, concepts are being forced through despite concerns etc.) and treats freestanding/embedded as a second-class citizen.

okl 2 days ago

Seems to me that everyone is focused on the technical merits, without appropriately weighing the effort of learning a new programming language/toolchain/ecosystem that falls on the maintainers.

Mastering a new programming language to a degree that makes one a competent maintainer is nothing to sneeze at, and some maintainers might be unwilling to do so based on personal interests/motivation, which I'd consider a legitimate position.

I think it's important to acknowledge that not everyone may feel comfortable talking about their lack of competence or their disinterest.

  • oconnor663 2 days ago

    This is exactly the position Christoph Hellwig took in the original email chain that kicked off the current round of drama: https://lore.kernel.org/rust-for-linux/20250131075751.GA1672.... I think it's fair to say that this position is getting plenty of attention.

    • dralley 2 days ago

      The opposing view is that drivers written in Rust using effectively foolproof APIs require far less maintainer effort to review. Yes, it might be annoying for Christoph to have to document & explain the precise semantics of his APIs and let a Rust contributor know when something changes, but there is a potential savings of maintainer time down the line across dozens of different drivers.

      • jenadine 2 days ago

        > Yes, it might be annoying for Christoph to have to document & explain the precise semantics of his APIs and let a Rust contributor know when something changes,

        Doesn't he need to do that anyway for every user of his code?

        I guess the point is that he is able to review the code of every driver written in C using his API, but he can't review the Rust interface himself.

        • baq a day ago

          He doesn’t want to. He’s smart enough to be able to.

  • braiamp 2 days ago

    And sadly, those are going to die out eventually, so the faster we get there, the less potential for something breaking in a way that nobody would be able to figure out.

    • aldanor 2 days ago

      Who will, technical merits, programming languages or maintainers?

  • KerrAvon 2 days ago

    Acknowledged, but said maintainers need to learn to cope with the relentless advance of technology. Any software engineer with a long career needs to be able to do this. New technology comes along and you have to adapt, or you become a fossil.

    It's totally fine on a personal level if you don't want to adapt, but you have to accept that it's going to limit your professional options. I'm personally pretty surly about learning modern web crap like k18s, but in my areas of expertise, I have a multi-decade career because I'm flexible with languages and tools. I expect that if AI can ever do what I do, my career will be over and my options will be limited.

    • BlackFly a day ago

      To play devil's advocate: for every technology that comes along with an advancement, a handful come along with broken promises. People love to make fun of JavaScript for that, but the only difference there is the cadence. Senior developers know this, and know that the time and energy needed to separate the wheat from the chaff is exhausting. The advancements are not relentless; it is the churn that is.

      That being said, Rust comes with technical advances, and also with enough of a community that the non-technical requirements are already met. There should be enough evidence for rational but stubborn people to accept it as a way forward.

    • manmal 2 days ago

      Totally tangential, but since I just recently found this out: character-number-character abbreviations like k8s, a16z, and a11y mean that the 8/16/11 characters in the middle are replaced by their count. I was wondering why "kubernetes" would be such a long word when you wrote k18s. Maybe it was just a typo on your end, and this system is totally obvious.

anonnon 3 days ago

> > > > > David Howells did a patch set in 2018 (I believe) to clean up the C code in the kernel so it could be compiled with either C or C++; the patchset wasn't particularly big and mostly mechanical in nature, something that would be impossible with Rust. Even without moving away from the common subset of C and C++ we would immediately gain things like type safe linkage.

> > >

> > > That is great, but that does not give you memory safety and everyone

> > > would still need to learn C++.

> >

> > The point is that C++ is a superset of C, and we would use a subset of C++

> > that is more "C+"-style. That is, most changes would occur in header files,

> > especially early on. Since the kernel uses a lot of inlines and macros,

> > the improvements would still affect most of the existing kernel code,

> > something you simply can't do with Rust.

I have yet to see a compelling argument for allowing a completely new language with a completely different compiler and toolchain into the kernel while continuing to bar C++ entirely, when even just a restricted subset could bring safety- and maintainability-enhancing features today, such as RAII, smart pointers, overloadable functions, namespaces, and templates, and do so using the existing GCC toolchain, which supports even recent vintages of C++ (e.g., C++20) on Linux's targeted platforms.

Greg's response:

> But for new code / drivers, writing them in rust where these types of bugs just can't happen (or happen much much less) is a win for all of us, why wouldn't we do this? C++ isn't going to give us any of that any decade soon, and the C++ language committee issues seem to be pointing out that everyone better be abandoning that language as soon as possible if they wish to have any codebase that can be maintained for any length of time.

side-steps this. Even if Rust is "better," it's much easier to address at least some of C's shortcomings with C++, and it can be done without significantly rewriting existing code, sacrificing platform support, or incorporating a new toolchain.

For example, as pointed out (and as Greg ignored), the kernel is replete with macros--a poor substitute for genuine generic programming that offers no type safety and the ever-present possibility for unintended side effects due to repeated evaluation of the arguments, e.g.:

#define MAX(x, y) (((x) > (y)) ? (x) : (y))

One need only be bitten by this kind of bug once to have it permanently color one's perception of C.

  • mustache_kimono 3 days ago

    > Even if Rust is "better," it's much easier to address at least some of C's shortcomings with C++

    This simply forgets all the problems C++ has as a kernel language. It's really an "adopt a subset of C++" argument, but even that has its flaws. For instance, no one wants exceptions in the Linux kernel and for good reason, and exceptions are, for better or worse, what C++ provides for error handling.

    • anonnon 3 days ago

      > It's really an "adopt a subset of C++" argument, but even that has its flaws. For instance, no one wants exceptions in the Linux kernel and for good reason

      Plenty of C++ codebases don't use exceptions at all, especially in the video game industry. Build with GCC's -fno-exceptions option.

      > and exceptions are, for better or worse, what C++ provides for error handling.

      You can use error codes instead; many libraries, especially from Google, do just that. And there are more modern approaches, like std::optional and std::expected:

      https://en.cppreference.com/w/cpp/utility/optional

      https://en.cppreference.com/w/cpp/utility/expected

      • mustache_kimono 2 days ago

        > You can use error codes instead; many libraries, especially from Google, do just that. And there are more modern approaches, like std::optional and std::expected:

        Even if we are to accept this, we'd be back to an "adopt a subset of C++" argument.

        You're right in one sense -- these are more modern approaches to errors, adopted in 2017 and 2023 respectively (with years for compilers to implement...). But FWIW we should note that these aren't really idiomatic C++, whereas algebraic data types are a baked-in, 1.0 feature of Rust.

        So -- you really don't want to adopt C++. You want to adopt a dialect of C++ (perhaps the very abstract notion of "modern C++"). But your argument is much more like "C++ has lambdas too!" than you may care to admit. Because of course it does. C++ is the kitchen sink. And that's the problem. You may want the smaller language inside of C++ that's dying to get out, but C++'s engineering values are actually "we are the kitchen sink!". TBF Rust's values are sometimes distinct too, but I'm not sure you've really examined just how different C++'s values are from kernel C, and why the kitchen sink might be a problem for the Linux kernel.

        You say:

        > RAII, smart pointers, overloadable functions, namespaces, and templates, and do so using the existing GCC toolchain

        "Modern C++" simply doesn't solve the problem. Google has been very clear that Rust + C++ codebases have worked well. But the places where it sees new vulnerabilities are mostly in new memory-unsafe (read: C++) code.

        See: https://security.googleblog.com/2024/09/eliminating-memory-s...

        • IcyWindows 2 days ago

          Isn't "Rust without panics" a subset of Rust?

          • mustache_kimono a day ago

            > Isn't "Rust without panics" a subset of Rust?

            I'm not sure there is much in your formulation.

            It would seem to me to be a matter of program design, and programmer discretion, rather than a "subset of the language". Re: C++, we are saying "Don't use at least these dozen features, because they don't work well at many cooks scale, and/or they combine in ways which are non-orthogonal. We don't want you to use them because they complect[0] the code." Re: no panic Rust, we are saying "Don't call panic!(), because obviously you want a different program behavior in this context." These are different things.

            [0]: https://www.youtube.com/watch?v=SxdOUGdseq4

      • 112233 2 days ago

        And -fno-exceptions, while being de-facto standard e.g. in gamedev, still is not standard C++ (just look how much STL stuff in n4950.pdf is specified as throwing, much of it required for freestanding too (16.4.2.5)).

        And you cannot just roll your own library in a standard compliant way, because it contains secret compiler juice for, e.g. initializer_list or coroutines.

        And once you use your own language dialect (with -fno-exceptions), who is to stop you from "customizing" other stuff, too?

        • anonnon a day ago

          > And -fno-exceptions, while being de-facto standard e.g. in gamedev, still is not standard C++

          So? The Linux kernel has freely relied on GCC-specific features for decades, effectively being written in "GCC C," with it only becoming buildable with Clang/LLVM in the last two years.

          >(just look how much STL stuff

          No one said you have to use the STL. Game devs often avoid it or use a substitute (like EASTL) more suitable for real-time environments.

          • 112233 a day ago

            > So? The Linux kernel has freely relied on GCC-specific features for decades

            That is unironically admirable. Either they have their man on the GCC team, or they have been fantastically lucky. In the same decades there have been numerous GCC extensions and quirks that were removed [edit: from the GCC C++ compiler] once a new standard proclaimed them non-conformant.

            So, which C++ dialect would provide tangible benefits to the freestanding, self-modifying codebase that is the Linux kernel, without bringing enough problems to outweigh it all completely?

            RAII and templates are nice, but they come at the cost of making code multiple orders of magnitude harder to reason about. You cannot "simply" add C++ to sparse/coccinelle. And unlike Rust, a C++ compiler does not really care about memory bugs.

            I mean, the c++ committee introduced "start_lifetime_as", effectively declaring all existing low-level c++ programs invalid, and made lambdas that by design can capture references to local variables then be passed around. Why would you set yourself up to have rug pulled out on the next C++ revision if you are not forced to?

            C++ is a disability that can be accommodated, not something you do to yourself on purpose.

            • TuxSH a day ago

              > I mean, the c++ committee introduced "start_lifetime_as", effectively declaring all existing low-level c++ programs invalid

              Did it? Wasn't that already the case before P2590R2?

              And yes, a lot of the C++ lifetime model is insanity (https://en.cppreference.com/w/cpp/language/lifetime). Fortunately, contrary to the committee, compiler vendors are usually reasonable folks allowing needed low-level idioms (like casting integer constants to volatile ptr) and provide compiler flags whenever necessary.

              • 112233 20 hours ago

                Thank you for the correction! Indeed, the "magic malloc" part (P0593R6, a heroic effort by the way) looks to have gone in earlier into C++20. As you say, no sane compiler was affected by that change, the committee like a boss went in, saw everyone working, said "you all have our permission to continue working" and left.

    • EasyMark a day ago

      Isn't that why you pick a particular subset and exclude the rest of the language? It should be pretty easy to avoid using try/catch, especially in the kernel. A subset of C probably doesn't make much sense, but for C++, which is absolutely gigantic, it shouldn't be hard. Getting programmers to adhere to it could be handled 99% of the time with a linter; the other 1% can be caught by code reviewers.

      • mustache_kimono a day ago

        > isn't that why you pick a particular subset, and exclude the rest of the language?

        If the entire natural inclination of the language is to use exceptions, and the alternatives only began arriving with C++17 and C++23, I'm less sure that it's the just-right fit some think it is.

        > Getting programmers to adhere to it could be handled 99% of the time with a linter, the other 1% can be code by reviewers.

        What is the tradeoff being offered? Additional memory safety guarantees, but less good than Rust, for a voluminous style guide to make certain you use the new language correctly?

        • anonnon 11 hours ago

          > If the entire natural inclination of the language is to use exceptions, and you don't, beginning with C++17 and C++23

          I've personally written libraries targeting C++20 that don't use exceptions. Again, error codes, and now std::optional and std::expected, are reasonable alternatives.

          > What is the tradeoff being offered? Additional memory safety guarantees, but less good than Rust, for

          It's not letting the perfect be the enemy of the good. It's not having to rewrite existing code significantly, or adopt a new toolchain, or sacrifice support for any platform Linux currently supports with a GCC backend.

  • TinkersW a day ago

    Yeah, it's rather baffling; it would be a solid improvement, and they can easily ban the parts that don't work for them (exceptions/STL/self-indulgent template wankery).

    On a memory safety scale I'd put C++ about 80% of the way from C to Rust.

  • bsder 3 days ago

    > For example, as pointed out (and as Greg ignored), the kernel is replete with macros--a poor substitute for genuine generic programming that offers no type safety and the ever-present possibility for unintended side effects

    I never thought I would say that C++ would be an improvement, but I really have to agree with that.

    Simply adopting the generic programming bits with type safety without even objects, exceptions, smart pointers, etc. would be a huge step forward and a lot less disruptive than a full step towards Rust.

    • rincebrain 3 days ago

      At this point, I think that would be a misstep.

      I'm not sure I have an informed enough opinion of the original C++ debate, but I don't think stepping to a C++ subset while also exploring Rust is a net gain on the situation. It has the same kinds of caveats that people who are upset at R4L complain about (muddling the waters), while also being almost entirely new and untested if introduced now[1].

      [1] - I'm pretty sure some of the closed drivers that do the equivalent of shipping a .o plus a compiled shim layer have C++ in them somewhere sometimes, but that's a rounding error in terms of complexity and testing compared to the entire tree.

orf 3 days ago

Christoph Hellwig seems fun to interact with. He drive-by posts the same, repeated points and seemingly refuses to engage with any replies.

  • dang 2 days ago

    Please don't cross into personal attack in HN threads.

    I'm not saying it's never accurate*, it's just that, if you evaluate them through the site guidelines, the cost/benefit is negative.

    https://news.ycombinator.com/newsguidelines.html

    * (not a comment on this or any person)

  • rendaw 3 days ago

    AFAICT his only response in that thread:

    > Right now the rules is Linus can force you whatever he wants (it's his project obviously) and I think he needs to spell that out including the expectations for contributors very clearly.

    >

    > For myself I can and do deal with Rust itself fine, I'd love bringing the kernel into a more memory safe world, but dealing with an uncontrolled multi-language codebase is a pretty sure way to get me to spend my spare time on something else. I've heard a few other folks mumble something similar, but not everyone is quite as outspoken.

    He gets villainized, and I don't think all his interactions were great, but this seems pretty reasonable and more or less in line with what other people were asking for (clearer direction from Linus).

    That said, I don't know, maybe Linus's position was already clear...

    • buttercraft 3 days ago

      Maybe, but "spreads like cancer" is not part of a well-reasoned technical discussion, but of an emotional one.

      • sph 3 days ago

        In many languages, like Italian, of which I am a native speaker, to "spread like a cancer" doesn't have the negative subtext of the English idiom. It just means it spreads wildly, uncontrollably. In English it gets muddled with the very negative idiom of "being a cancer," i.e., being very bad if not fatal.

        • indrora 3 days ago

          I think it's because in English-speaking places (I'll say "the US and some rounding errors" to be explicit), cancer was for a long time a death sentence. This led to anything that is hard to kill being called cancerous, and to the avoidance of such things being seen as important (yes, this is where you chuckle and mime smoking a cigarette; there's still a population of the US that believes "smoking causes cancer" is a conspiracy by Big Pharma to push more cancer treatments or some bullshit like that).

          Calling something "cancerous" is to say it was an incurable disease that unless stamped out with some amount of precision will continue to cause rot and decay. Be it correct or not, saying "The cancer that is killing HN" is pointing a finger at a problem and scapegoating all the other problems onto it.

        • plagiarist 2 days ago

          Like how "going viral" is not really the negative connotation that one might expect?

      • TheFuzzball 3 days ago

        You're confusing language that causes a strong emotional response within you, with language that was written by a person experiencing strong emotion.

        It's colourful language for sure, but gimme a break.

        • Dylan16807 3 days ago

          Building part of an "emotional discussion" doesn't require the author to be experiencing particularly strong emotions as they write it.

          Not that you have evidence of the author's state of mind?

          I don't think the confusion you describe is happening.

        • hitekker 3 days ago

          That's a good distinction, and it pretty much captures the exchange. Both sides felt quite strongly; Hellwig used strong words. But that doesn't mean either side was unreasonable, despite some of us commenters being discomforted.

stackedinserter a day ago

What's in Rust that creates drama in every project where it's used?

  • bmicraft a day ago

    The part of the old guard that's hellbent on never learning anything new again, ever.

    • stackedinserter a day ago

      Do you think that they don't want rust code in LK just because they don't want to learn "anything new again ever"?

      • bmicraft 20 hours ago

        Parent wasn't specifically talking about the kernel, but yes. The ones complaining in this case explicitly argued against it because they don't know it and don't want to learn it.

dboreham 2 days ago

I think it's becoming apparent that any attempt to progressively rewrite a large codebase into a new language is always going to fail. It needs to be done ground-up, new.

  • fulafel a day ago

    Which progressive rewrite attempts do you have in mind?

  • baq a day ago

    Ummm it’s actually exactly the other way around if the code is alive: incremental is the only possible way to keep up.

oguz-ismail a day ago

Aren't these people tired of shaving that yak already? I wish they rather focused on making one (1) decent distro for desktop use.

  • bmicraft a day ago

    Kernel developers aren't generally in the business of creating distros

cynicalsecurity 2 days ago

Rust changes every few months. It's simply not a mature language or people behind it have no idea what they are doing.

  • jenadine 2 days ago

    > Rust changes every few months.

    No it doesn't.

    Quite the contrary: great care is taken so that the language stays stable. "Stability without stagnation" is one of Rust's core principles.

  • saagarjha 2 days ago

    It turns out that there are always things to improve. You can decide to ignore those improvements for 50 years too but then people generally don’t want to use your language anymore.

  • oconnor663 2 days ago

    If you haven't been maintaining any Rust code, you might have the impression that breaking changes are far more common than they really are. Rust has about as many breaking changes as Go, probably fewer? (Because Go lacks an edition mechanism.)

    • bmicraft 20 hours ago

      Which means that rust doesn't have any, does it? Since existing editions will (knock on wood) never change they should never break...

      • estebank 18 hours ago

        Yes, except for fixes to soundness bugs (very rare in the past several years) and changes to the stdlib that might interact poorly with type inference in existing code (the time 0.3.5 issue, a change that broke existing code because that code was technically already "broken" and exercising a future-compat footgun; breaks like these should remain about as unusual).

  • EasyMark a day ago

    It's not the rust of 8-10 years ago, it's quite stable as a language now, and backward compatibility is stellar.

mimd 3 days ago

Isn't this the bait and switch that all the C kernel devs were complaining about? That it wouldn't be just drivers but also all new kernel code? The lack of candor over the goal of R4L and the downplaying of other potential solutions should give any maintainer (including potential Rust ones) pause.

Anyway, why stop at Rust? If we really care about safety, let's drop the act and make everyone do formal methods. Frama-C is at least C, has a richer contract language, has heavy static-analysis tools before you ever have to go to proofs, is much more proven, and the list goes on. Or, why not add SPARK to the codebase if we are okay with mixing languages in the codebase? It's very safe.

  • AlotOfReading 3 days ago

    Frama-C doesn't actually prove memory safety and has a huge proof hole due to the nature of UB. It gives weaker guarantees than Rust in many cases. It's also far more of a pain to write. The Frama-C folks have been using the kernel as a testbed for years and contributing small patches back. The analysis just doesn't scale well enough to involve other people.

    Spark doesn't have an active community willing to support its integration into the kernel and has actually been taking inspiration from Rust for access types. If you want to rustle up a community, go ahead I guess?

    • mimd a day ago

      No, it can track pointer bounds and validity across functions. It also targets identifying cases of UB via Eva. Both Rust and Frama-C rely on assertions about low-level memory functions. Rust has the same gaping UB hole in unsafe that can cross into safe code.

      If we are talking about more than memory, such as what Greg is talking about in encoding operational properties, then no, Rust is far behind Frama-C, SPARK, and tons of others, which can prove functional correctness. Or do you think Miri, Kani, Creusot, and the rest of the FM tools for Rust are superfluous?

      My mocking was that the kernel devs have had options for years and have ignored them out of dislike (Ada and SPARK) or lack of effort (Frama-C); that other options provide better solutions to some of their interests; and that this is more an exercise in getting new kernel blood than one in technical merit.

  • gizmondo 2 days ago

    For it to be bait and switch someone should've said "Rust will forever be only for drivers". Has anyone from the Linux leadership or R4L people done that? To my knowledge it has always been "for now".

    • mimd 2 days ago

      "But for new code / drivers..." encompasses more than just "drivers" and refers to all new code. I doubt it's a mistake either due to the way the rest of the email is written. And Greg said "no one sane ever thought that (force anyone to learn rust)" just 5 months ago (https://lkml.org/lkml/2024/8/29/312). But he is now telling his C devs they will need to learn and code rust to make new code in the kernel.

      • zozbot234 2 days ago

        > But he is now telling his C devs they will need to learn and code rust to make new code in the kernel.

        I don't think this is accurate, Rust is still totally optional. Also, the Rust folks are supposed to fix Rust code whenever it breaks due to changes on the C side. If they fail to do this, the newly-broken Rust code is supposed to be excluded from the build - up to and including not building any Rust at all.

        • mimd 2 days ago

          Yes, I know the policy that has been stated and the recent docs on it (https://rust-for-linux.com/rust-kernel-policy). A good portion of the R4L group were trying to avoid this scenario due to the toxicity of such a change (especially at this early, contentious point, and despite what their online advocates want). I don't think the policy will immediately change because of his statements, but I find it pretty clear that this is where he wants the project to go.

  • oconnor663 2 days ago

    I'm no kernel dev, but I assume that DMA bindings (what this round of drama was originally all about) fall squarely into "stuff that drivers obviously need".