That's a nice trick, but unlike function statics, it is susceptible to SIOF. This kind of optimization is useful only on extraordinarily hot paths, so I wouldn't generally recommend it.
> On ARM, such atomic load incurs a memory barrier---a fairly expensive operation.
Not quite: it is just a load-acquire, which is almost as cheap as a normal load. And on x86 there's no difference.
One thing both GCC and Clang seem to be quite bad at is code layout: even in the example in the article, the slow path is largely inlined. It would be much better to have just a load, a compare, and a jump to the slow path in a cold section. In my experience, in some rare cases reimplementing the lazy initialization explicitly (especially when it's possible to use a sentinel value, thus doing a single load for both value and guard) did produce a noticeable win.
> That's a nice trick, but contrary to function statics, it is susceptible to SIOF.
For those (like me) who don’t recognize that abbreviation, “The static initialization order fiasco (ISO C++ FAQ) refers to the ambiguity in the order that objects with static storage duration in different translation units are initialized in” (https://en.cppreference.com/w/cpp/language/siof.html)
Yes, thanks for the clarification. What I probably should have said is that the trick is basically syntactic sugar for declaring a scoped global static variable, and as such it inherits all the problems of global static variables.
FDO/PGO seem to really improve optimizations for hot/cold functions. I wonder if it does the kind of thing you're suggesting.
Not with any of the Clang versions I tried, but last time I checked it was a couple of years ago.
Funnily enough, I recently wrote my own hack using this linker feature in C, to implement an array of static counter definitions that can be declared anywhere and then written out (e.g., to Prometheus) in one place.
Note that, as I later found out, this doesn't work with macOS's linker, so you need some separate incantations for macOS.
I wrote a portable abstraction for this that works across Linux, MacOS, and Windows: https://github.com/protocolbuffers/protobuf/blob/4917ec250d3...
I call them "linker arrays". They are great when you need to globally register a set of things and the order between them isn't significant.
The rabbit hole I just went down is called C/C++ Statement Expressions [1], which are a GCC extension:

    #define FAST_STATIC(T) \
        *({ \
            /* statements separated by semicolons */ \
            reinterpret_cast<T *>(ph.buf); /* the last statement is the value of the whole expression */ \
        })

The reinterpret_cast<T*>(...) statement is a conventional C++ expression statement, but when enclosed in ({ ... }), GCC treats the whole kit and caboodle as a statement expression that yields a value.

There is no standard C equivalent, but in C++, since C++11, you can achieve the same effect with a lambda:

    auto value = [](){ return 12345; }();

As noted in the linked SO discussion, this is analogous to a JS Immediately-Invoked Function Expression (IIFE).

[1] https://stackoverflow.com/questions/76890861/what-is-called-...

Lambdas are not fully equivalent: a return statement in a statement expression returns from the enclosing function, whereas a return statement in a lambda only returns from the lambda.

Just for clarity, since I didn't understand on first reading:

    int foo() {
        int result = ({
            if (some_condition)
                return -1; // returns from foo(), not just from the statement expression
            42;
        });
        // this line is never reached when some_condition is true
        return result;
    }
>Even after the static variable has been initialised, the overhead of accessing it is still considerable: a function call to __cxa_guard_acquire(), plus atomic_load_explicit(&__b_guard, memory_order::acquire) in __cxa_guard_acquire().
No. The lock calls are only done during initialization, in case two threads run the initialization concurrently while the guard variable is 0. Once the variable is initialized, this will always be skipped by "je .L3".
Right, I was scratching my head for exactly that reason too. Even if the analysis were correct, it would still be a solution to a problem that doesn't exist.
The way block-scope statics are handled in C++ is a mistake. Block-scope statics that don't depend on any non-static local variables should be initialized when the program starts up. E.g.:

    void fun(int arg)
    {
        static obj foo(arg); // delayed until the function is called (depends on arg)
        static obj bar;      // could be inited at program start (no dependency on arg)
    }

In other words, any static that can be inited at program startup should be, leaving only the troublesome cases that depend on run-time context.

I think that would cause some surprising behavior changes for certain code changes, not something that C++ (or any language) needs more of.
Which is why it should have been sorted out decades ago.
In theory I agree, but in practice this just increases compile time by a lot, and we need to be able to manually toggle the code that gets executed at compile time.
What?
Edit: what?
When every eligible piece of code becomes automatically constexpr, as you suggest, the compile time will just balloon. The code still needs to be compiled and executed, just all at once now. Optimization is one of those annoying problems for which we currently don't have the compute to fully bruteforce it. We need to be selective with which code is marked constexpr.
> constexpr, as you suggest
I made no mention of constexpr.
Program start-up isn't compile time.
The idea is that since "static obj bar()" doesn't depend on anything in the function, it could in principle be moved outside of the function. So in actual fact, it can be treated that way by the loading semantics of the program (can be constructed without the function having to be called), except that the name bar is only visible inside the function.
I don't understand why C++ wouldn't have specified it this way going back to 1998, but that's just me.
Why not just use constexpr in the second case?
What if the object is to be mutable?
You use constinit. But this requires a constexpr constructor (easy with two-phase init, not too much of a problem for singleton objects) and trivial destruction.
Does that guarantee that the static object is constructed before the first execution of the block?
Yes, it will be constructed at compile time. Not everything can be constinit.
TIL about encapsulation symbols.
Why not just use constinit (if applicable), construct_at, or lessen the cost with -fno-threadsafe-statics?
Looks similar to absl::NoDestructor
https://github.com/abseil/abseil-cpp/blob/master/absl/base/n...
Which is basically the only usage of std::launder I have seen
`NoDestructor` just ensures that the destructor is not called on the wrapped object, but you still need to manage the lifetime. If you look at the example, its recommended usage is with a function static. In other words, it's a utility to implement leaky Meyers' singletons.
std::launder is a bit weird here. Technically it should be used every time you use placement new but access the object by casting the pointer to its storage (which NoDestructor does). However, very little code actually uses it. For example, every implementation of std::optional should use it? But when you do, it actually prevents important compiler optimizations that make std::optional a zero-cost abstraction (or it did last time I looked into this).
std::launder should probably be used more than it is in low-level code if you care about correctness, even though it doesn’t always bite you in the ass. It is a logical no-op. std::launder is a hint to the compiler to forget everything it thinks it knows about the type instance, sort of like marking it “volatile” only for a specific moment in time.
The use of std::launder should be more common than it is, I’ve seen a few bugs in optimized builds when not used, but compilers have been somewhat forgiving about not using it in places you should because it hasn’t always existed. Rigorous code should be using it instead of relying on the leniency of the compiler.
In database engine code it definitely gets used in the storage layers.
FTA:
“Dynamic initialization of a block-scope variable with static storage duration or thread storage duration is performed the first time control passes through its declaration
[…]
this would initialise everything correctly: by the time foo() is called, its b has already been initialised.”
I guess that means this trick can change program behavior, especially if the function containing the static is never called in a program’s execution.
> For this we need a certain old, but little-known feature of UNIX linkers
STOP WRITING NON-PORTABLE CODE YOU BASTARDS.
The correct answer is, as always, “stop using mutable global variables you bastard”.
Signed: someone who is endlessly annoyed with people who incorrectly think Unix is the only platform their code will run on. Write standard C/C++ that doesn’t rely on obscure tricks. Your co-workers will hate you less.
Author here. This is part of SPDK-based server code. Portability outside of UNIX (mostly Linux) is entirely irrelevant. By the way, the global variables here are immutable after initialisation; that's the point.
Every time someone ships successful code that's hard to port to Windows the world becomes a better place.
> Every time someone ships successful code that's hard to port to Windows
Until your boss tells you to port your so-far Linux-only code to Windows, and you face that struggle.
Signed, someone who spent the past year or so porting Linux code to Windows and macOS because the business direction changed and the company saw what was the money-maker.
P.S. Not the parent commenter, because I just realised they, too, had a paragraph beginning with 'signed, ...'
Can you setup Windows to install WSL if it isn't there yet and then set it up, in a Windows installer?
We actually tried that at first.
The technical requirement of installing WSL before installing our software was already a non-starter, since most Windows users expect one installer or zip to do it all. WSL2 isn't (yet) like a C/C++/DirectX redistributable that can be plugged in as a dependency of a given program. Additionally, our program is expected to work natively with Windows paradigms.
More critically, we work with high-performance filesystems. The performance impact of files going a round-trip through the Linux Plan9 driver, then through a Linux kernel context switch, into the kernel, and down into Hyper-V, and then up back through the Windows Plan9 driver was completely unacceptable. It was deemed worthwhile to rewrite targeting Windows natively. And even then it was only a partial rewrite: we ended up using MinGW because we had too much of a direct dependency on the pthread API.
Forcing Windows users to use WSL is generally a non-starter. That's just not how things work. You can't force someone to change their entire execution environment without consequences.
If you only support WSL then you don’t support Windows, imho.
I see where you are coming from, but programs also depend on an SQL server, a Python installation, or a Java instance. You also don't complain about device drivers, support for filesystems, the network stack, and hooks into Windows Explorer.
In the end it is just part of the OS and a bunch of extra userspace programs. I mean nobody complains about the Windows Subsystem for Win32.
But yeah, you can just use a non-MS GNU/Windows implementation instead, do you like that better?
Is it possible though? Is it possible to have isolated WSLs (per program)?
> Is it possible though? Is it possible to have isolated WSLs (per program)?
Maybe. But my experience is that there is very little “program code” and it’s mostly “library code”.
And if you did have a program that required WSL and you followed the UNIX model of bash chaining programs then you’re now mandating the “meta program” be run under WSL.
I treat WSL as a hacky workaround for dealing with bad citizens. It's not a desirable target environment. It exists as a stopgap until someone does it right. YMMV.
> But my experience is that there is very little “program code” and it’s mostly “library code”.
Sorry I am confused, what does that refer to?
> you’re now mandating the “meta program” be run under WSL
As long as bash and the tools are in path, can't you run any program normally?
> WSL as a hacky workaround for dealing with bad citizens
Yes, but some parts are out of scope for C and you need to target the OS. Also, e.g., passing around file descriptors and sockets is convenient.
Looks like you're complaining about too much job security. Also, it's your choice to accept that task and make the world a slightly worse place again.
Attitudes like this are the absolute worst. Fellow HN readers, don’t be like this person.
I'm not sure what to think. What argument do you have for your position?
Windows and *nix are different platforms. Writing crossplatform code is pretty easy and a solved problem. At least 99.5% of code can be platform agnostic. Only a few tiny bits are special.
My background is video games. Which is well known to be a Windows-first environment. I currently work on robotics. The vast majority of robotics ecosystem code is Unix only.
You know what is extremely useful for teleoperation? Virtual reality. You know what platform doesn’t have a way to do VR currently? Linux. Womp womp sad trombone. Mistakes were made.
It’s really easy and not hard to write crossplatform code. My experience is that Linux devs are by far the most resistant to this. It’s very annoying.
Should you care about Windows? I certainly think so. Linux still doesn't have a good debugger (no, gdb/lldb aren't good). Quite frankly, every Linux dev would be more productive if they supported Windows, where debuggers exist and are decent. So really they're just shooting themselves in the foot. IMHO.
There are probably international traders who can tell a similar story with Linux replaced with "the US" and Windows replaced with "Russia", but I wouldn't consider that similar story to be an argument for ending the sanctions on Russia.
However, there is an important difference between the sanctions on Russia and the strategy of the pro-Linux "activist" that started this thread: namely, Windows is so heavily entrenched in the niche of enterprise IT that there is no significant chance of Linux's replacing it in that niche with the result that there is no realistic chance of a positive effect of this activism that might cancel out the negative effect you describe. So, I am tentatively in agreement with you.
You're complicit with Windows living on, being long overdue in the landfill of computing history.
Disrespectfully disagree.
I tell my coworkers, "Hey, we need this coded up as a Windows service!" and I get crickets.
So I spin up a Debian VM and POSIX the hell out of it. If they dare to complain, I tell 'em to do their damn jobs and not leave all the hard stuff to the guy that only programs on UNIX.
To be fair to your coworkers, coding a Windows service and setting up logging for it is surprisingly complicated. I'm the only person at my place of work who can do it, and even then only if I can use a compile-to-native language or .NET.
Their tasks would be less hard if the UNIX guy would stop writing non-portable POSIX code =P
That's what I know. I wasn't hired to program Windows machines, I was hired to program PLCs and SCADA systems. But every now and then something best done in .NET comes up and none of the Windows guys can get off their butts to do it. So then they get to whine about having to deal with Linux and I get to tell them to suck it up.
Sure, I could learn to program on Windows... Or I could pick up another PLC or SCADA platform that looks good on my resume. Guess which I choose to do?
I have nothing nice to say in response. Good luck!