
Why does everything gets removed here? by o82 in golang

[–]jerf 9 points

And... honestly, looking ahead a year or two into the future... I don't know if this is winnable.

The entire point of Reddit in general is for humans to speak to humans.

I don't say that because I am some sort of fundamentalist anti-AI person (although if you feel that way, I have no objection to you, we may all be joining you before this is over). I say that because if you want to speak to an AI, you're way better off doing that through the chat AI interface we've all seen. If you're going to speak to an AI you might as well do it as a back-and-forth in real time rather than waiting for an AI spammer to delay long enough to pretend it's not a machine. And then you avoid all the other dishonest manipulations they do to pretend to be human. Just speak to an AI directly. Reddit is a terrible interaction interface for a human to speak to an AI. It was built for human-to-human interaction.

But... dang, man. We're to the point that it would itself be a "small project" to write a bot that not only posts spam to /r/golang, but accompanies it with an entire GitHub repo implementing yet another reverse HTTP proxy or load balancer... and it could do this every hour, without cease, blocked only by someone's willingness to spend the dough for the tokens. It's so easy to build a spam bot now that you can almost accidentally build one with whatever moltbot is called now just by obliquely referencing a desire that the bot decides might be fulfilled by such a thing.

Reddit doesn't really have any value if it's not humans speaking to humans. AIs speaking to AIs is just noise (at least as long as they're all on an LLM architecture that can't really learn anything once a context window fills) and humans speaking to AI is just a roundabout and deceptive interface, as described above. It really ought to be for humans to speak to humans. But I have no idea how to make that happen in another year or two of AI development, where spamming becomes so easy that spammers hardly need a reason to do it anymore, because it's as easy as sneezing, and they're that much better at passing as humans.

Why does everything gets removed here? by o82 in golang

[–]jerf[M] 38 points

If I had to guess, what you saw as a "discussion post" that was removed was a post from a karma-farming bot account. We're getting one or two posts a week from bots that post highly-general "Gee, golly, Go is pretty great, I used it to do $GENERIC_GO_THING and it was so much better than (usually Node but sometimes some other common scripting language), hey guys isn't Go great although I had this one problem with it, what do you use Go for?"

Then, if you go to the user posting that, they're posting similar things across dozens of subreddits, about one or two a day. The Go code with generic comments about Go also has generic comments about rock climbing equipment, how much they love some specific anime in a very generic way, a couple of actively contradictory political opinions, posts about knitting, football, a couple of reddits for certain geographical locations, a fetish or two, just dozens upon dozens of unrelated subreddits, all completely generic "Hey I love thing except for this one common controversial topic that my algorithms have shown yield engagement, what do you all think?" posts.

Those are getting removed for two reasons:

  1. They're spam.
  2. I find it offensive to use AIs, with their large and ever-growing output capabilities, to sink human attention like that. Valuable human attention burned to farm Reddit karma is a super crappy deal for the world.

Things removed by the mods do get a reason posted. (With occasional rare exceptions, which are mostly the posts so baffling I don't even know what to say, like a recent post of the form "RCERHHHHHHHH CTED". Why the rather twitchy Reddit filters didn't see a problem with that from a brand-new account is beyond me.) If you don't see a reason posted, it may have been removed by Reddit's own spam filters. They're clearly trying to fight these guys too... and they're clearly hitting some real accounts in the crossfire sometimes, because this gets pretty hard to pick up on. They definitely have stuff that will remove a post after it has been up and available for a bit.

The other thing is, as other posters are pointing out, the flood of projects that prompted this post (and note [Deleted] means the author removed it, or maybe Reddit, but mods didn't) has, if anything, gotten worse. I'm tempted to route all projects to an approval process at this point, which would mitigate the bulk of the things that are up for a bit and disappear. I don't want to do that, but at the same time, similar to reason 2 above, "I pushed a few things into a text box and turned it into code, may I please sink your valuable human attention into 'reviewing' the output of my AI?" is not a good deal for the community.

(An interesting note about /r/golang: Those bots I mentioned up top, when I look at their posts in other subs, they're often +50-ish. Here they're pinned to zero, pick up a report or two, and have a top-voted comment complaining about the AI. For better or worse, /r/golang is hyper sensitive to AI posting, way ahead of the curve of other communities picking up on it. Perhaps too sensitive. There's been some false positives. But I do find the difference interesting.)

The error handling bugs that worry me aren't the ones that crash by ___oe in golang

[–]jerf 0 points

Why not?

Because of what I said. A mutable language means that if a process crashes you do not know what state the rest of the system is in. To take a traditional transactional example, you may have done the deduction from one account but crashed before you did the addition to another, orphaning the money in the transaction. A defer may result in a lock being released in a panic but that doesn't mean that the lock was supposed to have been released, unless you are careful about how you use them.
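To make the hazard concrete, here's a contrived Go sketch (the bank, names, and amounts are all invented for illustration): the deferred Unlock dutifully releases the lock during the panic, so the program keeps running, but the invariant the lock was protecting is already broken.

```go
package main

import (
	"fmt"
	"sync"
)

// Bank is a toy shared-state example, not anyone's real code.
type Bank struct {
	mu       sync.Mutex
	accounts map[string]int
}

// Transfer deliberately panics partway through: the deferred Unlock still
// runs, so the lock is released -- but the money has left "from" and never
// arrived at "to".
func (bk *Bank) Transfer(from, to string, amt int) {
	bk.mu.Lock()
	defer bk.mu.Unlock()
	bk.accounts[from] -= amt
	if amt > 50 {
		panic("simulated crash mid-transaction")
	}
	bk.accounts[to] += amt
}

func main() {
	bk := &Bank{accounts: map[string]int{"a": 100, "b": 0}}
	func() {
		defer func() { recover() }() // "just crash and carry on", Erlang-style
		bk.Transfer("a", "b", 60)
	}()
	// The lock is free again, but the invariant (total == 100) is broken.
	fmt.Println(bk.accounts["a"] + bk.accounts["b"]) // 40
}
```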

Erlang is built to be much better at that.

But if you're going to go "but what if in Erlang you...", the answer is: yes, Erlang is only better at it. "Just crash" has its limits in Erlang, too. But the limits are even sharper and stronger in an imperative language with a conventional shared memory space. You cannot blindly translate guarantees Erlang makes out of the context it makes them in.

Do you actually check the error for crypto/rand.Read? by Existing-Search3853 in golang

[–]jerf 1 point

Conformance to the io.Reader interface. There are several other similar calls that can't fail but have an error in them to conform to the interface, such as bytes.Buffer's Write method. It can't fail (even running out of memory, IIRC, is not an error) but it has the error so it conforms to io.Writer.

If functions could be overloaded in Go we could have a .Write([]byte) int and a .Write([]byte) (int, error) but we don't.

Meetup Go & Robotics in Arcueil (south Paris) 11th march by golangparis in golang

[–]jerf[M] 1 point

Can you please edit your text and use normal-sized text? This becomes an arms race where everyone starts using big text if it's not stopped. (I'd remove this and ask you to repost but the Reddit spam algorithm gets understandably frosty about the repost.)

Singleton with state per thread/goroutine by SnooSongs6758 in golang

[–]jerf 15 points

Database transactions are an exception and you don't really get a choice. You can't isolate database transactions in components. All the components have to know about transactions and it's just something you have to deal with.

The problem is, transactions often don't map onto any other structure in your architecture. They cut across structured programming boundaries. They cut across thread boundaries. They cut across hexagonal components. They cut across the layers I use in my own architecture. They cut across everything. I have never seen a "proper" architecture solution to this that doesn't break under some perfectly reasonable scenario that a design simply has to accommodate.

The error handling bugs that worry me aren't the ones that crash by ___oe in golang

[–]jerf 0 points

Erlang is designed to be relatively safe when a process crashes. I say "relatively" because it still isn't safe. But with immutable variables all locked within their processes, and the "ports" system making it so that you can reliably tie cleanup of things like sockets to other processes crashing, it is a lot safer to just crash.

As a fairly standard imperative mutable language with threading, those arguments don't apply as well to Go. The runtime's default response to an escaping panic that isn't recovered before it gets to the top of the call stack is to terminate the program, for a reason. A recover call in Go isn't just a statement that you don't want the whole process to crash, it is a statement to the runtime that it is safe to continue on: that whatever resources needed cleaning up have been cleaned up, that locks are not being spuriously held by the crashed goroutine, and so on.

You can't take Erlang's philosophy and translate it out to languages that don't have its accommodations for crashing and expect the same results.

Is it good practice to use sort package in Golang while leaning DSA? by SuitableArcher3732 in golang

[–]jerf 0 points

Is anyone still asking Leetcode questions for high-end tech jobs?

It's been stupid for a long time but in an era of "Claude give me a generic trinary tree sorted by an arbitrary field in the data type with code written in COBOL and topped with a cherry and including a sonnet about the beauty of trees in the comments", it's even stupider.

I'm trying to create a map that hold two data types by PeterHickman in golang

[–]jerf 0 points

See the code I linked in my other comment.

I believe the idea is that they don't want a map per type. In the use case I'm talking about, at the point in time when I add this map I don't know what types will be put in it.

I'm trying to create a map that hold two data types by PeterHickman in golang

[–]jerf 1 point

Not the OP, but the use case I had was a way to share information between a whole bunch of loosely-connected plugin-like components (think dozens, not just a couple), where I wanted them to have "a way to share data" that was still strongly typed but didn't require the machinery providing the map itself to need to know all the types it would contain in advance.

I'm trying to create a map that hold two data types by PeterHickman in golang

[–]jerf 6 points

First, the pipe operator is not a sum type. You aren't really getting a "give me a string or a float64" with that. You can ignore the "what to do instead" part of that blog post because you're not doing the usual thing with the pipe, and I can instead directly give you a map that has multiple value types, yet is type-safe. And has a reasonably convenient interface. You do have to create new types for the keys, but I think in practice that's not so bad.
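One way to sketch such a map (my own illustration of the general idea, not necessarily the linked code): give each key a type parameter recording its value type, so the type assertion happens exactly once, inside the accessor, and call sites stay fully typed. Get and Set are free functions because Go methods can't introduce their own type parameters.

```go
package main

import "fmt"

// Key is a typed handle into the map; its type parameter records
// what value type this key stores. (Hypothetical names throughout.)
type Key[V any] struct{ name string }

// Bag holds values of many types behind one map.
type Bag struct{ m map[any]any }

func NewBag() *Bag { return &Bag{m: map[any]any{}} }

func Set[V any](b *Bag, k Key[V], v V) { b.m[k] = v }

func Get[V any](b *Bag, k Key[V]) (V, bool) {
	v, ok := b.m[k]
	if !ok {
		var zero V
		return zero, false
	}
	// The only type assertion in the whole scheme; it cannot fail,
	// because Set for this key only accepts V.
	return v.(V), true
}

func main() {
	var (
		Name  = Key[string]{"name"}
		Count = Key[int]{"count"}
	)
	b := NewBag()
	Set(b, Name, "gopher")
	Set(b, Count, 42)
	s, _ := Get(b, Name)  // s is a string, no assertion at the call site
	n, _ := Get(b, Count) // n is an int
	fmt.Println(s, n)
}
```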

Yaml Parser for go or any languages built from the the YAML 1.2 spec by [deleted] in golang

[–]jerf 0 points

I'm just emoting here, can't rigorously prove this feeling, but this doesn't feel stable enough to use, being in a directory called "gen". That probably applies across the board to all the languages. I think I would like some sort of versioning scheme that is documented. (Which is going to cause you some pain because the languages have different ways of versioning.) Plus we'd expect it to be usable from a GitHub reference, so, with a package name that isn't "main" and doesn't have a main function in it.

Also you're going to need to work with some target languages, which may or may not include Go, to get these included into the libraries we actually use, like https://github.com/goccy/go-yaml . I think most languages have something more useful wrapped around the parsers than raw tokens and as such I would not use this directly versus one of the existing libraries... and, again, I seriously doubt this is Go-specific feedback, I expect this is true of most modern languages.

Why does Go perform worse than Nodejs in this test? by Minimum-Ad7352 in golang

[–]jerf 7 points

Everything uses an event loop architecture now, including Go. That has nothing to do with it.

So I followed that link, thinking that when you said there aren't any C, C++, or Rust frameworks that you meant they showed up weirdly low in the list, but, no, you are literally correct. They aren't there at all. I don't know why that is, but it seems they were literally not tested. Or perhaps just not tested yet? Dunno.

So you may find it more helpful to look to the previous iteration for performance data.

Honestly, I don't pay much attention to TechEmpower benchmarks. The inevitable result of any large-scale benchmarking is that it will explore the limits of the benchmarking methodology rather than the performance of the tools. For example, you can be a "python" system but if you're allowed to use things like Cython you can write in what is basically C. Does the benchmark allow that? Does the benchmark forbid that? The truth is that for either answer, the result is not a realistic benchmark of the Python performance stack in the end.

I find it much more interesting to ask, "if I put a normal amount of effort into the code and write idiomatically for a language, what performance can I expect?", which is something that is actually not all that hard to figure out but is really hard to codify into a benchmark. By that standard, Go is faster than Node by a comfortable margin, though not as enormous as the one between, say, Go and Python, and Go is not the fastest thing in the world but it is also comfortably close to it for the vast majority of (but not all!) uses.

How we reduced the size of our Agent Go binaries by up to 77% by Hemithec0nyx in golang

[–]jerf 22 points

I don't know much about their package but there is repeated reference to "binaries", as in, plural. Another technique you can use is to build an all-in-one binary that uses subcommands to dispatch out to whatever the current binaries are doing. This means that any given bit of Go code shared between binaries still ends up in the final binary only once. There are many ways to retain backwards compatibility with any previous binaries shipped out.

OSes are pretty good nowadays about handling this sort of thing, it is unlikely to be much less performant than multiple binaries and it can actually be more performant (use less memory overall, actually start up faster) in some common scenarios.
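A minimal sketch of the subcommand-dispatch idea (all names here are hypothetical): dispatch first on the binary's own name, so symlinks named after the old standalone binaries keep working, and fall back to a subcommand argument for the combined form.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// dispatch picks a subcommand for an all-in-one binary. A symlink named
// "agent" behaves like the old standalone agent; otherwise the first
// argument selects the subcommand, as in "tool agent ...".
func dispatch(argv0 string, args []string) (string, error) {
	cmds := map[string]func([]string) string{
		"server": func(a []string) string { return "server " + strings.Join(a, " ") },
		"agent":  func(a []string) string { return "agent " + strings.Join(a, " ") },
	}
	name := filepath.Base(argv0)
	if _, ok := cmds[name]; !ok {
		if len(args) == 0 {
			return "", fmt.Errorf("usage: tool <server|agent> [args]")
		}
		name, args = args[0], args[1:]
	}
	run, ok := cmds[name]
	if !ok {
		return "", fmt.Errorf("unknown command %q", name)
	}
	return run(args), nil
}

func main() {
	out, err := dispatch(os.Args[0], os.Args[1:])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	fmt.Println(out)
}
```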

Why does Go perform worse than Nodejs in this test? by Minimum-Ad7352 in golang

[–]jerf 61 points

The Node web server is implemented in C. When you have very small amounts of Javascript code for your handler, you're running a web server that runs at C speed, and a tiny amount of Javascript. In that case you're not comparing Go and C, but Go's net/http and C's implementation of a web server. In that case it should not be surprising that Node can eke out a win.

On the one hand, this is real performance, if that is your use case. If you have some small bit of Javascript you need to run, for instance, you've got something that is just glue to another system, you will get this performance.

On the other hand, this is not Node's performance; this is C's performance. If you use a benchmark like this to conclude "Node is faster than Go" and then implement a large system with lots of Javascript that does lots of stuff, you will be disappointed to find out that the performance difference quite rapidly reverses as you write code. Go's net/http may be a bit slower than Node's C webserver, but as you write more Go you will continue to be writing in a fast (compared to JS) language and you don't automatically pay some huge "penalty" for writing lots of code. The code you write continues to run at Go's speed. Plus I'm sure Node's webserver can handle requests concurrently, so running lots of tiny JS bits maximizes the amount of time the Node server is in concurrent code, but once you get to JS you're back to single-threaded operation, whereas Go will continue to be concurrent even in your own code.

The upshot of all of this is, as is so often the case, the benchmark is real, but it isn't saying what you think it is.

(Another common case we get here is "Why is this [C#/Java/Python/Whatever] web thingie going faster than Go?", when both the languages are using some framework that does all kinds of stuff and the handler is non-trivial. In that case the answer is, you're not benchmarking "Go" versus "Your Language", you're benchmarking the entire stack against the entire stack, and there are all kinds of reasons why one might be faster than the other despite some particular component, including possibly the implementation language, being slower. People often think they're benchmarking "the language" when in fact they are benchmarking "the language" + a whackload of other things that can easily overwhelm "the language". This particular case with Node is kind of the opposite; you're benchmarking some other language entirely plus a very tiny thing that is easily overwhelmed by the other language's code.)

Password reset flow in Let’s Go Further by Minimum-Ad7352 in golang

[–]jerf 14 points

This one is a matter of some debate within security circles. It is technically a leak, yes. However it also is a lot nicer for the user. How that balances out depends on your service. OnlyFans, for instance, shouldn't leak this info out. But some hobby thing that uses emails and nobody in the world would care whether or not someone was there, you can make a case for the friendlier error message. You could hire 10 security professionals in that case and get an even split of opinion, which is what I mean by "can make a case for". There isn't total consensus by security professionals in the "less important" case.

Is manual memory management possible with syscalls? by doublefreepointer in golang

[–]jerf 3 points

Any memory manager making tons and tons of syscalls is broken anyhow because it shouldn't be allocating tons and tons of little tiny spaces from the operating system. The cost of a syscall to get any non-trivial amount of memory will be utterly dwarfed by the cost of the code that will be using that space.

"Syscalls are slow" is not really a meaningful statement. In this context, they're not slow in a way that matters, because you're not making tens of thousands of calls per second to allocate memory anyhow.
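A toy illustration of why: a bump allocator grabs one big slab up front (the only place a real manual allocator would pay for a syscall such as mmap) and then hands out individual allocations by slicing, with no syscall at all per allocation.

```go
package main

import "fmt"

// Arena is a toy bump allocator: one big slab up front, then cheap
// slicing for every individual allocation. This is a sketch of the
// principle, not a production allocator (no freeing, no alignment).
type Arena struct {
	buf []byte
	off int
}

func NewArena(size int) *Arena { return &Arena{buf: make([]byte, size)} }

// Alloc hands out the next n bytes of the slab, or nil when the slab is
// exhausted (a real allocator would grab another slab at that point).
func (a *Arena) Alloc(n int) []byte {
	if a.off+n > len(a.buf) {
		return nil
	}
	p := a.buf[a.off : a.off+n : a.off+n] // full slice expr: no overlap with later allocations
	a.off += n
	return p
}

func main() {
	a := NewArena(1 << 20) // one big, "syscall-sized" request
	x := a.Alloc(16)
	y := a.Alloc(32)
	fmt.Println(len(x), len(y), a.off) // 16 32 48
}
```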

Is it okay to type alias to an internal package? (and approaches on large packages and encapsulation) by VolatileDove in golang

[–]jerf 4 points

There isn't anything intrinsically "wrong" with revealing a type from an internal package, in contrast to how wrong it would be for a type alias to penetrate whether or not something was exported. It was your type to start with, and your decision to export it. It's an odd way to do it and I'd avoid it for reasons of taste, but it doesn't break any promises the language makes. There are other ways types from an internal package may end up in a user's hand, depending on how you use them.

Internal isn't really a security boundary. I think its primary utility is in "I need this to be exported for various reasons in my code but I don't want it in my godoc documentation". Internal isn't really for encapsulation, it's for helping control your documentation so you can tighten it down to just what you want to expose to people. I think there's a lot of people reflexively using it when they don't really need to; I consider it more an exceptional case that you only use for a specific reason rather than something that should be automatically reached for. In most cases, "what I want to export" and "what I should be documenting" overlap fairly substantially anyhow.

That said I also don't consider it a huge mistake necessarily to use internal when it isn't really called for. It's easy enough to change your mind later, and in an era of agentic AI, easier than ever to flip back and forth if you change your mind.

Tips on optimizing my website's backend by Echoes1996 in golang

[–]jerf 3 points

I don't see any a priori reason to assume Go is faster.

A couple of commenters have observed that you should profile the Go codebase, but I'd say you should take a profile of both. Even if one is faster than the other, you may find that there is still a component of the slower system that is faster than the faster system. If your goal is to speed up things in general, it wouldn't be that surprising that there's a cheap win you can have on the C# side.

My first guess based on what you say is html/template but only a profile could tell.

proxy-pkcs11 - TLS forward proxy for PKCS#11 hardware tokens by leolorenzato in golang

[–]jerf[M] [score hidden] stickied comment

Please post this into the pinned Small Projects thread for the week.

Structured concurrency & Go by sigmoia in golang

[–]jerf 7 points

You can find several other structured concurrency packages. Not everything that hits in that search is about structured concurrency in the sense meant in the post, but there's a few others, like parallel and nursery.

Although in the end, it really isn't that hard to do structured concurrency on your own. No library can fix the fact that the language can't guarantee it, but we enforce a lot of things by hand in our code bases that languages can't guarantee. There's always something else the language can't guarantee.
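For instance, doing it "by hand" can be as simple as a fan-out helper that refuses to return until every goroutine it spawned is done, so the goroutines' lifetimes are contained inside the call. A sketch, enforced by convention only:

```go
package main

import (
	"fmt"
	"sync"
)

// fanOut runs fn over each input concurrently and does not return until
// every goroutine it started has finished -- the structured-concurrency
// discipline, held up here by nothing but the WaitGroup and our promise
// not to leak goroutines out of the function.
func fanOut(inputs []int, fn func(int) int) []int {
	out := make([]int, len(inputs))
	var wg sync.WaitGroup
	for i, v := range inputs {
		wg.Add(1)
		go func(i, v int) { // pass i, v explicitly; safe on all Go versions
			defer wg.Done()
			out[i] = fn(v) // each goroutine writes a distinct index: no race
		}(i, v)
	}
	wg.Wait() // nothing escapes the scope of this call
	return out
}

func main() {
	fmt.Println(fanOut([]int{1, 2, 3}, func(n int) int { return n * n }))
}
```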

Go vs Rust for long-term systems/finance infrastructure, is focusing on both the smarter path? by wpsnappy in golang

[–]jerf 18 points

In a finance system most of the math is going to be addition and subtraction. It's not like you're doing matrix math and you need vast swathes of polymorphic code that is generic across different types of matrices. Your ML is going to be happening in not-Go and not-Rust anyhow, regardless of which you pick. I don't think Rust is going to offer a big advantage here, even in the final system.

And I'd say Go is going to be easier to build "complex business models" with... it's nice that Rust is completely 100% anal about things like memory correctness when you need it, but when you don't really need it, because the code you are writing isn't really all that inclined to be super complicated anyhow (you are most likely going to be writing lots of things that 1. load stuff from a database, 2. compute whatever within one goroutine, 3. update the database with the results, and in that case the database is doing all the heavy lifting for transactionality anyhow, and you should depend on your database for that), it persistently gets in your way and can become a real pain. And goodness help you in Rust if you decide to go async (or get forced to by some library you need), because unless you're doing high-frequency trading, you don't need the extra overhead of having to manage the concurrency yourself. Go's no-color functions are what you really want. Async is a big trap for you here. Go's way is much better for you.

Finally, consider the costs of having two languages. It means you need people in the future who understand both. I seriously, seriously doubt this calls for mixing languages. And that advice stands even if you do ultimately go for Rust; don't mix in Go, or anything else, unnecessarily then either.

Still, I say this not just as a Go partisan, but in general: I would honestly consider Rust weakly contraindicated for the use case you are talking about here. To put my full set of biases out front and center, I would rate C# comfortably above Rust here, for instance. Rust is going to force you to spend lots of time reassuring it about sorts of correctness that you don't actually care about. The sorts of correctness Go or C# help you toward are the stuff you do care about. In fact, of the static languages for you, I would put Rust near the bottom. However, I would use Rust over any dynamically-typed language, including something like TypeScript or some other static gloss over a fundamentally-dynamic foundation. You probably can't get out of using Python for your ML but I would minimize Python to just the ML and literally nothing else.

Ran a proper audit of what our AI tools have been generating in Go and the patterns surprised me by Smooth-Machine5486 in golang

[–]jerf 5 points

I extensively use types to represent invariants in the code. "Invariants" can sound very computer science-y and obscure, but it can be as simple as "If you are passed a Username you are guaranteed that it has already been validated as being a 'valid' username by whatever the local definition of 'valid' is." It means that the rest of the code has no need to do the validation; indeed it is wrong to have any internal validation because any mismatch between that internal validation and the real validation creates a discontinuity that will at least create the opportunity for bugs and possibly security issues. If you want to change the definition of "valid" change it in the one place where it is defined. If you need another local definition, you create a new type to represent it.
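A minimal sketch of the pattern (the validity rule here is invented for illustration): the field is unexported, so the construction function is the only door in, and everything downstream can trust a Username without re-checking it.

```go
package main

import (
	"fmt"
	"regexp"
)

// Username carries the invariant "already validated": the only way to
// obtain one is through ParseUsername, because the field is unexported.
type Username struct{ name string }

// validUsername is a hypothetical local definition of "valid",
// defined in exactly one place.
var validUsername = regexp.MustCompile(`^[a-z][a-z0-9_]{2,15}$`)

func ParseUsername(s string) (Username, error) {
	if !validUsername.MatchString(s) {
		return Username{}, fmt.Errorf("invalid username %q", s)
	}
	return Username{name: s}, nil
}

func (u Username) String() string { return u.name }

// greet takes a Username, not a string, so it never re-validates.
func greet(u Username) string { return "hello, " + u.String() }

func main() {
	u, err := ParseUsername("gopher_1")
	if err != nil {
		panic(err)
	}
	fmt.Println(greet(u))
	_, err = ParseUsername("Not Valid!")
	fmt.Println(err != nil) // true
}
```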

AI can sort of use these types. However, it has a distinct tendency to want to crack into the types and add methods that break the invariants, like, "Hey, I'm loading usernames from some new source I've never loaded them before, let me just add a method

func (u *Username) Set(username string) { u.username = username }

on in there for you."

No. That is exactly what the username is not doing. Use the existing construction function and handle errors appropriately.

Which in a situation like this, may be to bail out at the first invalid username, but can also be "collect all the errors with errors.Join, do what you can, and return the entire error at the end". The AI has an extremely strong bias to just bailing on the first error regardless, because that's pretty much all the training data does. I can deal with that, though. That is arguably the "default" handling if I don't specify anything else.
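The "collect everything" strategy can be sketched like this (parseUsername and its rule are stand-ins for the real construction function, not anyone's actual code):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// parseUsername stands in for the real construction function,
// with a made-up validity rule.
func parseUsername(s string) (string, error) {
	if s == "" || strings.ContainsAny(s, " \t") {
		return "", fmt.Errorf("invalid username %q", s)
	}
	return s, nil
}

// parseAll does what it can with every entry and returns all the
// failures at the end, rather than bailing on the first bad one.
func parseAll(raw []string) ([]string, error) {
	var (
		ok   []string
		errs []error
	)
	for _, s := range raw {
		u, err := parseUsername(s)
		if err != nil {
			errs = append(errs, err)
			continue
		}
		ok = append(ok, u)
	}
	return ok, errors.Join(errs...) // nil when errs is empty
}

func main() {
	users, err := parseAll([]string{"alice", "", "bob", "bad name"})
	fmt.Println(users, err != nil) // [alice bob] true
}
```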

It does also sometimes try to splat down some validation code inside the program even so, because unfortunately, while I consider this a relatively basic technique upon which we build other ways of building good, strong programs, it is not really all that well-represented in the training data.

And I do not believe I have yet seen an AI spontaneously realize "Oh, I have an opportunity for a new type like this", even when it is working in a code base saturated in this pattern. If anything, like I said, it has a strong tendency to try to destroy this code if I don't stop it.

What I would say is that right now, AIs are great at blasting out scripting code. Which they are... when I am writing a throwaway script where I don't really care what happens to the code once it is run the one time I will run it, wow is it ever an advantage. I can write scripts in 10 minutes now that I would have quailed at writing at all in the Before AI Time.

But AI is always writing scripting code. It always wants to load some things in as strings, bash them around, fling around booleans and ints with wild abandon, and generally write the sort of code that will break down as it scales up. And AIs, being finite, will still choke on the code as it scales up, as many, many people have noticed. It blasts lines of code out so quickly that it overwhelms its own improvements in handling large code bases in as little as a week or two if you try to just vibe code without guiding it. The AIs still need good code bases, but they don't generate them yet.

My current task is wrangling an AI to try to turn certain permissions that exist in a MySQL database in a complicated schema into something that a different type of system can implement. I've had to correct it several times to, for instance, use the functions the internal project has defined for getting these things, which defines some rather non-trivial logic. It wants to use SQL directly. It is reluctant to connect to the database in the correct way the code wants. All of these things are overcome with prompting. This is actually still a fine tool for me, but I definitely think if your average interaction with an AI tool right now is 1. Make a prompt 2. Yeah, that's probably good enough, ship it, then you are going to mess your code base up something fierce. This particular thing isn't in Go but this seems a general AI trend.

Ask me again in a year. Who knows. I have to admit I'm not sure this will actually improve though, because like I said, the training data just doesn't look like this. One of the AI advances I'm hoping to see is something that allows them to train on a lot less data, so we can create curated sets of training data of just high-quality code instead of pouring every syntactically-correct (if that!) bit of Go code we can find in the entire world into the AI, because code quality seems generally distributed on a power law and the average bit of code's quality is actually pretty bad. I'd like an AI I can train on the best of the best rather than the average.

GoFast v2 - CLI builder for Go + ConnectRPC ( + optional SvelteKit / Tanstack Start) [self-promo] by Bl4ckBe4rIt in golang

[–]jerf[M] 3 points

In the future I'd prefer a direct link to the GitHub page. It is difficult to extract from the landing page the link targets an understanding of what this really is. Which you may want to take as feedback on that page... it's really too minimalistic. That page is basically a "call to action" ("click here to start building") but the page gives me almost no reason for any particular action.