@ivan @emacsen @cwebber Another reply from my side: https://www.reddit.com/r/freesoftware/comments/fop3bs/breaking_the_agpl/flgrd7s/
The AGPL virality clause does not extend backwards, which means that AGPL code can call functions from non-AGPL libraries.
This confuses me greatly.
It seems to mention only GPLv3 (in section 13). “Non-AGPL” covers more than just that, though. There’s no mention of LGPL; but presumably AGPL assumes you upgrade LGPL’d code to GPL’d, and then you can consider the combination to be just AGPLv3+GPLv3? And anything else is non-combinable?
I really should read AGPL in even greater detail to understand where the boundaries lie. I’ll assume that making network calls (RPCs? HTTP requests?) towards other code doesn’t constitute linking, but section 1 definitely talks about dynamic linking. But it’s unclear:
However, it does not include the work’s System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work.
So it’s saying that system libraries/tools/widely available free programs are not part of the work, then goes on to say that Corresponding Source does include the work’s interface definition files, plus the source code for any shared libraries/dynamically linked subprograms that the work is specifically designed to require. Which reads as if a program designed to require the Windows API makes the Windows DLLs part of the work/Corresponding Source, despite what the previous sentence says?!
I know what the license is trying to do; I looked at it in passing in the past, but now I am very confused by it. As someone who likes GPLv2 (and maybe GPLv3), I am kinda scared by AGPL. Presumably I need to read it in great detail?
If DLLs/.so files are a grey area, then config.php is clearly merged into the built binary (even if only a temporary bytecode binary in RAM, built on the fly by the ‘interpreter’). Despite its name, it can be a complicated piece of machinery that fetches the actual config from other sources.
And it's more obvious why if you come from a lisp background, especially from configuring emacs... which is what I explained in my talk. But almost all of the audience was confused.
@emacsen pointed out that Goblins makes this more obvious with the way it handles objects, but that, like my emacs example, isn't clear to a non-lisp audience. I think that was the purpose of this post (and its target audience), whereas you already clearly know lisp. :)
It seems that the only way of solving that issue is sharing the code that does the filtering while keeping the actual data in a separate dumb file, or dropping the AGPL for the part of your code that communicates over the network with the AGPL code. The latter basically means releasing some of your code under e.g. GPL and letting people link against that, thereby losing the AGPL virality clause.
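A minimal Python sketch of the first option, the code/data split (the function names and JSON format are my own illustration, not from the thread): all the filtering logic lives in shareable code, while the sensitive data sits in a separate dumb file with no logic in it.

```python
import json

def load_blocklist(path):
    """The data lives in a plain JSON file: a dumb list of strings
    with no logic in it, so it arguably isn't part of the program."""
    with open(path) as f:
        return set(json.load(f))

def filter_messages(messages, blocklist):
    """All the filtering logic lives here, in shareable code; only
    this part would need to be conveyed as Corresponding Source."""
    return [m for m in messages if m["sender"] not in blocklist]
```

For example, with a file containing `["mallory"]`, `filter_messages([{"sender": "alice"}, {"sender": "mallory"}], blocklist)` keeps only alice's message, and the blocklist file itself never has to ship with the code.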
@phoe No, I don’t mean just proprietary stuff.
Even better: as config files in some languages are actually code, they can be interpreted as part of the work, and network interaction requires (?) “conveying” the “Corresponding Source” to the user.
So you might be unable to combine with GPLv3-incompatible code too. Maybe GPLv2 without the ‘and later’ clause? Maybe CDDL? shrug
Besides, this likely means what you said a few posts ago: a user doing network interaction can likely request the config file.
I'm but a measly c++ guy, all the lisp examples make no sense to me, so obviously I fail to see any kind of fundamental problem here with copyleft, p2p, and the "code = data" thingie.
Yes, if the photograph of a birthmark on your butt that you embedded in the software is a central and irreplaceable part of an algorithm that objectively changes/enhances the software as observed by the rest of the p2p network, then I'm afraid it is only ethical to share that piece of personal information. Otherwise, if it is not so essential, you can replace it with any other image in the version that you share. Nobody sensible will have a problem with that, and nobody malicious will be able to prove that you did that, short of breaking the law or a Kafkaesque court siding with their demands. That said, I'm not a lawyer and I would not claim that the wording in the license is perfect in this regard.
Definitely a practical consideration if such a replacement is difficult in your language or paradigm of choice, but not a fundamental issue I think.
Taking this to the extreme you arrive at neural networks, as in "I've trained a neural network that filters emails for me, but it also spews out my home address if you ask it nicely". I think it is obvious that neural networks are not source code as far as software freedom is concerned; only the training tools are. The difficulty of training is a separate issue, one that is solved/mitigated by decentralization.
I updated the post last night at around 3am EST (if that gives you a sense of how well I'm sleeping these days)
I mentioned AI explicitly, and the challenge there is that if reproducible builds are hard to do generally, with AI they're impractical or sometimes impossible.
Sure, it is impractical if you try to do it on every node of the p2p network, but if you have any kind of trust mechanism in place, the network can collaboratively build it for everyone and distribute it in that built/trained state (along with all the tools, which I guess they would have anyway, as they took part in the process). Software freedom does not imply isolation, and this is true in a much more general sense. As an individual I might not care whether you provide me with source code to build something myself, as long as you provide it to my trusted dev team (be that a local independent organization, a government organization, an international organization, or my homies on the darknet) that can do it for me personally if I can afford it, or for the community they maintain/support which I'm a part of. A collective will always have this kind of advantage over an individual.
Not being physically able to build it on your own is not a reason to consider what is essentially machine code to be source code.
Another similar case would be renting a supercomputer that's running free software. The same exact software with the same exact settings might take several lifetimes to yield any results on my own machine; it is for all practical purposes impossible for me to use, but that doesn't mean it violates freedom 0.
I would be surprised if AGPL does not accommodate things like that.
@namark @emacsen @cwebber @phoe @ivan The issue is that if you train the same neural network topology twice on the same training data, you will probably not get the same trained network — not only because you need to record PRNG seeds, but because nondeterministic execution order in massively parallel hardware like GPUs can affect the final results. It's not just that you need a big machine to reproduce the results, but that the results are produced by a nondeterministic process.
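The GPU part of that is easy to illustrate even on a CPU, with a toy example of my own: floating-point addition is not associative, so merely changing the order in which parallel workers combine their partial sums changes the final bits, identical inputs and seeds notwithstanding.

```python
# Floating-point addition is not associative, so the reduction order
# of a parallel sum changes the result even with identical inputs
# and identical PRNG seeds.
x = (0.1 + 0.2) + 0.3   # one reduction order
y = 0.1 + (0.2 + 0.3)   # another reduction order

print(x == y)  # False: the two orders disagree in the last bits
```

Scale that up to billions of accumulations scheduled nondeterministically across GPU threads, and two training runs diverge bit by bit until the final weights differ.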
As far as I know you also cannot guarantee that it does what you want it to do (or anything at all) outside of statistics, so if it's slightly different but still covers the same percentage of cases, does it really matter for software freedom? I guess it might matter for forensics, but that's beside the point.
Can you just randomly stumble upon a substantially better neural network? There has got to be a method even to that madness, it can't just be purely random brute force. I would not accept our AI overlord if it's nicknamed "jackpot"!
I feel that we've veered way off track from my original point, which was just that we can't simply argue that the solution to shipping software around is always going to be to ship source code. It's a nice idea in theory, but it bangs up against practical limitations, so we'll eventually need to ship binaries around, and once we do, we need to think about the implications of that in an AGPL-ed world.
I see, I guess I'm missing the context where "just ship the source" was suggested as a solution to shipping software. We have been shipping binaries around already, along with source code. We could do it better, of course.
Otherwise, my (possibly off topic as well?) point was, that in my opinion, in practice, under AGPL, as a node in a p2p network, you don't have to provide binaries or config files with sensitive information in them, unless this sensitive information is miraculously central to the source code of the program and its effects can be observed by others on the network.
I think that concern is also addressed by my suggestion of a transparent build/training process by a trusted entity or community, and such backdoors or otherwise unwanted behavior are exactly why you can't consider a neural network (the extreme case of "my data is my code") source code.
What I meant by forensics, where small (statistically insignificant) differences might matter, is something like "there has been an incident that destroyed the entire system, and we need to figure out whether it was the AI's fault or not, by exactly recreating the situation". While perhaps a valid concern, it's not relevant in the context of software freedom.