

Oh weird, I would not have expected to be in the minority there
It’s about halfway there, I think; they still show up separately in clients and have separate comment threads.
Lemmy needs some sort of built-in way to merge them; that’d be the best solution, I think. Then you could just pick a list of relevant communities and it’d be pretty seamless.
I don’t know of any off the top of my head, but with a cheap digital caliper and Tinkercad, I assume you could model one fairly trivially. You could friction-fit two halves around the cable and secure them with some simple adhesive, or with some kind of simple bolt/nut fastener if you wanted to get clever.
Never not learn a new skill!
The canvas API needs access to hardware details that aren’t usually exposed through other browser APIs. It’s normally hard for a site to get capability information about a user’s GPU, for example, but the canvas API needs that information to decide how to draw objects across differently capable hardware, and those extra data points make it that much easier to uniquely identify a user. The more data points you can collect, the more unique each visitor is.
Here’s a good utility from the EFF to demonstrate the concept, if you or anyone else is curious: Cover Your Tracks (https://coveryourtracks.eff.org).
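If you want a toy terminal version of the counting principle (this is not the canvas API, just an illustration of how extra attributes compound):

```bash
# each additional attribute you can read narrows the anonymity set;
# the combined hash quickly becomes effectively unique
printf '%s' "$(uname -sr)" | sha256sum                               # a few data points
printf '%s' "$(uname -sr)|$(locale)|$TERM|$(date +%Z)" | sha256sum   # many more
```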
Just think, an extra long shirt can cover that hole, and we could embed a flexible display, wifi module, and a camera in the extra space. This could scan the faces of those around you, and display personalized ads! This is an excellent solution to the hole in your pants, and frankly, the only secure one.
You’re correct that nesting namespaces is unlikely to introduce measurable performance degradation. On performance, I was mostly thinking of the nested virtual network stack adding latency; both docker and lxc run their own virtual interfaces.
There’s also the issue of running nested apparmor, selinux, and/or seccomp checks on processes in the child containers. I know a single instance of those checks is often enough to kill performance on highly latency-sensitive applications (SAP NetWeaver is the example that comes to mind), so I would imagine two stacked instances would exacerbate those concerns.
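You can see the layering directly if you’re curious; a rough sketch assuming LXD with nesting enabled (the container name is made up):

```bash
# each layer stacks its own confinement on top of the previous one
lxc launch ubuntu:24.04 dockerhost -c security.nesting=true
lxc exec dockerhost -- cat /proc/1/attr/current     # apparmor profile at layer 1
lxc exec dockerhost -- grep Seccomp /proc/1/status  # seccomp mode at layer 1
# with docker installed inside, a child container gets docker-default on top:
lxc exec dockerhost -- docker run --rm alpine cat /proc/1/attr/current
```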
There are security, performance, and capability concerns with that approach, apparmor on the first-layer lxc probably being the most annoying.
If you want to isolate your docker sandbox from your main host, you should use a VM, not a container.
I’ve always wondered why board partners didn’t just raise prices to scalper levels and take a $2,200 profit per card sold.
And tbh, it’s Nvidia’s fault that the partners don’t have enough dies; I’d much rather a partner take the margin than an unnecessary middleman.
When you’re the size of LMG you don’t hire investigative law firms for PR; you do it for liability. The goal is to limit corporate liability by removing individuals likely to get you sued, and most importantly to distance leadership from it with plausible deniability. The firm also has its own reputation to consider, and wouldn’t let a client get away with materially misrepresenting their results.
I don’t think it’s unreasonable to suggest that a positive finding from an investigative firm is evidence supporting their position that they, materially, did nothing wrong. The fact that no one was fired as a result of that investigation is a good sign externally, since it would have opened them up to more liability if they had known about wrongdoing and done nothing.
The source for this compat library is in their published sources, last I checked, but because it’s not part of their standard repos it technically doesn’t have to be. I suspect that’s eventually the end-goal.
A lot of industries are semi-forced into it. Let me give you an example I know of first-hand. Modern SAP stacks support three operating systems: Windows Server, RHEL, and SuSE.
You’re probably thinking to yourself: “but RHEL is just regular Linux, surely you can install it on anything if you have the appropriate dependencies; I’ll bet it even just works on RHEL-compatibles like Rocky, Alma, or CentOS Stream!”
And you would be ~sort of~ right, but wrong in the most dystopian way possible. The installer itself does hardcoded checks for “compatible” operating systems, using /etc/os-release and a few other common system files. Spoofing those to rhel 8.5 or whatever is easy enough, but the one that really gets you is a dependency on compat-glibc-X.Y-ZZZZ.x86_64. This “glibc compatibility library” is conveniently only accessible via a super special redhat repository granted by a super special SAP license (which runs ~$2,000/year/cpu). Looking at the redhat sources, it’s actually just a bog-standard, semi-modern glibc build with nothing special. The only other thing you get with this license, as far as I can tell, is a metapackage that installs dependencies and makes a few kernel tweaks recommended by SAP.
So you can install it on Alma/Rocky by impersonating rhel in /etc/os-release and then compiling a version of glibc and linking it in a special hardcoded location, but SAP/Redhat put as many roadblocks as possible in your way. It took me weeks of reverse-engineering the installer to get our farm off the ~$100k/yr that redhat wanted to charge us for, essentially:
./configure --enable-bootstrap --enable-languages=c,c++,lto --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-shared --enable-threads=posix --enable-checking=release --enable-multilib --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-gcc-major-version-only --enable-plugin --with-linker-hash-style=gnu --enable-initfini-array --disable-libquadmath --disable-libsanitizer --disable-libvtv --disable-libgomp --disable-libitm --disable-libssp --disable-libatomic --disable-libcilkrts --without-isl --disable-libmpx --enable-gnu-indirect-function --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
Thread model: posix
gcc version 9.1.1 20190605 (Red Hat 9.1.1-2) (GCC)
definitely worth $100,000/yr… much capitalism, many line go up
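If anyone wants to go down the same road, the os-release half of the trick is roughly this (version strings are illustrative; check which fields and values your installer actually greps for):

```bash
# illustrative only: make the installer's os-release check see RHEL
cp /etc/os-release /etc/os-release.bak
sed -i -e 's/^ID=.*/ID="rhel"/' \
       -e 's/^VERSION_ID=.*/VERSION_ID="8.5"/' \
       -e 's/^PRETTY_NAME=.*/PRETTY_NAME="Red Hat Enterprise Linux 8.5 (Ootpa)"/' \
       /etc/os-release
# the glibc half is the configure above, installed to whatever hardcoded
# path the installer expects (strings/strace on the installer will find it)
```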
The difference is that I (the contributor of the content) have the same access to the data as anyone else, and could use it for my own purposes if I wanted to.
On a platform like reddit, access to the raw data is controlled, and it can’t be format-shifted or used in any way I want.
There’s nothing preventing you from forking a Lemmy client or server to prototype this. Depending on how you implement the ActivityPub backend, you might be able to make it transparent to the user by presenting an algorithm as an array of crossposts via a /c/ on a server.
Anything more might require forking a client, which could be easier to implement but harder to convince a large userbase to migrate to.
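As a rough sketch of the “array of crossposts” idea (community names are invented; this does the merge client-side with Lemmy’s public HTTP API and jq, where a real implementation would re-publish the result as a /c/):

```bash
# fetch two communities' feeds, merge them, and re-rank by score
for c in technology@lemmy.world technology@beehaw.org; do
  curl -s "https://lemmy.world/api/v3/post/list?community_name=$c&sort=New&limit=20"
done | jq -s '[ .[].posts[] ] | sort_by(-.counts.score) | .[].post.name'
```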
I use ansible on one of my side projects; I use puppet at work. It’s the same reason I use raw docker and not rancher+rke2… it’s not about learning the abstractions; it’s about learning the fundamentals. If I wanted a simple abstraction I’d have deployed TrueNAS and LinuxServer containers instead of Taco Bell programming everything myself.
Sure. I have an r630 configured as an NFS server and a docker host called vacuum. There is a script called install_vacuum.sh that, with a single command, can build the server to my spec from a base install of Ubuntu 24.04. It has functions to install base packages from repositories, add new repositories, set up users, and create config files for NFS, smb, fstab, crontab, etc. Once an NFS server exists on my network, any other server can become my docker host; the docker host is set up from a script called install_containers.sh. As before, it does all the things needed to get me a basic docker host, firewalled and configured for persistence via my NFS server. It also has functions to create and start docker containers for all of my workflows (Plex, webserver, CA, etc.), and if those containers don’t exist, it will build a docker image for each workflow based on a standardized format: (you guessed it) a bash build script for the container.

There is automation via cron on whatever host runs docker to rebuild and update the containers once a week; bare-metal servers update themselves nightly, rebooting when necessary via unattended-upgrades.
Basically, you break everything down into the smallest function possible, define everything via variables in shared configuration files that every script sources before running, and have higher- and higher-level functions call lower-level ones until a single function cascades into a functioning system. Does that make sense?
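A stripped-down sketch of the shape (file, function, and variable names here are illustrative, not my actual scripts):

```bash
#!/usr/bin/env bash
set -euo pipefail
source ./common.env   # shared variables that every script sources before running

install_base_packages() { apt-get update && apt-get install -y "${BASE_PACKAGES[@]}"; }
setup_users()           { id "$SERVICE_USER" &>/dev/null || useradd -m "$SERVICE_USER"; }
configure_nfs()         { printf '%s\n' "${NFS_EXPORTS[@]}" > /etc/exports && exportfs -ra; }

# higher-level functions call lower-level ones until one function
# cascades into a functioning system
build_vacuum() {
  install_base_packages
  setup_users
  configure_nfs
}
build_vacuum
```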
Have you started collecting your notes into scripts?
I think you’ve convinced me that it’s a slightly more complicated problem than I initially gave it credit for; thank you for that!
I think you could solve the disparate-community-theme problem by also requiring a title match for mergers. You could probably also solve it with a two-way merger whitelist on links: e.g., communities A and B both maintain lists of “similar” communities, and if A’s list contains B and vice versa, they merge.
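The two-way check itself would be cheap; something like this (community names invented):

```bash
# merge two communities only when each one's "similar" list names the other
declare -A similar=(
  ["technology@lemmy.world"]="technology@beehaw.org tech@lemmy.ml"
  ["technology@beehaw.org"]="technology@lemmy.world"
)
should_merge() {
  [[ " ${similar[$1]:-} " == *" $2 "* && " ${similar[$2]:-} " == *" $1 "* ]]
}
should_merge "technology@lemmy.world" "technology@beehaw.org" && echo "merge"
```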
For comment moderation, though, I’ve got nothing. That’s a tough one.