• 0 Posts
  • 19 Comments
Joined 1 month ago
Cake day: January 26th, 2025


  • Yeah, the cache hierarchy is behaving kinda wonky lately. Many AI workloads (and that’s what’s driving development lately) are constrained by memory bandwidth, and cache only helps with part of that: it helps with repeated access, not so much with streaming access to datasets much larger than the cache (e.g. many current AI models).

    Intel already tried selling CPUs with both on-package HBM and socketed DDR RAM (the Xeon Max line). No one wanted it, as the performance gains from the expensive HBM evaporated completely as soon as you touched out-of-package memory (assuming workloads bound by memory bandwidth, which currently dominate the compute market).

    To get good performance out of that, you may need to explicitly code the memory transfers to enable (preferably asynchronous) prefetch from the slower memory into the faster, à la classic GPU programming; see the sketch below. YMMV.
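    Here’s a minimal sketch of that double-buffering pattern, in Go for brevity. The sizes, the "fast"/"slow" buffers, and the workload are all made up for illustration, and actually placing buffers in HBM vs. DDR needs platform-specific APIs (NUMA policies, memkind and friends) that plain Go can’t reach. The point is just the overlap: a prefetcher stages the next chunk while you compute on the current one.

    ```go
    package main

    import "fmt"

    const chunkSize = 1 << 20 // elements per chunk; tune to the fast tier's capacity

    // process stands in for the bandwidth-bound kernel.
    func process(buf []float64) float64 {
    	var s float64
    	for _, v := range buf {
    		s += v
    	}
    	return s
    }

    func main() {
    	// "slow" stands in for the big out-of-package dataset.
    	slow := make([]float64, 8*chunkSize)
    	for i := range slow {
    		slow[i] = 1.0
    	}

    	// Two "fast" buffers: compute on one while the other is being filled.
    	var fast [2][]float64
    	fast[0] = make([]float64, chunkSize)
    	fast[1] = make([]float64, chunkSize)

    	ready := make(chan int)

    	// Prefetcher: asynchronously stages the next chunk into a fast buffer.
    	go func() {
    		for c := 0; c*chunkSize < len(slow); c++ {
    			b := c % 2
    			copy(fast[b], slow[c*chunkSize:(c+1)*chunkSize])
    			ready <- b // blocking send keeps the two buffers from racing
    		}
    		close(ready)
    	}()

    	var total float64
    	for b := range ready {
    		total += process(fast[b]) // overlaps with the copy of the next chunk
    	}
    	fmt.Println(total)
    }
    ```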





  • Apparently AMD couldn’t make the signal integrity work out with socketed RAM. (source: LTT video with Framework CEO)

    IMHO: Up until now, using soldered RAM was lazy and cheap bullshit. But I do think we are at the limit of what’s reasonable to do over socketed RAM. In high performance datacenter applications, socketed RAM is on its way out (see: MI300A, Grace-{Hopper,Blackwell}, Xeon Max), with on-package memory gaining ground. I think we’ll see the same trend in consumer stuff as well. Requirements on memory bandwidth and latency are going up with recent trends like powerful integrated graphics and AI slop, and socketed RAM simply won’t keep up.

    It’s sad, but in a few generations I think only the lower end consumer CPUs will be usable with socketed RAM. I’m betting the high performance consumer CPUs will require not just soldered, but on-package RAM.

    Finally, some Grace Hopper to make everyone happy: https://youtube.com/watch?v=gYqF6-h9Cvg




    Well, I’d just go for a reverse proxy I guess (minimal sketch at the end of this comment). If you are lazy, just expose it as an IP without any DNS. For working DNS, you can just add a public A-record pointing at the local IP of the Pi. For certs, you can’t rely on the default HTTP-01 challenge that Let’s Encrypt uses, since it can’t reach a VPN-only host; you’ll need to do it via DNS-01 or a wildcard cert.

    But the thing is, as your traffic is on a VPN, you can fuck up DNS and TLS and auth all you want without getting pwned.
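    If you’re curious how little a reverse proxy actually is, here’s a minimal sketch in Go using only the standard library. The backend port and the bind address are hypothetical (the 100.64.0.0/10 address is a tailscale-style VPN IP); the idea is to bind to the Pi’s VPN interface so nothing listens on the public internet. In practice you’d probably just use Caddy or nginx, which also handle TLS for you.

    ```go
    package main

    import (
    	"log"
    	"net/http"
    	"net/http/httputil"
    	"net/url"
    )

    func main() {
    	// Hypothetical backend: whatever service the Pi runs locally.
    	backend, err := url.Parse("http://127.0.0.1:8080")
    	if err != nil {
    		log.Fatal(err)
    	}
    	proxy := httputil.NewSingleHostReverseProxy(backend)

    	// Bind to the VPN interface only (example address), so the proxy
    	// is reachable over the VPN but not from the public internet.
    	log.Fatal(http.ListenAndServe("100.64.0.1:80", proxy))
    }
    ```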



  • I’d recommend setting up a VPN, like tailscale. The internet is an evil place where everyone hates you and a single tiny mistake will mess you up. Remove risk and enjoy the hobby more.

    Some people will argue that serving stuff on open ports to the public internet is fine. They are not wrong, but don’t do it until you know, understand, and accept the risks. (normal distribution meme)

    Remember, risk is ’probability’ times ’shitshow’, and other people can, in general, only help you determine the probability.



  • ”I don’t believe those MBA types should be in the discussion at this level at all.”

    That’s the thing. They are in the discussion. It doesn’t matter what we think about it. If touching Rust risks yielding lower profits this quarter, it’s an automatic ”fuck off you filthy hobbyists”. Even having the discussion costs money.

    Rust in the kernel isn’t about technology, it’s about economics and risk management. I’d like to see the discussion move on from ”C bad unsafe rust gud typesaf” to a level where the suggested benefits of Rust are made clear to the people holding the bags of money, preferably presenting some actual monetary benefits. (Oh, and to make things worse, there are thousands of different stakeholders, with different interests, many of which are in conflict. Good luck!)

    So yeah, I get that you don’t care about it. But you probably should.


  • I’m still kind of on the fence about Rust in the kernel. Linux isn’t some random hobby project; there are serious people working for serious companies on the project. Rust has a clear value proposition w.r.t. its qualities as a language, but I don’t think the value is as clear on a system level.

    Say I’m working for a large company as a dev, maintaining a subsystem (let’s say a driver). Letting other people (filthy casual hobbyists) mess around with their filthy type safety will eventually spill over into my subsystem and cause extra work. I don’t want the extra work, I just want to have my driver working and then go home. And even if I’m okay with the extra work, my boss won’t be. Even the risk of extra costs down the line will be enough for some to shut it down completely.

    There are boring people working for huge corporations with huge stakes in the Linux kernel. I don’t think they see that much value in Rust at the moment, and I think the Rust crowd might need to hire some MBAs if they want to expand their presence in the kernel.





  • Sure, I’ll do another mini-rant.

    I have no idea what real world threat model and threat actor the Wayland people are going for. A threat actor with code execution on a Linux desktop immediately has access to the filesystem and can do whatever anyway, in practice (see also: Steam deleting home directories). Privilege Escalation is a thing and namespaces in Linux are kinda meh. Run your untrusted code in an ephemeral VM.

    My point is just that once you have a threat actor running code on your system, it’s game over regardless of whatever your desktop tries to do. (I’ll run with the Maginot Line comparison here, but Wayland is more like a locked door without walls.)

    The security issues with X were the X-Forwarding-stuff being kinda bad, not the ”full access to everything”-stuff. I want my applications to access my things, otherwise I wouldn’t run the application.

    If your threat model seriously needs sandboxing, you’ll wanna go the Qubes-route. Anyways, Arcan seems to have a more reasonable threat model than Wayland if you wanna go that route.

    Thanks for reading my yearly mini rant on why Wayland’s security doesn’t matter and only gets in the way of the user and the application developer.


  • So this is my big issue with Wayland: nothing is a ”Wayland problem”. Everything lands on the compositors. Features that existed in X for the past few decades and are deeply integrated into the ecosystem were relegated to second-class citizens or just ignored. (Can we share our screens with Zoom yet?)

    I won’t argue that X is flawless or should live forever. X should die. However, X actually solved problems instead of just providing a bunch of (IMHO) half-baked ”protocols” so that someone else can solve them. From the perspective of a user or application developer, that’s just hot potatoes being passed around, and there have been plenty of hot potatoes over the past decade.

    Thank you for reading my yearly Wayland rant. I’ll now disappear into my XMonad-fueled bliss, fully software rendered.