• 0 Posts
  • 2 Comments
Joined 1 year ago
Cake day: October 11th, 2023

  • Yeah, containerization does make it much easier to just throw away the base system and start fresh. This way, you don’t have to worry about straying from the recommended upgrade path and accidentally breaking something.
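    As a rough sketch of that disposable-base pattern (the service, image, and volume names here are placeholders), a Compose file that keeps persistent state in a named volume so the container itself stays throwaway:

```yaml
# Hypothetical example: the app's state lives in a named volume, so the
# container can be destroyed and recreated without losing anything.
services:
  app:
    image: example/app:1.2.3   # pin a specific tag, not "latest"
    volumes:
      - app-data:/var/lib/app  # persistent state survives recreation

volumes:
  app-data:
```

    Upgrading then becomes bumping the image tag and recreating the container, rather than upgrading packages in place.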

    More code adds complexity, complexity leads to more bugs, more bugs means more vulnerabilities. Virtualization takes a lot of code. With all this extra code, it is possible that you are actually expanding the attack surface instead.

    It is likely inconsequential for most people just running a couple of personal services at home, but organizations are frequently targeted by sophisticated attacks, where the consequences of a breach can be severe.

    Yes, many of these vulnerabilities are difficult to exploit, either requiring local access or the existence of another vulnerability to achieve local access.

    However, there also exists a massive market segment whose entire business model relies on selling local access to VM compute resources: cloud server providers. An attacker could simply rent a VM on a vulnerable platform to gain the needed local access, launch an attack on the host, and thereby compromise the other guests on the same machine.

    There have been an incredible number of flaws found and fixed (for now) in the isolation provided by virtual machines. VMware had a spate of critical vulnerabilities in 2024.


  • Yes, it matters.

    Also, the actual isolation of container environments varies greatly from container to container. Containers are far less isolated than virtual machines, and virtual machines are less isolated than separate hosts.
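    As one sketch of how that per-container variability plays out, these are standard Docker Compose hardening options (the service and image names are placeholders):

```yaml
# Hypothetical example: opt-in hardening for a single service.
services:
  app:
    image: example/app:1.2.3
    read_only: true              # immutable root filesystem
    cap_drop: [ALL]              # drop all Linux capabilities
    security_opt:
      - no-new-privileges:true   # block privilege escalation via setuid
    user: "1000:1000"            # run as an unprivileged user
```

    A container run with the defaults and one run like this present very different attack surfaces, even on the same host.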

    Neither containers nor VMs will protect against attacks on the host itself; see regreSSHion. You may be able to limit access to your host by using containers or VMs, but container escapes and VM escapes are not impossible.

    Maintaining each of these layers takes a lot of time and effort. With “stable” distros like Debian, it is often the responsibility of the distribution to provide fixes for the packages they ship.

    Given Debian as the example, you are relying on the Debian package maintainer and the Debian security team to address vulnerabilities by manually backporting security patches from the current software version to whatever ancient (stable) version of the package is in use, which can take considerable time and effort.
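    That backporting model is visible in Debian's own version strings; here is a hypothetical helper (simplified relative to full Debian policy) that splits one apart:

```python
import re

# Hypothetical helper (simplified parsing): split a Debian package version
# into epoch, upstream version, and Debian revision. A "+debNNuM" suffix in
# the revision marks a stable-release update (a backported fix), not a new
# upstream version.
def parse_debian_version(version: str) -> dict:
    # Debian policy: upstream version and revision split on the LAST hyphen.
    m = re.fullmatch(r"(?:(\d+):)?(.+)-([^-]+)", version)
    if m is None:
        raise ValueError(f"no Debian revision in: {version}")
    epoch, upstream, revision = m.groups()
    return {
        "epoch": int(epoch) if epoch else 0,
        "upstream": upstream,
        "revision": revision,
        "stable_update": re.search(r"\+deb\d+u\d+", revision) is not None,
    }

# e.g. a bookworm-era openssh-server version string:
print(parse_debian_version("1:9.2p1-2+deb12u3"))
```

    The upstream part stays frozen at the stable release's version; only the Debian revision moves when a fix is backported.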

    While Debian has a large community to share that load, it may be unwise to rely on a “stable” distro that has few resources for maintaining its packages.

    OTOH, bleeding edge distros like Arch get many of their fixes directly from upstream as new version releases, placing a lower burden on package maintainers. However, rolling releases can be more vulnerable to supply chain attacks like the XZ backdoor due to their frequent updates.