Hi, I am the developer of PdfDing. One thing I am not sure about is the frequency of my releases. What do you folks prefer in self-hosted projects? More releases in order to get new features as fast as possible or fewer releases with bigger feature additions?
This isn’t something that should really be set by users of an app. It should be set by you, as you will be the one to handle user feedback and bug reports.
That being said, bigger releases are a challenge from a debugging and bug-report standpoint, because you are introducing many more changes in each release compared to the smaller number of changes in more frequent releases. This is why many devops teams in corporate land try to keep releases smaller and more frequent (see also: Agile Development).
Very good point about Agile.
As an end-user (that is, the IT staff that will be deploying/managing things), I prefer less-frequent releases. I’d love to see 1 or 2 releases a year for all software (pipe dream, I know). Once you have a handful of packages, you end up with constant change to manage.
I suspect what we end up with is early adopters embracing the frequent releases, and providing feedback/error reporting, while people like me benefit from them while choosing to upgrade less frequently.
There are about 3 apps that I’m a beta tester for, so even I’m part of that early-adopter group.
As an end-user (that is, the IT staff that will be deploying/managing things), I prefer less-frequent releases. I’d love to see 1 or 2 releases a year for all software
The hard floor for release frequency must always be “as security issues are fixed”, and those fixes will not be rare in our current environment of ever-shifting dependencies.
If your environment is struggling to keep up with patching, you need to analyze that process and find out why it’s so arduous.
As an example, I took a shop from a completely manual patch slog 10 years ago to a 97% never-touch automated process. Getting the approvals and routines in place was hard, but the numbers backed me up. When I left 2 years ago, the humans had little to do beyond validation.
The sad news is, the great loss of mentors after Y2K will be seen again after RTO, and we’re not going to fix the fundamental problems that enable longer release cycles in a safe way; and so shorter update cadence will be our reality if we want to stay safe …
… and stay bleeding-edge. Shifting from feature-driven releases to only bugfix-driven releases means no churn for features, but that’s a different kind of rebasing. It’s the third leg of the shine-safe-slack pyramid; choose 2.
I prefer rarer, bigger, more tested updates, since I don’t pull the updated docker containers that regularly.
From experience shipping releases, “bigger updates” and “more tested” are more or less antithetical. The testing surface area tends to grow exponentially with the number of features you ship in a given release, to the point that I tend to see small, regular releases as a better sign of stability.
“Bigger” is a bit misleading here. Really big updates obviously require a major version bump to signal to users that potential stability or breakage issues are to be expected.
But “bigger” in the other sense, i.e. slower, means there was more time for adventurous people to run pre-release versions, and thus better testing.
Of course this assumes that there are actual beta testers and that it is easy to do so by creating such beta releases.
Really big updates obviously require a major version bump to signal to users that potential stability or breakage issues are to be expected.
If your software is following semver, not necessarily. It only requires a major version bump if a change is breaking backwards compatibility. You can have very big minor releases and tiny major releases.
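To make that concrete, here is a minimal sketch of how a bump could be chosen under plain semver rules; the helper function and the example versions are purely illustrative, not taken from any real release script:

```python
# Minimal illustrative sketch of semver bump selection (hypothetical helper,
# not part of any actual project's tooling).
def next_version(current: str, breaking: bool, new_features: bool) -> str:
    """Major only for breaking changes, minor for new features, patch otherwise."""
    major, minor, patch = (int(p) for p in current.split("."))
    if breaking:
        return f"{major + 1}.0.0"
    if new_features:
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

# A release packed with new features but no breaking changes stays a minor bump:
print(next_version("1.4.2", breaking=False, new_features=True))   # 1.5.0
# A tiny release that breaks backwards compatibility still needs a major bump:
print(next_version("1.5.0", breaking=True, new_features=False))   # 2.0.0
```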
there was more time for adventurous people to run pre-release versions, and thus better testing
Again, by experience, this is assuming a lot.
Well, usually the opposite happens. People make many releases and outsource the testing to unsuspecting users.
This is IMHO fine if you clearly mark these releases as release candidates or such, so that people can make their own risk judgement. But usually that isn’t the case and one minor version looks like any other unless you have a closer look at the actual changes in the code.
As long as updates work between any two versions, or there’s a clear upgrade path, I don’t really care. I don’t update my services on any particular schedule, so it doesn’t matter much to me.
However, you should have a mechanism to inform users of important updates, like patches to known exploits. Don’t spam me, but a nudge if I’m outside of some support window will probably get me to upgrade.
My upgrade cadence is probably every 3-6 months. I’ll do system upgrades more often, but I try to avoid breaking my docker stuff.
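A nudge mechanism like the one described above doesn’t need to be elaborate. Here is a rough sketch, assuming the app knows its own version and can fetch the latest release from some JSON endpoint; the URL, field name, and “versions behind” threshold are made up for illustration:

```python
# Rough sketch of a non-spammy update nudge. The endpoint URL, JSON field and
# threshold below are hypothetical, not any project's real API.
import json
import urllib.request

INSTALLED_VERSION = "1.4.2"
MINOR_VERSIONS_TOLERATED = 2  # only nudge once the install falls outside this window

def fetch_latest(url: str = "https://example.org/releases/latest.json") -> str:
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)["version"]

def needs_nudge(installed: str, latest: str) -> bool:
    inst_major, inst_minor, _ = (int(p) for p in installed.split("."))
    new_major, new_minor, _ = (int(p) for p in latest.split("."))
    if new_major > inst_major:
        return True  # a whole major version behind is always worth a nudge
    return (new_minor - inst_minor) > MINOR_VERSIONS_TOLERATED

if needs_nudge(INSTALLED_VERSION, fetch_latest()):
    print("A newer release with important fixes is available; consider upgrading.")
```

Pairing a check like this with a “security fix available” flag in the release metadata would let it stay quiet for routine feature releases and only speak up when it matters.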
I’ve been maintaining multiple release channels for most of my projects. I always have a nightly build and a dev build that I run manually or on every push. Actual versioned releases happen either directly after completing a milestone or when the release schedule calls for it.
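As a rough illustration of how such channels can be kept apart, here is a small sketch that maps a build trigger to a release/image tag; the trigger names and ref formats are assumptions for the example, not tied to any particular CI system:

```python
# Illustrative sketch: mapping CI triggers to release-channel tags.
# Trigger names and git ref formats are assumed for the example.
from datetime import date

def channel_tag(trigger: str, git_ref: str) -> str:
    """Decide which release channel a build belongs to."""
    if trigger == "schedule":                       # nightly cron build
        return f"nightly-{date.today():%Y%m%d}"
    if git_ref.startswith("refs/tags/v"):           # a proper versioned release
        return git_ref.removeprefix("refs/tags/")   # e.g. "v1.5.0"
    return "dev"                                    # every other push goes to dev

print(channel_tag("push", "refs/heads/main"))       # dev
print(channel_tag("push", "refs/tags/v1.5.0"))      # v1.5.0
```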
Security and bugfixes, after one or two rounds of testing by early adopters/key users. Preferably through some form of automatic updates.
New features and breaking changes, or anything that requires the end-user to pay attention, I’d say no more than 4 times a year, and using a non-automatic form of update. The hard thing is getting the user’s attention on the changes, and not just clicking next and then having a broken or insecure installation.
Most of my updates are automated so I don’t even notice. Release whenever you think it’s appropriate. Fixed a typo? Not worth a release. Critical security issue? Release immediately.
My background is in enterprise software, so that is obviously different than a desktop tool for individual use, but it informs my opinions.
In general it depends on the use (is it “production”-critical, etc.) as well as the update and distribution mechanisms.
I have several (mostly for Windows) FOSS projects I have stopped using or just rarely update because they require too many steps to update, and/or do so too often. Or they require a reboot. Some of them prompt for an update every time I start them. Feh.
That said, if there isn’t much friction like testing cycles or manual steps to update, I want faster updates.
Most of my self-hosted stuff falls into the category of getting updates via package managers or docker. Those are often seamless and do not require manual steps.
Fewer releases with more testing. I normally wait some time before upgrading (no matter the application), as I prefer stability over extra/new features that I may or may not use.
I think the better option is to have many releases that are clearly marked as beta-test releases or release candidates for those that don’t mind testing stuff.
I personally prefer consistent, smaller releases. It leaves less opportunity for big bugs to creep in alongside the smaller fixes and features.
I saw Agile mentioned here, but here’s another suggestion. Agile can be helpful in the right situations, but for solo devs/tiny teams I really recommend looking into Basecamp’s “Shape Up” method. It uses longer cycles vs. shorter sprints, with a cooldown period in between.
So in the case of OP, they could set a 6-week cycle and plan for things that can definitely be completed during that time period. Right at the end of the cycle you release. The goal is to finish before the cooldown so you have time to breathe and plan what to do for your next cycle: play around with a fun feature, learn about a new tool or technique you wanna try, organize your backlog, etc. You don’t want to spill tasks into the cooldown. Else it’s not a cooldown.
The online version of the Shape Up book is free and can be found at basecamp.com/shapeup.