#1495 Significant amount of updates when a Fedora release is made GA
Closed. Opened 8 years ago by jkurik.

Hi FESCo,

there is a thread on the devel@ mailing list [1] discussing the fact that a significant number of updates (500+ for F23) is ready to be applied at the GA date of a Fedora major release, immediately after installation. There are several proposals in the discussion trying to minimize the number of these updates.

[1] https://lists.fedoraproject.org/pipermail/devel/2015-November/216284.html

As I consider some of these proposals interesting and believe this falls within FESCo's purview, I am opening this ticket to have a discussion at a FESCo meeting and to clarify whether we want to do something about the current state or are fine with it as it is.

So, here is a summary of the discussion from devel@ mailing list:
== Proposal 1 ==
Link: https://lists.fedoraproject.org/pipermail/devel/2015-November/216289.html
'''Summary:''' In case of a slip, pull in ALL updates pending stable and restart the release process (freeze, spin RCs, test) from there.

== Proposal 2 ==
Link: https://lists.fedoraproject.org/pipermail/devel/2015-November/216292.html
'''Summary:''' Introduce a third party responsible for giving final approval on all updates, charged with reducing the size of the updates pipe dramatically.

== Proposal 3 ==
Link: https://lists.fedoraproject.org/pipermail/devel/2015-November/216292.html
'''Summary:''' Introduce service packs (or "update packs") -- all non-security updates get bundled together, QAed, and released every 4-8 weeks. (jkurik's note: for people using RHEL, this is the approach introduced in RHEL-7.1 and RHEL-6.7, called "Batch updates".)

Regards,
Jan


== Proposal 4 ==
Link: https://lists.fedoraproject.org/pipermail/devel/2015-November/216300.html
'''Summary:''' Leave things as they are, since deltarpms should mitigate most of the download cost. Alternatively, the netinstall ISO image can be used instead, so the newest versions of packages in the repositories are downloaded on demand at install time.
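
For reference, deltarpm use is a client-side dnf setting. A minimal sketch of the relevant /etc/dnf/dnf.conf options (option names as documented in dnf.conf(5); defaults vary between dnf releases):

```ini
[main]
# Fetch delta RPMs and rebuild the full packages locally when deltas are available.
deltarpm=true
# Only use a delta if it is no larger than this percentage of the full package size.
deltarpm_percentage=75
```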

Replying to [ticket:1495 jkurik]:

> Hi FESCo,
>
> there is a thread on the devel@ mailing list [1] discussing the fact that a significant number of updates (500+ for F23) is ready to be applied at the GA date of a Fedora major release, immediately after installation. There are several proposals in the discussion trying to minimize the number of these updates.

Oh good. We get to revisit this again. This is my favorite part of the cycle, where people propose the same ideas others have already proposed, as if they were the only ones ever to have thought of them.

> == Proposal 1 ==
> Link: https://lists.fedoraproject.org/pipermail/devel/2015-November/216289.html
> '''Summary:''' In case of a slip, pull in ALL updates pending stable and restart the release process (freeze, spin RCs, test) from there.

This has been proposed numerous times in the past, and we've consistently chosen not to do it on the advice of both rel-eng and QA. I don't believe we have any new arguments here.

> == Proposal 2 ==
> Link: https://lists.fedoraproject.org/pipermail/devel/2015-November/216292.html
> '''Summary:''' Introduce a third party responsible for giving final approval on all updates, charged with reducing the size of the updates pipe dramatically.

This would amount to enforcement and overriding maintainers, two things we traditionally try to avoid. It is worth discussing (again), but there will be both backlash and a significant time investment on the part of the third party to consider.

> == Proposal 3 ==
> Link: https://lists.fedoraproject.org/pipermail/devel/2015-November/216292.html
> '''Summary:''' Introduce service packs (or "update packs") -- all non-security updates get bundled together, QAed, and released every 4-8 weeks. (jkurik's note: for people using RHEL, this is the approach introduced in RHEL-7.1 and RHEL-6.7, called "Batch updates".)

This (or a very similar version of it) was proposed by Tom Callaway back at FUDCon Blacksburg. The issue here is that, from what I understand, our QA team and process cannot scale to accomplish this. That team uses the time after GA to focus on improving their test cases and infrastructure, in the hope that marathon hero testing close to GA becomes unnecessary. That isn't the case yet, but it does seem to be improving.

So we have manpower and coordination issues with this proposal. Also, this won't reduce the download size or types of the updates, just the frequency. It is "one-big-lump" vs. "constant-stream".

Replying to [comment:2 jwboyer]:

> This (or a very similar version of it) was proposed by Tom Callaway back at FUDCon Blacksburg. The issue here is that, from what I understand, our QA team and process cannot scale to accomplish this. That team uses the time after GA to focus on improving their test cases and infrastructure, in the hope that marathon hero testing close to GA becomes unnecessary. That isn't the case yet, but it does seem to be improving.

Simply batching things and not doing any special testing on the batch wouldn't be more QA load, would it? It's only more QA work if we promise that the batch is tested better than it is now. But it seems to me that even without that testing, it's at least not worse, either. So it can be one step forward, and then QA can be built up around that, moving us more steps.

> So we have manpower and coordination issues with this proposal. Also, this won't reduce the download size or types of the updates, just the frequency. It is "one-big-lump" vs. "constant-stream".

It'd be nice to have some statistics. If some things are revving very frequently, it might actually reduce the total.

Replying to [comment:3 mattdm]:

> Replying to [comment:2 jwboyer]:

> > This (or a very similar version of it) was proposed by Tom Callaway back at FUDCon Blacksburg. The issue here is that, from what I understand, our QA team and process cannot scale to accomplish this. That team uses the time after GA to focus on improving their test cases and infrastructure, in the hope that marathon hero testing close to GA becomes unnecessary. That isn't the case yet, but it does seem to be improving.

> Simply batching things and not doing any special testing on the batch wouldn't be more QA load, would it? It's only more QA work if we promise that the batch is tested better than it is now. But it seems to me that even without that testing, it's at least not worse, either. So it can be one step forward, and then QA can be built up around that, moving us more steps.

The proposal called for a curated and QA'd set of updates. So if we aren't going for a curated "service pack", I don't think we should call it a service pack. It could be a "monthly update bundle" or something. Also, that doesn't actually solve the "OMG, half a gig of updates" problem (in fact it makes it slightly more frequent), and it also overrides maintainer expectations.

IIRC, Tom's original proposal had different "channels", or levels, of updates. There was the firehose, which is what we have today. Then there was a monthly level, which is more like the bundle you're talking about. Then there was a point release, which is what Proposal 3 is suggesting. All of them have downsides, and each level is somewhat more work to accomplish.

Again, I'm not suggesting we shouldn't revisit it, but we also shouldn't pretend we haven't already talked about these ideas several times.

> > So we have manpower and coordination issues with this proposal. Also, this won't reduce the download size or types of the updates, just the frequency. It is "one-big-lump" vs. "constant-stream".

> It'd be nice to have some statistics. If some things are revving very frequently, it might actually reduce the total.

The problem with all of these proposals is that maintainers do whatever they want with their package updates, and always have. Some of them actively want to keep things as up to date as possible. Some of them simply file fire-and-forget updates in the hope of keeping the bug count down (or simply because a new release exists). There's no way to gather statistics that separate all the reasons why something was revved.

Replying to [comment:4 jwboyer]:

> The proposal called for a curated and QA'd set of updates. So if we aren't going for a curated "service pack", I don't think we should call it a service pack. It could be a "monthly update bundle" or something.

Agreed.

> IIRC, Tom's original proposal had different "channels", or levels, of updates. There was the firehose, which is what we have today. Then there was a monthly level, which is more like the bundle you're talking about. Then there was a point release, which is what Proposal 3 is suggesting. All of them have downsides, and each level is somewhat more work to accomplish.

The point-release proposal was separate -- that one was actually about replacing the current six-month releases and then having full releases every couple of years. (I still think that's an interesting idea too... but it's separate, and also a lot more controversial.)

> Again, I'm not suggesting we shouldn't revisit it, but we also shouldn't pretend we haven't already talked about these ideas several times.

Yeah -- no need to start the discussion from scratch every time. :)

> > It'd be nice to have some statistics. If some things are revving very frequently, it might actually reduce the total.
>
> The problem with all of these proposals is that maintainers do whatever they want with their package updates, and always have. Some of them actively want to keep things as up to date as possible. Some of them simply file fire-and-forget updates in the hope of keeping the bug count down (or simply because a new release exists). There's no way to gather statistics that separate all the reasons why something was revved.

Well, we do at least have the bugfix / enhancement / security classification on updates -- not foolproof, but at least some separation.
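
As a rough illustration of how such statistics could be gathered, here is a minimal sketch that counts stable F23 updates per type through the Bodhi web API (assuming the /updates/ list endpoint, its releases/type/status/rows_per_page query parameters, and the total field in its JSON response behave as documented; none of this is verified here):

```python
"""Rough sketch: count stable F23 updates per type via the Bodhi API.

Assumes the public Bodhi instance and that the /updates/ list endpoint
accepts `releases`, `type`, `status` and `rows_per_page` query parameters
and reports a `total` field in its JSON response.
"""
import requests

BODHI_UPDATES = "https://bodhi.fedoraproject.org/updates/"


def count_updates(release, update_type):
    # Request a single row; only the reported total matters here.
    resp = requests.get(
        BODHI_UPDATES,
        params={"releases": release, "type": update_type,
                "status": "stable", "rows_per_page": 1},
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("total", 0)


if __name__ == "__main__":
    for update_type in ("security", "bugfix", "enhancement", "newpackage"):
        print("{}: {}".format(update_type, count_updates("F23", update_type)))
```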

I don't think any of the proposals (except continuing to do what we are doing) are tenable without a great deal more work and detail.

I'm open to changing our process, but I think this ticket is premature. The thread on the devel list was started yesterday, and we haven't even finished releasing F23 yet, so lots of people who might want to chime in have not done so yet.

At today's FESCo meeting, none of the first three proposals was considered especially viable, leaving the status quo of Proposal 4 as the default. Further proposals will happily be considered, but for now there's nothing actionable for FESCo here.
