Ticket #76 (closed enhancement: fixed)

Opened 4 years ago

Last modified 4 years ago

Tests grouping method and install source/repository cases

Reported by: rhe Owned by: rhe
Priority: major Milestone: Fedora 14
Component: Wiki Version:
Keywords: Cc: jlaska,kparal,robatino
Blocked By: Blocking:

Description

problem

Currently, most of the install source cases describe only the 'askmethod' way of specifying a source; the 'repo=' method is ignored.

analysis

Therefore, I suggest adding this method to the steps so that testers can choose either of them.

enhancement recommendation

I've updated the install source cases to offer testers two methods when executing them. This way the 'repo=' case can be covered. Comments are welcome. Thanks.
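For readers unfamiliar with the two methods, both are given at the installer boot prompt; a rough sketch (the URL is a placeholder, not a real mirror):

```text
# 'askmethod' way: anaconda asks interactively where the install source is
linux askmethod

# 'repo=' way: the package repository is named directly on the boot line
linux repo=http://example.com/fedora/releases/14/Fedora/x86_64/os/
```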

Change History

comment:1 Changed 4 years ago by jkeating

The problem I see here is that we would like to ensure that both code paths are used, that is the code path that is handled by "askmethod" and the code path that is handled by "repo=". These goals can be accomplished while also testing the act of installing over an http repo or an nfs repo or whatever. So in our matrix we need to ensure that both code paths to configure said repos are used.

comment:2 follow-up: ↓ 4 Changed 4 years ago by jlaska

Hurry, I really like that you are working to eliminate the lack of specificity and clarity in some of the tests. As Jesse points out, many of the test permutations map to specific use cases. With our current matrix, I find it hard to identify why specific tests are needed and how they impact different install use cases.

While this might not be specific to this ticket, I wonder if it would be helpful to group the tests much like Andre suggests in the F13 QA retrospective. For me, this helps clarify what use case is covered by different tests, helps clear up the setup conditions for a particular test, and has the benefit of lining up with Liam's install automation efforts.

What do you think about some type of matrix test grouping like the following?

Installation using boot.iso - All tests that are specific to booting and installing with a boot.iso.

Installation using DVD - All tests that are specific to booting and installing with a DVD.

Installation using CD - All tests that are specific to booting and installing with a CD.

  • Basically same as above - replace DVD tests with the CD versions

Installation using Live - All tests that are specific to booting and installing from a Live image.

  • Basically same as above

Installation using PXE images - All tests specific to booting the PXE images

  • Basically same as above
    • No mediakit tests
    • No local stage2/repo tests

General tests - Anything not yet specified, or tests that can be performed independently of boot and installation source methods.

  • Installation sources - hd, hdiso, nfs, nfsiso
  • Partitioning - a reduced set of partitioning tests
  • Kickstart delivery
  • Package set - default, minimal (if desired)
  • Recovery tests
    • updates=
    • traceback handling
  • Upgrades
  • User Interface - VNC, text-mode, cmdline, telnet (if still available)
  • Storage devices - a reduced set (likely dmraid and iscsi)

What I like about the "idea" above

  • Clearly articulates which test cases depend on different media, and which tests do not.
  • Tighter association between test and use case - When testing a DVD, there is a clear list of tests that are specific to just the DVD. If a failure happens, it's more apparent how that failure would impact the other tests, and the release criteria

Concerns:

  • What would this look like on the wiki? Multiple matrices/tables? Different wiki page for each use case?
  • Can we remove enough duplicate or unessential tests to offset the additional work required to track these new tests?

comment:3 follow-up: ↓ 5 Changed 4 years ago by robatino

Haven't thought about it much, but the Test Matrix wiki table is sortable by column, though it appears no attempt has been made to use this feature - see

http://en.wikipedia.org/wiki/Help:Sorting

Entries in several of the columns could be set up so that sorting by any of those columns is useful. For example, if each of the above groups has a common prefix in one of the columns, then sorting by that column would display each of these groups in consecutive rows. Right now the default ordering of the rows (the one corresponding to the order they appear in the source code) roughly corresponds to ordering by the Release Level column (though sorting causes the "Upgrade system/QA:Testcase_Preupgrade_from_older_release" row to move to the top, which is wrong).
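To make the suggestion concrete, here is a hypothetical wikitext fragment: a sortable table whose first column carries the group prefix, so clicking that column's sort icon clusters each group into consecutive rows (test names and levels are made up for illustration):

```wikitext
{| class="wikitable sortable"
! Group !! Test case !! Release Level
|-
| DVD || QA:Testcase_install_DVD || Alpha
|-
| General || QA:Testcase_Anaconda_Traceback || Beta
|-
| DVD || QA:Testcase_Mediakit_ISO_Size || Alpha
|}
```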

comment:4 in reply to: ↑ 2 Changed 4 years ago by rhe

Thanks for all your suggestions; I will consider them while redesigning the matrix.

Replying to jlaska:

<skip>

I like this idea! It's a bit frustrating when one finishes a whole installation but has to fill in results for several separate cases. I know auto-install testing is grouped this way, so it's also good to keep the manual tests consistent with it.

Concerns:

  • What would this look like on the wiki? Multiple matrices/tables? Different wiki page for each use case?

At first thought, I would like to arrange the above groups in a matrix, then put the cases for each group on separate pages reached by clicking each group. But Andre's advice is also a good idea. I need to think more about this and run some tests.

  • Can we remove enough duplicate or unessential tests to offset the additional work required to track these new tests?

Well, I think that if the grouping scheme changes, removing some duplicate or unessential tests is inevitable, but at first it's better to remove as few of them as possible to guarantee the testing quality. I will start preparing this grouping scheme soon and work out the details.

Thank you!

comment:5 in reply to: ↑ 3 Changed 4 years ago by rhe

Replying to robatino:

Haven't thought about it much, but the Test Matrix wiki table is sortable by column, though it appears no attempt has been made to use this feature - see

http://en.wikipedia.org/wiki/Help:Sorting

Entries in several of the columns could be set up so that sorting by any of those columns is useful. For example, if each of the above groups has a common prefix in one of the columns, then sorting by that column would display each of these groups in consecutive rows. Right now the default ordering of the rows (the one corresponding to the order they appear in the source code) roughly corresponds to ordering by the Release Level column (though sorting causes the "Upgrade system/QA:Testcase_Preupgrade_from_older_release" row to move to the top, which is wrong).

AFAIK, sorting can change the order in which the table is displayed by column, but the source itself would not actually change accordingly. So it doesn't help testers who are editing the matrix.

comment:6 follow-up: ↓ 7 Changed 4 years ago by rhe

I've made a very rough draft page for this grouping idea.

I know it's far from acceptable, and there are many separate tables. But that's my initial idea so far. Below are some basic points of my design:

  1. Tests are divided into six categories: boot.iso, DVD, CD, Live, PXE, and General tests.
  2. I put all matrices on the same page so that it's easy to view the results.
  3. I set partitioning, user interface, and variations as shared contribution matrices for the different media installations.
  4. I want to order the cases by the Test Area column instead of by priority. It's easier for testers to edit, and one can still sort the results by the priority column.


Please take a look at it, and any suggestions are welcome! Thanks! m( =∩王∩= )m

comment:7 in reply to: ↑ 6 ; follow-up: ↓ 8 Changed 4 years ago by jlaska

Replying to rhe:

Please take a look at it, and any suggestions are welcome! Thanks!

Nice mock-up Hurry! This helps visualize the proposed changes. I like the focus on 'usage scenarios'. I'm not sure if that's the right term, but what I mean is the six categories you've provided: boot.iso, DVD, CD, Live, PXE and General tests. Anything that affects the outcome of those usage scenarios, will be listed in the appropriate section. Everything else goes into the bucket as a 'variation'. Seems sensible.

With the many different wiki tables, in some ways, it looks similar to how we used to track test results on the wiki (http://fedoraproject.org/wiki/QA/TestResults/Fedora9Install/FinalRelease and http://fedoraproject.org/wiki/QA/TestResults/Fedora10Install/Final). However, with your proposal, there is a much better integration between the usage scenario (or use case) and the test. That seems like a positive change!

With regards to the table display format, I can't think of any immediate benefits to the different approaches. There is only so much we can do with wiki tables, right? Out of curiosity, I copied your draft and created 2 new pages to help me visualize how this might look under different approaches:

  1. Using a table for each variation -- https://fedoraproject.org/wiki/User:Jlaska/Draft
  2. Using a single table, with variation column -- https://fedoraproject.org/wiki/User:Jlaska/Draft3 -- note, I also removed the References column here. I'm curious if that helps us avoid duplicate <ref> tags in future wikis.

What do you think? Is this any better or worse?

comment:8 in reply to: ↑ 7 ; follow-ups: ↓ 9 ↓ 10 Changed 4 years ago by rhe

Replying to jlaska:

Replying to rhe: <skip> With regards to the table display format, I can't think of any immediate benefits to the different approaches. There is only so much we can do with wiki tables, right? Out of curiosity, I copied your draft and created 2 new pages to help me visualize how this might look under different approaches:

  1. Using a table for each variation -- https://fedoraproject.org/wiki/User:Jlaska/Draft

The advantage of this approach is that the number of matrices is smaller than in my method, and the division is very clear and comfortable: boot, DVD, CD, live, pxe, and variations.

But consider a general tester who has just finished a whole installation, say from boot.iso: it is convenient for him to edit only the boot.iso matrix, or at least to know where to add all the results directly from that matrix. That's why I added entries like partitioning and variations to the boot.iso matrix, though it's not a good way of displaying them. Also, I think the 'variations' part contains too much, so most of the time testers would have to contribute to that matrix.

  2. Using a single table, with variation column -- https://fedoraproject.org/wiki/User:Jlaska/Draft3 -- note, I also removed the References column here. I'm curious if that helps us avoid duplicate <ref> tags in future wikis.

It's similar to the method above, but integrates everything into one table. I think it's cool. :) But some cases, like QA:Testcase_Mediakit_ISO_Size, appearing many times in one table seems a little weird, especially when one sorts the cases by column. :)

I like the way you changed the 'references' part, very creative. :) I added some results to your draft, and you can see that references 1 and 2 are the same when using the key format. I think it would be better if these two could be combined into one reference, like reference 3.

I'll continue thinking about other approaches to this grouping idea. Advice from anyone is welcome. Thanks.

comment:9 in reply to: ↑ 8 Changed 4 years ago by jlaska

Replying to rhe:

I like the way you changed the 'references' part, very creative. :) I added some results to your draft, and you can see that references 1 and 2 are the same when using the key format. I think it would be better if these two could be combined into one reference, like reference 3.

Good suggestion. I modified Template:Result to achieve the desired result. That should now reuse references.
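The mechanism behind reusing a reference in MediaWiki is a named <ref> tag: the footnote body is written once, and later citations point at the name. A minimal sketch (the name and text are invented; the actual Template:Result change is not shown here):

```wikitext
First failing result.<ref name="bug123">Details of the failure, written once.</ref>

Second failing result, reusing the same footnote.<ref name="bug123" />

<references />
```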

comment:10 in reply to: ↑ 8 ; follow-ups: ↓ 11 ↓ 12 Changed 4 years ago by jlaska

Replying to rhe:

I'll continue thinking about other approaches to this grouping idea. Advice from anyone is welcome. Thanks.

Inspired by the format AdamW uses for Test Days, I was curious how it might look to have 2 large tables.

  1. A table listing all general tests where the media used is not critical. Basically the same format we used for F-13.
  2. Another table listing the tests that are specific to different media. I'm adopting a similar format to that used in Test Days.

For a quick example of table#2, check out User:Jlaska/Draft#Another_Approach

comment:11 in reply to: ↑ 10 Changed 4 years ago by rhe

Replying to jlaska:

Replying to rhe:

I'll continue thinking about other approaches to this grouping idea. Advice from anyone is welcome. Thanks.

Inspired by the format AdamW uses for Test Days, I was curious how it might look to have 2 large tables.

  1. A table listing all general tests where the media used is not critical. Basically the same format we used for F-13.
  2. Another table listing the tests that are specific to different media. I'm adopting a similar format to that used in Test Days.

For a quick example of table#2, check out User:Jlaska/Draft#Another_Approach

I thought about this approach before when I recalled Is_anaconda_broken_roadmap. I feel comfortable with how it looks, but one big problem is that editing the results would become more difficult for testers, since there are 10 cells to fill in per row. Do you think testers can adapt to it?

Besides, I further divided this approach into two: App1 and App2. As you can see, App1 combines all test cases of one test area into "one case", which makes the table more concise, while App2 (unfinished) looks too grey, like a puzzle, since often only one medium is applicable to a case.

comment:12 in reply to: ↑ 10 Changed 4 years ago by rhe

I transposed the matrix of User:Jlaska/Draft#Another_Approach, and got:

The advantage of this method is that each row is a complete installation for each medium, so it is easy for testers to enter results continuously after they finish an install.

comment:13 follow-up: ↓ 14 Changed 4 years ago by rhe

It would be convenient if we could have collapsing tables on the wiki, like http://en.wikipedia.org/wiki/Help:Collapsing. But a JS file is needed, and I don't know whether that is allowed on the Fedora wiki. Does anyone know of a way to achieve it?

comment:14 in reply to: ↑ 13 Changed 4 years ago by rhe

Replying to rhe:

It would be convenient if we could have collapsing tables on the wiki, like http://en.wikipedia.org/wiki/Help:Collapsing. But a JS file is needed, and I don't know whether that is allowed on the Fedora wiki. Does anyone know of a way to achieve it?

I sent this question to the wiki@ and websites@ lists, and thanks to Ian and Liam, the collapsing-tables code has been added to the Fedora wiki's Common.js. Haha, what a nice function. Thanks.
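With that Common.js support in place, making a wiki table collapsible is usually just a matter of adding classes on the table element; a sketch assuming the classic 'collapsible'/'collapsed' class names used by Wikipedia-style Common.js scripts (row contents are made up):

```wikitext
{| class="wikitable collapsible collapsed"
! colspan="2" | Installation using DVD
|-
| QA:Testcase_install_DVD || result goes here
|-
| QA:Testcase_Mediakit_ISO_Size || result goes here
|}
```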

comment:15 Changed 4 years ago by rhe

Based on collapsing tables, I further created two drafts: draft1, and draft2.

For draft1, I'm pretty satisfied with the appearance, but I also have two main concerns about it:

  1. It's not sortable, since sorting icons cannot be placed on the second header row of a table. (I've read some references about this and also asked on #mediawiki.)
  2. Editing could be tough with the tables put together like this. (Putting the tables on subpages would give each one an [edit] link, but that just becomes more complex.)

Therefore, I created draft2. Though it's not as pretty as draft1, the above problems are solved.

Btw, besides the matrices, other parts like the "key" can also be made collapsible.

What does everyone think of these two pages? Thanks!

comment:16 follow-up: ↓ 17 Changed 4 years ago by jlaska

Great, nice work on the collapsible tables. I really like draft2, for the same reasons you note above. Having multiple tables for each usage scenario seems much more straightforward than the overcomplicated layout proposed at User:Jlaska/Draft#Another_Approach.

Some minor questions I was trying to answer, not sure if anyone else has ideas

  • Is there a way to have an "Expand/Collapse all"?
  • I wonder if there is a way to have an overall test status for each group even when the table is collapsed? I can't find any hints at solutions, other than devising custom CSS rules.

Regardless, this seems to satisfy the need for organizing the tests by usage scenario, without cluttering up the wiki too much. I like it!

comment:17 in reply to: ↑ 16 Changed 4 years ago by rhe

Replying to jlaska:

Great, nice work on the collapsible tables. I really like draft2, for the same reasons you note above. Having multiple tables for each usage scenario seems much more straightforward than the overcomplicated layout proposed at User:Jlaska/Draft#Another_Approach.

Some minor questions I was trying to answer, not sure if anyone else has ideas

I asked this in the mediawiki and websites channels, but got no response. It seems it's not an easy question to answer. :)

  • I wonder if there is a way to have an overall test status for each group even when the table is collapsed? I can't find any hints at solutions, other than devising custom CSS rules.

I think so. Will do more research about these two ideas when I have time.

Regardless, this seems to satisfy the need for organizing the tests by usage scenario, without cluttering up the wiki too much. I like it!

Liam is also happy with it. Then I'm gonna adopt this style into F14 template. Thank you!

comment:18 follow-up: ↓ 19 Changed 4 years ago by rhe

Back to the topic, after this new grouping method, I find it's hard to define installation source tests.

First, an install source generally consists of two parts: 1. install.img (stage2=), and 2. the package repository (repo=). So far we have no tests focusing on each of them specifically, but we do have: 1. install source tests (askmethod, which specifies one source for both install.img and the package repo), and 2. additional repository tests (adding another package repo: http, nfs, CD/DVD, and mirrorlist).

Initially I planned to create new install.img source tests (stage2=) and to modify the existing install source tests into package repository tests (repo=). But then I didn't know what to name the "askmethod=" tests. Besides, there are already 9 install source tests (askmethod), so I don't think it's nice or realistic to add 9 install.img source tests and 9 package repository tests on top of these.

Currently the "stage2=" kernel parameter is not functional or supported (Liam reported a bug regarding it), so it's not very useful to separate install.img tests from the install source tests. Therefore, my suggestion is to keep the old install source tests (askmethod=) and modify the additional repo tests so that they use either repo= or the manual repo-adding method.
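To summarize the distinction drawn above, a boot-prompt sketch of the three options (server names and paths are placeholders; stage2= is shown only for contrast, since it was reported broken at the time):

```text
# askmethod: one interactively chosen source supplies both install.img and packages
linux askmethod

# repo=: selects only the package repository
linux repo=nfs:server.example.com:/srv/fedora/os/

# stage2=: would select only the install.img location (non-functional at the time)
linux stage2=http://server.example.com/fedora/images/install.img
```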

Please refer to draft template for details. Thanks.

comment:19 in reply to: ↑ 18 ; follow-up: ↓ 20 Changed 4 years ago by jlaska

Replying to rhe:

Currently the "stage2=" kernel parameter is not functional or supported (Liam reported a bug regarding it), so it's not very useful to separate install.img tests from the install source tests. Therefore, my suggestion is to keep the old install source tests (askmethod=) and modify the additional repo tests so that they use either repo= or the manual repo-adding method.

Please refer to draft template for details. Thanks.

Good suggestion, I like this idea.

The only addition I'd offer is including a repository test in each install usage scenario (boot.iso, pxe, cd, DVD, live). This test would confirm that the default repository method works for each install usage scenario. I've also added a network-based repository test to the CD and DVD usage scenarios, since enabling the network on physical media installs is often problematic.

I've updated my draft to provide an example. I'm not sure I have the default repository correct for the boot.iso (repo=http) and pxeboot (repo=mirrorlist) cases, but hopefully it conveys the idea.

comment:20 in reply to: ↑ 19 ; follow-up: ↓ 21 Changed 4 years ago by rhe

Replying to jlaska:

<skip>

The only addition I'd offer is including a repository test in each install usage scenario (boot.iso, pxe, cd, DVD, live). This test would confirm that the default repository method works for each install usage scenario.

Thanks James, I see what you mean, but the current install source tests already cover the default repository tests.

For InstallSourceDvd, CD, live image, if one follows its steps, all the sources including repo should be from that disk.

For pxeboot, if one uses askmethod= to specify an install source, both install.img and the default package repo will be pointed at that source.

Boot.iso is a little unique, the common installation is to use disk install.img and repo=http://. So I combined InstallSourceBootiso and Testcase Additional Http Repository together in boot.iso usage scenario.

In all, what I mean is that the install source (aka askmethod=) includes the package repository. Please correct me if I misunderstood anything.

I've also added a network-based repository test to the CD and DVD usage scenarios, since enabling the network on physical media installs is often problematic.

Yeah, agree.

comment:21 in reply to: ↑ 20 Changed 4 years ago by jlaska

Replying to rhe:

Thanks James, I see what you mean, but the current install source tests already cover the default repository tests.

Oh geez, you are right. I missed that :(

For InstallSourceDvd, CD, live image, if one follows its steps, all the sources including repo should be from that disk.

For pxeboot, if one uses askmethod= to specify an install source, both install.img and the default package repo will be pointed at that source.

Boot.iso is a little unique, the common installation is to use disk install.img and repo=http://. So I combined InstallSourceBootiso and Testcase Additional Http Repository together in boot.iso usage scenario.

In all, what I mean is that the install source (aka askmethod=) includes the package repository. Please correct me if I misunderstood anything.

I understand now, thanks.

comment:22 Changed 4 years ago by rhe

  • Summary changed from Use two methods to specify install source to Tests grouping method and install source/repository cases

I've removed the 'repo=' method from the install source cases and made them specific to 'askmethod='. Instead of adding 'repo=' to the additional repository tests, I created 3 new tests: 1. QA:Testcase_Http_Repository 2. QA:Testcase_Ftp_Repository 3. QA:Testcase_Nfs_Repository. Please refer to the matrix.

Hopefully this way the tests are clearer and both code paths are covered, as Jesse suggested.

comment:23 Changed 4 years ago by rhe

  • Status changed from new to closed
  • Resolution set to fixed

The new grouping method has been adopted in the F-14 install results template. Besides, the install source cases are now specific to "askmethod=", while 3 repository tests are used to test the "repo=" method, so the issues have been resolved. Many thanks for the help from jkeating and jlaska. Feel free to reopen this ticket if anything is still improper.

Note: See TracTickets for help on using tickets.