#724 Request for host (xen guest?) for prototype instance of "rpmgrok"
Closed: Fixed | Opened 15 years ago by dmalcolm.

I'm looking for hosting for rpmgrok (see https://fedorahosted.org/rpmgrok/ ), a TurboGears app.

I need a public-facing host (or xen guest) to install it on, initially as a proof-of-concept. I hope to use this to demonstrate the project and its benefits to Fedora (and thus justify further hardware).

Not sure of the specs needed. The prototype machine will be doing triple duty as:
- database (postgres; in my own tests so far I've had 70 million rows in the largest table, so this could grow large)
- web server (TurboGears, talking to database)
- worker (performs jobs for the server: unpacking RPMs, running scripts on the payloads, and uploading results to the server via XML-RPC)

(The code is architected so that these roles can be split across separate machines, with multiple workers, but that's for the future.)
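The worker-to-server upload described above can be sketched with Python's stdlib XML-RPC support. Note this is a minimal illustration only: the function name `submit_result` and the payload shape are invented here, not rpmgrok's actual API.

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

# Server side (illustrative): collect worker results keyed by package name.
results = {}

def submit_result(package, payload):
    """Record one worker's analysis results for a package."""
    results[package] = payload
    return True

# Bind to an ephemeral port so the sketch is self-contained and runnable.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(submit_result)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Worker side: after unpacking an RPM and running scripts on its payload,
# upload the results to the server over XML-RPC.
proxy = xmlrpc.client.ServerProxy(f"http://127.0.0.1:{port}/")
ok = proxy.submit_result("example-pkg", {"rpmlint_warnings": 3, "setuid_files": []})

server.shutdown()
```

Because the transport is plain XML-RPC, splitting the worker onto a separate machine later only changes the URL the proxy points at.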

For a proof-of-concept, I'm guessing the following would suffice:
- 1 GB of RAM (though more would be better)
- 30 GB of disk space (ditto)
- RHEL-5 OS
- virtualized hw is ok
- SSH access based on my FAS account

I would handle the rest of the install/config (though interested volunteers would be very welcome!)

Thanks


Some questions to answer:

  1. What's this for?
  2. Who's the target user?

You'll need a sponsor.

What's it for?
- tracks package info that isn't easy to get at from RPM metadata alone, e.g. the symbol dependency graph, metrics on source data, key/value pairs from .desktop files, etc.
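The .desktop files mentioned above are INI-style, so their key/value pairs can be harvested with stdlib tooling; the sample entry below is made up for illustration.

```python
import configparser

# A made-up desktop entry; real ones live under /usr/share/applications.
desktop_entry = """\
[Desktop Entry]
Name=Example App
Exec=example-app %U
Categories=Utility;
"""

# interpolation=None is needed so the "%U" Exec placeholder isn't treated
# as configparser interpolation syntax.
parser = configparser.ConfigParser(interpolation=None)
parser.read_string(desktop_entry)

# configparser lowercases option names by default.
pairs = dict(parser["Desktop Entry"])
```

Each package's harvested pairs could then be uploaded to the server like any other worker result.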

Has some interaction with the package database, but they don't overlap, and given the differences in database usage patterns I think it makes sense to keep them separate.

Who's the target user?
- Fedora package maintainers:
  - what's using my packages? how are they using them? will it matter if I change this interface?
  - what packages are my packages using? ditto
- QA/release engineering:
  - track bugs per line of code across the entire distro
  - how many lines of code does each maintainer maintain?
  - what binaries were statically linked against libz?
  - what are all the setuid binaries in the entire distro?
  - find all rpmlint warnings for the entire distro
- Marketing:
  - look how big Fedora is
- etc.
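As a toy illustration of the distro-wide queries listed above (the setuid one, say), here is a sketch against a hypothetical `payload_files` table; the schema and data are invented, and sqlite stands in for the real postgres database.

```python
import sqlite3

# Hypothetical schema: one row per file found in an unpacked RPM payload.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE payload_files (
        package TEXT,
        path    TEXT,
        mode    INTEGER  -- st_mode bits recorded from the payload
    )
""")
conn.executemany(
    "INSERT INTO payload_files VALUES (?, ?, ?)",
    [
        ("util-linux", "/bin/mount",      0o104755),  # setuid root
        ("coreutils",  "/bin/ls",         0o100755),
        ("passwd",     "/usr/bin/passwd", 0o104755),  # setuid root
    ],
)

# "What are all the setuid binaries in the entire distro?"
SETUID = 0o4000
rows = conn.execute(
    "SELECT package, path FROM payload_files WHERE (mode & ?) != 0 ORDER BY path",
    (SETUID,),
).fetchall()
```

With 70 million rows in the largest table, the real queries would of course need indexes chosen to match these access patterns.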

Sponsorship: I'm CCing lmacken and a.badger in the hope of sponsorship.

I'll sponsor this.

I'm setting up a Xen guest, publictest7 with 1GB RAM and 40GB of disk. Let me know if we need to expand those values.

Okay, ssh dmalcolm@publictest7.fedoraproject.org should work.

You should have sudo on the box.

Feel free to ping me if there's more that needs to be done!

publictest7.fedoraproject.org is not responding to pings, presumably taken down as part of recent infrastructure work.

I'm working on a new version of rpmgrok that fixes many of the issues of the original release, and would like a public test instance again (assuming you've moved on from fire-fighting to longer-term work like this; the code isn't quite ready yet).

I have a backup of the database, so nothing on publictest7 needs saving (in fact, I'm making major schema changes, so am likely to simply regenerate the data)

BTW, the original publictest7 appeared to be running with SELinux disabled. I would prefer it if SELinux were enabled on the box.

on #fedora-admin on 2008-09-09:
<mmcgrath> dmalcolm: see if you can use pt15 - https://www.redhat.com/archives/fedora-infrastructure-list/2008-August/msg00200.html

Hey, I noticed that rpmgrok isn't running on publictest15 yet. You should have an account there and sudo access. Do you need help getting rpmgrok up and running?

publictest15 is a shared box but currently has 4GB of RAM and no one else has started using it yet. It has 44GB of free disk but the xen host has 357GB free. It will require downtime to add extra disk but we can add it when it becomes necessary.

In an email from spot, he requested an additional 75G, bringing the total to 105G. It's building now; it will be at publictest14.fedoraproject.org/80.239.156.210. I'll close the ticket when it is done.

pt14 has been around a while now.
