#3700 [RFE] Replicate failed login attribute (krbLoginFailedCount)
Closed: wontfix 3 years ago by abbra. Opened 10 years ago by mkosek.

Ticket was cloned from Red Hat Bugzilla (product Red Hat Enterprise Linux 6): Bug 971087

For some users, failed login attempts need to be replicated throughout the
environment, so that a user who has exhausted their allowed failed logins is
locked out of the whole environment rather than only out of the one IPA server
where the failed logins occurred.

Some thoughts about the design:
1. It makes sense to replicate the failure count rather than the account lock itself, and keep each server in charge of enforcing the lock as it is now (see the sketch after this list).
2. If we decide to replicate the failure count, we need to make sure it is replicated as fast as possible, so a special DS configuration might be required.
3. This feature should be optional and turned off by default.
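
A rough illustration of point 1 above: the lockout decision itself stays local to each server; only the counter it consults would become the replicated, topology-wide value. The attribute names krbLoginFailedCount and krbPwdMaxFailure are the ones IPA already uses; the helper below is purely illustrative.

```python
def account_locked(krb_login_failed_count: int, krb_pwd_max_failure: int) -> bool:
    """Lockout decision each server already makes on its own.

    Today krbLoginFailedCount is a server-local counter; with this RFE the
    same check would run against a replicated, topology-wide counter, so
    failures spread across several IPA servers still add up to a lockout.
    A max-failure limit of 0 conventionally means "never lock".
    """
    return krb_pwd_max_failure > 0 and krb_login_failed_count >= krb_pwd_max_failure


# Example: three failures on server A plus three on server B exceed a limit
# of five only if the counter is shared between the servers.
assert not account_locked(3, 5)   # each server alone stays below the limit
assert account_locked(3 + 3, 5)   # a replicated counter trips the lockout
```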

This is more of a peace-of-mind feature: we are only being asked to reduce the probability of a password being guessed. The same can be accomplished with a password policy that requires a bit more entropy in the password. It is understood that changing the password policy might be harder in some organisations, hence this RFE was created and is being considered.

What we can do is create a plugin that monitors a specific shared configuration tree and changes the replication agreements accordingly.
We have already discussed in the past keeping the authoritative agreement information in the replicated tree and letting a plugin adjust the agreements to match (a sketch follows the list below), as this would give us:
a. more flexibility to control ACIs
b. a single point of management for all replication agreements, so the topology can be changed from one server without having to connect directly to every other server.
c. changes that propagate automatically even to servers that are temporarily down (at startup the plugin would verify the data and adjust its agreements accordingly)
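
A minimal sketch of what such a plugin-style sync could look like, assuming python-ldap, a hypothetical replicated entry (SHARED_DN below) holding the authoritative fractional-replication exclude list, and the standard 389-DS attribute nsDS5ReplicatedAttributeList on the agreements under cn=config; the DN and socket names are illustrative only.

```python
import ldap

# Hypothetical replicated entry holding the authoritative settings; the real
# agreements live under cn=config, which is not replicated.
SHARED_DN = "cn=replication settings,cn=etc,dc=example,dc=com"
AGREEMENT_BASE = "cn=mapping tree,cn=config"


def sync_agreements(conn):
    # Read the desired exclude list from the shared, replicated entry.
    _dn, attrs = conn.search_s(SHARED_DN, ldap.SCOPE_BASE, "(objectClass=*)",
                               ["nsDS5ReplicatedAttributeList"])[0]
    desired = attrs.get("nsDS5ReplicatedAttributeList", [])

    # Make every local agreement's exclude list match the authoritative value.
    agreements = conn.search_s(AGREEMENT_BASE, ldap.SCOPE_SUBTREE,
                               "(objectClass=nsds5ReplicationAgreement)",
                               ["nsDS5ReplicatedAttributeList"])
    for agmt_dn, agmt_attrs in agreements:
        if agmt_attrs.get("nsDS5ReplicatedAttributeList", []) != desired:
            conn.modify_s(agmt_dn, [(ldap.MOD_REPLACE,
                                     "nsDS5ReplicatedAttributeList", desired)])


if __name__ == "__main__":
    # Privileged bind over LDAPI (instance socket name is illustrative);
    # changing entries under cn=config requires elevated rights.
    conn = ldap.initialize("ldapi://%2Fvar%2Frun%2Fslapd-EXAMPLE-COM.socket")
    conn.sasl_external_bind_s()
    sync_agreements(conn)
```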

What is required, though, is a graph solver to avoid the case where a change ends up causing a split-brain situation, and perhaps some way for all servers to report when a change has been applied, so that we know the topology is in a stable state.
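
The graph solver can start out as a simple connectivity check run before a proposed agreement change is committed. A minimal sketch, with servers as nodes and agreements as undirected edges (all names purely illustrative):

```python
from collections import defaultdict, deque


def is_connected(servers, agreements):
    """True if every server can reach every other one over the agreements."""
    graph = defaultdict(set)
    for a, b in agreements:
        graph[a].add(b)
        graph[b].add(a)
    if not servers:
        return True
    start = next(iter(servers))
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for peer in graph[node]:
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return seen == set(servers)


def safe_to_remove(servers, agreements, agreement):
    """Reject a topology change that would split the replication graph."""
    remaining = [a for a in agreements if a != agreement]
    return is_connected(servers, remaining)


servers = {"A", "B", "C", "D"}
agreements = [("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")]
assert safe_to_remove(servers, agreements, ("A", "B"))      # cycle keeps A reachable
assert not safe_to_remove(servers, agreements, ("C", "D"))  # would isolate D
```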

Why do we need to change replication agreements for this feature? I am a bit lost.

Because the list of attributes excluded from replication is stored in the replication agreements, which live in the unreplicated cn=config tree.
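
For illustration, a minimal read-only check of those agreements, assuming python-ldap and an LDAPI bind; nsDS5ReplicatedAttributeList and nsds5ReplicationAgreement are the standard 389-DS fractional-replication names, while the socket path is hypothetical.

```python
import ldap

conn = ldap.initialize("ldapi://%2Fvar%2Frun%2Fslapd-EXAMPLE-COM.socket")
conn.sasl_external_bind_s()  # reading cn=config needs a privileged bind
for dn, attrs in conn.search_s("cn=mapping tree,cn=config", ldap.SCOPE_SUBTREE,
                               "(objectClass=nsds5ReplicationAgreement)",
                               ["nsDS5ReplicatedAttributeList"]):
    excludes = b" ".join(attrs.get("nsDS5ReplicatedAttributeList", [])).lower()
    print(dn, "excludes krbLoginFailedCount:", b"krbloginfailedcount" in excludes)
```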

Replying to [comment:4 rcritten]:

Because the list of attributes excluded from replication is stored in the replication agreements, which live in the unreplicated cn=config tree.

OK, now it makes more sense. I think I had a similar approach in mind; I would have just worded it differently.

3.4 development was shifted by one month, moving tickets to reflect reality better.

We decided to lower the priority of this ticket for the 3.4 release; it may get pushed out of 3.4 if not addressed.

Adjusting time plan - 3.4 development was postponed as we focused on 3.3.x testing and stabilization.

Moving unfinished November tickets to January.

This would require #4302 (scheduled for 4.1) to be completed first. Moving to a later release.

I filed #5970 requesting the underlying Topology capability to enable this feature.

Metadata Update from @mkosek:
- Issue assigned to someone
- Issue set to the milestone: FreeIPA 4.5 backlog

7 years ago

Metadata Update from @pvoborni:
- Issue close_status updated to: None
- Issue set to the milestone: Ticket Backlog (was: FreeIPA 4.5 backlog)

6 years ago

Closing as the original bugzilla was closed WONTFIX.

Metadata Update from @abbra:
- Issue close_status updated to: wontfix
- Issue status updated to: Closed (was: Open)

3 years ago
