The Vagrantfile is configured so that all VMs are created first, and Ansible then runs once at the end with `limit = "all"`. Provisioning therefore happens in a single Ansible run that covers all nodes in parallel, rather than provisioning each VM sequentially, which speeds up the setup.
This project automates the deployment of a FreeIPA identity management cluster consisting of:
- 1 Management Node (`mgmt`) - FreeIPA server with integrated DNS
- 2 Compute Nodes (`compute1`, `compute2`) - FreeIPA clients

Key features:

- Centralized User Management - HPC users and groups
- Kerberos Authentication - Single sign-on across all nodes
- Integrated DNS - Custom domain resolution (`hpc.lab`)
- SSSD Integration - Seamless user lookup and authentication
- Automated Infrastructure - VMs provisioned with Vagrant
- Modular Ansible Roles - Reusable and maintainable code
- Production-Ready - Proper firewall, DNS, and security configuration
- HPC Focused - Pre-configured users and groups for compute clusters
- Cross-Node Authentication - Users can log in to any node
- Integrated Testing - Comprehensive test suite included
```
FreeIPA/
├── Vagrantfile                 # VM definitions and provisioning
├── ansible.cfg                 # Ansible configuration
├── site.yml                    # Main playbook orchestrating all roles
├── requirements.yml            # Ansible collection dependencies
├── test_freipa.sh              # Simple authentication test script
├── README.md                   # This documentation
│
├── inventory/
│   └── hosts                   # Ansible inventory file
│
├── group_vars/                 # Group-specific variables
│   ├── all.yml                 # Variables for all hosts
│   ├── ipa_server.yml          # FreeIPA server configuration
│   └── ipa_clients.yml         # FreeIPA client configuration
│
└── roles/                      # Ansible roles directory
    ├── common/                 # Base system configuration
    │   ├── tasks/main.yml      # Common setup tasks
    │   ├── handlers/main.yml   # Service restart handlers
    │   ├── vars/main.yml       # Common package definitions
    │   └── templates/hosts.j2  # /etc/hosts template
    │
    ├── freeipa-server/         # FreeIPA server role
    │   ├── tasks/
    │   │   ├── main.yml        # Main server tasks
    │   │   ├── install.yml     # IPA server installation
    │   │   └── dns.yml         # DNS records configuration
    │   ├── handlers/main.yml   # IPA service handlers
    │   ├── vars/main.yml       # Server package definitions
    │   └── defaults/main.yml   # Default configuration values
    │
    ├── freeipa-client/         # FreeIPA client role
    │   ├── tasks/
    │   │   ├── main.yml        # Main client tasks
    │   │   └── join.yml        # Domain join operations
    │   ├── handlers/main.yml   # SSSD service handlers
    │   ├── vars/main.yml       # Client package definitions
    │   └── defaults/main.yml   # Default client configuration
    │
    └── freeipa-users/          # User management role
        ├── tasks/
        │   ├── main.yml        # Main user management tasks
        │   ├── groups.yml      # Group creation tasks
        │   └── users.yml       # User creation tasks
        └── vars/main.yml       # User and group definitions
```
- VirtualBox installed
- Vagrant installed
- Ansible installed (2.9+)
- At least 8GB RAM available for VMs
```bash
git clone <repository-url>
cd FreeIPA

# Install the required Ansible collections (requirements.yml is sketched below)
ansible-galaxy collection install -r requirements.yml
# or directly: ansible-galaxy collection install freeipa.ansible_freeipa

# Create and provision the cluster
vagrant up
```

This will:
- Create 3 VMs (1 server + 2 clients)
- Install and configure FreeIPA server with DNS
- Enroll clients to the domain
- Create HPC users and groups
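For reference, requirements.yml presumably just pins the FreeIPA collection used by the roles (assumed contents, matching the alternative install command above):

```yaml
# requirements.yml -- assumed minimal contents
collections:
  - name: freeipa.ansible_freeipa
```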
```bash
chmod +x test_freipa.sh
./test_freipa.sh
```

| Component | Value |
|---|---|
| Domain | hpc.lab |
| Realm | HPC.LAB |
| Admin Password | Admin123! |
| Directory Manager Password | Directory123! |
| Test Users | hpcuser1, hpcuser2 |
| Test Password | TempPass123 |
| Node | IP Address | RAM | Purpose |
|---|---|---|---|
| mgmt | 192.168.56.10 | 4GB | FreeIPA Server + DNS |
| compute1 | 192.168.56.11 | 2GB | Compute Node / Client |
| compute2 | 192.168.56.12 | 2GB | Compute Node / Client |
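To show how the playbook targets these machines, here is a hypothetical inventory in YAML syntax; the real inventory/hosts file may use INI, and the group names `ipa_server` / `ipa_clients` are inferred from the group_vars file names:

```yaml
# Hypothetical inventory layout; the actual inventory/hosts may differ
all:
  children:
    ipa_server:
      hosts:
        mgmt:
          ansible_host: 192.168.56.10
    ipa_clients:
      hosts:
        compute1:
          ansible_host: 192.168.56.11
        compute2:
          ansible_host: 192.168.56.12
```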
Edit these files to customize your deployment:

- `group_vars/all.yml` - Domain, passwords, IP addresses (sketched below)
- `roles/freeipa-users/vars/main.yml` - Users and groups
- `Vagrantfile` - VM resources and network settings
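As an illustration, group_vars/all.yml presumably collects the values from the configuration table above; the variable names below are guesses, so check the actual file:

```yaml
# group_vars/all.yml -- illustrative variable names only
ipa_domain: hpc.lab
ipa_realm: HPC.LAB
ipa_admin_password: "Admin123!"
ipa_dm_password: "Directory123!"
ipa_server_ip: 192.168.56.10
```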
```bash
./test_freipa.sh
```

Tests:

- User existence on all nodes
- Valid password authentication
- Invalid password rejection
- User login capability
- Group membership
```bash
# SSH to compute node
vagrant ssh compute1

# Test user lookup
getent passwd hpcuser1

# Test authentication
kinit hpcuser1
# Password: TempPass123

# Test login
sudo su - hpcuser1
```

Vagrantfile:

- Defines 3 VMs with Rocky Linux 9
- Sets up private network (192.168.56.0/24)
- Triggers Ansible provisioning on last VM
site.yml:

- Main playbook orchestrating all roles
- Applies roles in order: common → server → clients → users (sketched below)
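A rough sketch of how site.yml might express that ordering, assuming the group names `ipa_server` and `ipa_clients`:

```yaml
# site.yml -- sketch of the play ordering, not the exact playbook
- name: Base configuration on every node
  hosts: all
  become: true
  roles:
    - common

- name: Install the FreeIPA server with DNS
  hosts: ipa_server
  become: true
  roles:
    - freeipa-server

- name: Enroll the compute nodes as clients
  hosts: ipa_clients
  become: true
  roles:
    - freeipa-client

- name: Create HPC users and groups
  hosts: ipa_server
  become: true
  roles:
    - freeipa-users
```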
ansible.cfg:

- Disables host key checking for the lab environment
- Sets inventory location and output format
roles/common:

- Base system configuration (timezone, hostname, packages)
- Firewall and time synchronization setup
- /etc/hosts file generation (representative tasks sketched below)
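A few tasks such a role might contain, using standard Ansible modules; the exact task list and the `common_packages` variable are assumptions:

```yaml
# roles/common/tasks/main.yml -- representative tasks only
- name: Install base packages
  ansible.builtin.dnf:
    name: "{{ common_packages }}"   # assumed variable from vars/main.yml
    state: present

- name: Render /etc/hosts with all cluster nodes
  ansible.builtin.template:
    src: hosts.j2
    dest: /etc/hosts
    owner: root
    group: root
    mode: "0644"

- name: Ensure chronyd is running for time synchronization
  ansible.builtin.service:
    name: chronyd
    state: started
    enabled: true
```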
roles/freeipa-server:

- FreeIPA server installation with integrated DNS
- Firewall port configuration
- DNS A and PTR record creation (see the sketch below)
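If the role leans on the freeipa.ansible_freeipa collection installed earlier, the installation and firewall steps could look roughly like this; the project may instead invoke ipa-server-install directly, and ansible.posix is an additional assumption:

```yaml
# Sketch only -- assumes the ansible.posix and freeipa.ansible_freeipa collections
- name: Open the firewall services FreeIPA needs
  ansible.posix.firewalld:
    service: "{{ item }}"
    permanent: true
    immediate: true
    state: enabled
  loop:
    - freeipa-ldap
    - freeipa-ldaps
    - dns

- name: Install the FreeIPA server with integrated DNS
  ansible.builtin.include_role:
    name: freeipa.ansible_freeipa.ipaserver
  vars:
    ipaserver_domain: hpc.lab
    ipaserver_realm: HPC.LAB
    ipaserver_setup_dns: true
    ipaadmin_password: "Admin123!"
    ipadm_password: "Directory123!"
```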
roles/freeipa-client:

- Client enrollment into the IPA domain
- SSSD configuration for user lookup
- Kerberos configuration (see the sketch below)
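Enrollment could similarly reuse the collection's ipaclient role; a hedged sketch of what join.yml might boil down to:

```yaml
# Sketch only -- the real join.yml may call ipa-client-install directly instead
- name: Join the node to the hpc.lab domain
  ansible.builtin.include_role:
    name: freeipa.ansible_freeipa.ipaclient
  vars:
    ipaclient_domain: hpc.lab
    ipaclient_realm: HPC.LAB
    ipaclient_mkhomedir: true       # create home directories at first login
    ipaclient_servers:
      - mgmt.hpc.lab
    ipaadmin_principal: admin
    ipaadmin_password: "Admin123!"
```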
roles/freeipa-users:

- Creates HPC-specific users and groups
- Configures group membership
- Sets initial passwords (see the sketch below)
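User and group management maps naturally onto the collection's ipauser and ipagroup modules; the sketch below mirrors the defaults from the configuration table, while the real definitions live in roles/freeipa-users/vars/main.yml (the `hpcusers` group name is invented):

```yaml
# Sketch only -- module names come from freeipa.ansible_freeipa; data is illustrative
- name: Create an HPC group
  freeipa.ansible_freeipa.ipagroup:
    ipaadmin_password: "Admin123!"
    name: hpcusers
    state: present

- name: Create the test users
  freeipa.ansible_freeipa.ipauser:
    ipaadmin_password: "Admin123!"
    name: "{{ item }}"
    first: HPC
    last: User
    password: "TempPass123"
    state: present
  loop:
    - hpcuser1
    - hpcuser2

- name: Add the users to the group
  freeipa.ansible_freeipa.ipagroup:
    ipaadmin_password: "Admin123!"
    name: hpcusers
    action: member
    user:
      - hpcuser1
      - hpcuser2
```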
- Kerberos Authentication - Strong authentication protocol
- TLS/SSL Encryption - All communications encrypted
- Firewall Configuration - Only necessary ports opened
- Certificate Management - Automated CA and certificate handling
- SSSD Integration - Secure user/group lookup caching
```bash
# Start all VMs
vagrant up

# Provision only (re-run Ansible)
vagrant provision

# SSH to a specific node
vagrant ssh mgmt
vagrant ssh compute1

# Stop all VMs
vagrant halt

# Destroy all VMs
vagrant destroy -f
```

```bash
# SSH to management node
vagrant ssh mgmt
# Authenticate as admin
kinit admin
# List users
ipa user-find
# List groups
ipa group-find
# Add new user
ipa user-add testuser --first Test --last User
# Check service status
sudo ipactl status
```

DNS Resolution Problems

```bash
# Check DNS on server
vagrant ssh mgmt -c "dig @localhost mgmt.hpc.lab"
# Verify DNS service
vagrant ssh mgmt -c "sudo systemctl status named"
```

Client Enrollment Failures

```bash
# Re-enroll a problem client: uninstall, then re-run provisioning
vagrant ssh compute1 -c "sudo ipa-client-install --uninstall"
vagrant provision
```

Authentication Issues

```bash
# Check SSSD status
vagrant ssh compute1 -c "sudo systemctl status sssd"
# Clear Kerberos cache
vagrant ssh compute1 -c "kdestroy -A"
```

Service Issues

```bash
# Restart all IPA services
vagrant ssh mgmt -c "sudo ipactl restart"
# Check logs
vagrant ssh mgmt -c "sudo journalctl -u ipa"
```