OpenStack affinity/anti-affinity groups

In a previous post, you learned how to segregate resources with availability zones and host aggregates; those methods allow the end user to specify where and on which types of resources their instances should run.

In this post, you will learn how to tell nova-scheduler where to place your instances based on two policies: instances in a server group either share the same hypervisor (affinity rule) or are kept on different hypervisors (anti-affinity rule), depending on your needs.

First, modify nova.conf so that nova-scheduler can filter based on affinity rules: add the ServerGroupAntiAffinityFilter and ServerGroupAffinityFilter filters to the scheduler's default filters option.

# vi /etc/nova/nova.conf

scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
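
Note that scheduler_default_filters was later deprecated in favor of the [filter_scheduler] section; if you run Ocata or newer, the equivalent setting would presumably look like the snippet below (check your release's documentation, since several of the filters listed above have since been removed upstream):

[filter_scheduler]
enabled_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter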

Restart nova-scheduler to apply the changes.

systemctl restart openstack-nova-scheduler

Once nova-scheduler has been restarted, create a server group with the affinity policy (all instances in this group will be launched on the same hypervisor).

nova server-group-create instancestogethergroup affinity
+--------------------------------------+------------------------+---------------+---------+----------+
| Id                                   | Name                   | Policies      | Members | Metadata |
+--------------------------------------+------------------------+---------------+---------+----------+
| 27abe662-c37e-431c-9715-0d2137fc5519 | instancestogethergroup | [u'affinity'] | []      | {}       |
+--------------------------------------+------------------------+---------------+---------+----------+
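
If your cloud ships the unified openstack client, the same group can be created with it as well; a sketch of the equivalent call, untested here, would be:

openstack server group create --policy affinity instancestogethergroup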

Now create two instances, adding the --hint group=GROUP-ID option to make them members of the group.

nova boot --image a6d7a606-f725-480a-9b1b-7b3ae39b93d4 --flavor m1.tiny --nic net-id=154da7a8-fa49-415e-9d35-c840b144a8df --hint group=27abe662-c37e-431c-9715-0d2137fc5519 affinity1
nova boot --image a6d7a606-f725-480a-9b1b-7b3ae39b93d4 --flavor m1.tiny --nic net-id=154da7a8-fa49-415e-9d35-c840b144a8df --hint group=27abe662-c37e-431c-9715-0d2137fc5519 affinity2
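
The unified client accepts the same scheduler hint; a sketch of the equivalent boot, reusing the image, flavor, network, and group IDs from this example, would be:

openstack server create --image a6d7a606-f725-480a-9b1b-7b3ae39b93d4 --flavor m1.tiny --nic net-id=154da7a8-fa49-415e-9d35-c840b144a8df --hint group=27abe662-c37e-431c-9715-0d2137fc5519 affinity1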

Ensure the instances are properly mapped to the group.

nova server-group-get 27abe662-c37e-431c-9715-0d2137fc5519 
+--------------------------------------+------------------------+---------------+------------------------------------------------------------------------------------+----------+
| Id                                   | Name                   | Policies      | Members                                                                            | Metadata |
+--------------------------------------+------------------------+---------------+------------------------------------------------------------------------------------+----------+
| 27abe662-c37e-431c-9715-0d2137fc5519 | instancestogethergroup | [u'affinity'] | [u'b8b72a0a-c981-430e-a909-13d23d928655', u'8affefff-0072-47e3-8d11-2ddf26e48b82'] | {}       |
+--------------------------------------+------------------------+---------------+------------------------------------------------------------------------------------+----------+

Once the instances are running, verify that they share the same hypervisor, as required by the affinity policy.

# nova show affinity1 | grep hypervisor_hostname
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute2az
# nova show affinity2 | grep hypervisor_hostname
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute2az  
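
With the unified client you could query just the hypervisor column instead of grepping (this needs the same admin credentials as nova show):

openstack server show affinity1 -c OS-EXT-SRV-ATTR:hypervisor_hostname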

Now create a group with the anti-affinity policy.

nova server-group-create farinstancesgroup anti-affinity
+--------------------------------------+-------------------+--------------------+---------+----------+
| Id                                   | Name              | Policies           | Members | Metadata |
+--------------------------------------+-------------------+--------------------+---------+----------+
| 988a9fd2-3a97-481e-b083-fee36b33009d | farinstancesgroup | [u'anti-affinity'] | []      | {}       |
+--------------------------------------+-------------------+--------------------+---------+----------+
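
As a side note, later Nova releases (Mitaka, API microversion 2.15) added soft-affinity and soft-anti-affinity policies, which treat placement as best effort instead of failing the boot when the rule cannot be satisfied. Creating such a group would look like this (the group name here is just an example):

nova server-group-create besteffortgroup soft-anti-affinity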

Launch two instances and attach them to the anti-affinity group.

nova boot --image a6d7a606-f725-480a-9b1b-7b3ae39b93d4 --flavor m1.tiny --nic net-id=154da7a8-fa49-415e-9d35-c840b144a8df --hint group=988a9fd2-3a97-481e-b083-fee36b33009d anti-affinity1
nova boot --image a6d7a606-f725-480a-9b1b-7b3ae39b93d4 --flavor m1.tiny --nic net-id=154da7a8-fa49-415e-9d35-c840b144a8df --hint group=988a9fd2-3a97-481e-b083-fee36b33009d anti-affinity2

Ensure the instances are in the anti-affinity group.

nova server-group-get 988a9fd2-3a97-481e-b083-fee36b33009d 
+--------------------------------------+-------------------+--------------------+------------------------------------------------------------------------------------+----------+
| Id                                   | Name              | Policies           | Members                                                                            | Metadata |
+--------------------------------------+-------------------+--------------------+------------------------------------------------------------------------------------+----------+
| 988a9fd2-3a97-481e-b083-fee36b33009d | farinstancesgroup | [u'anti-affinity'] | [u'cfb45193-9a7c-436f-ac2d-59a7a9a854ae', u'25dc8671-0c9a-4774-90cf-7394380f91ef'] | {}       |
+--------------------------------------+-------------------+--------------------+------------------------------------------------------------------------------------+----------+

Once the instances are running, verify that they are on different hypervisors, as required by the anti-affinity policy.

# nova show anti-affinity1 | grep hypervisor_hostname
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute2az
# nova show anti-affinity2 | grep hypervisor_hostname
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute1az   
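
Keep in mind that these are hard rules: if nova-scheduler cannot find a hypervisor that satisfies the policy (for example, a second instance in an anti-affinity group when only one compute node is available), the boot fails with a "No valid host was found" error. The fault field of the errored instance should show the details:

nova show INSTANCE-NAME | grep fault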

Regards, Eduardo Gonzalez
