I have set up OpenStack Yoga on Ubuntu 22.04. I went through every verification step after the installation, and everything worked fine. I have one controller and one compute node. On my controller I keep seeing this message:
==> /var/log/nova/nova-conductor.log <==
2022-11-28 08:35:58.338 76768 WARNING oslo_messaging._drivers.amqpdriver [req-9a1d29ba-756a-4a94-bef1-7c1caba6fb8d - - - - -] reply_284f5c12afcb4d0cb6504c70a01b458f doesn't exist, drop reply to 3357a913567c464fb48f7cfb47768a13: oslo_messaging.exceptions.MessageUndeliverable
2022-11-28 08:35:58.340 76768 ERROR oslo_messaging._drivers.amqpdriver [req-9a1d29ba-756a-4a94-bef1-7c1caba6fb8d - - - - -] The reply 3357a913567c464fb48f7cfb47768a13 failed to send after 60 seconds due to a missing queue (reply_284f5c12afcb4d0cb6504c70a01b458f). Abandoning...: oslo_messaging.exceptions.MessageUndeliverable
I am not sure how to fix this error.
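As far as I understand the message, nova-conductor tries to publish its RPC reply to the reply_… queue that the calling service created, but that queue no longer exists, so the reply is dropped after 60 seconds. To see which reply queues exist at any given moment (assuming the default vhost "/"), this can be checked with:

$ sudo rabbitmqctl list_queues name messages consumers | grep ^reply_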
This is my nova.conf:
$ sudo egrep -v '^#|^$' /etc/nova/nova.conf
[DEFAULT]
log_dir = /var/log/nova
lock_path = /var/lock/nova
state_path = /var/lib/nova
my_ip = 10.0.0.154
transport_url = rabbit://openstack:openstack@controller1:5672/
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller1/nova_api
[barbican]
[barbican_service_user]
[cache]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[cyborg]
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller1/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller1:9292
[guestfs]
[healthcheck]
[hyperv]
[image_cache]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
www_authenticate_uri = http://controller1:5000/
auth_url = http://controller1:5000/
memcached_servers = controller1:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova
[libvirt]
[metrics]
[mks]
[neutron]
auth_url = http://controller1:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
[notifications]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller1:5000/v3
username = placement
password = placement
[powervm]
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
[workarounds]
[wsgi]
[zvm]
[cells]
enable = False
[os_region_name]
openstack =
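I have not set anything under [oslo_messaging_rabbit], so all messaging timeouts are at their oslo.messaging defaults. These are the options I was considering tuning for this symptom; the values below are only illustrative, not what I currently run:

[DEFAULT]
# default is 60; the reply above was abandoned after 60 seconds
rpc_response_timeout = 120

[oslo_messaging_rabbit]
# default is 60; 0 would disable AMQP heartbeats entirely
heartbeat_timeout_threshold = 60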
Here is the RabbitMQ status:
$ sudo rabbitmqctl cluster_status
Cluster status of node rabbit@controller1 ...
Basics
Cluster name: rabbit@controller1
Disk Nodes
rabbit@controller1
Running Nodes
rabbit@controller1
Versions
rabbit@controller1: RabbitMQ 3.9.13 on Erlang 24.2.1
Maintenance status
Node: rabbit@controller1, status: not under maintenance
Alarms
(none)
Network Partitions
(none)
Listeners
Node: rabbit@controller1, interface: [::], port: 25672, protocol: clustering, purpose: inter-node and CLI tool communication
Node: rabbit@controller1, interface: [::], port: 5672, protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0
Feature flags
Flag: implicit_default_bindings, state: enabled
Flag: maintenance_mode_status, state: enabled
Flag: quorum_queue, state: enabled
Flag: stream_queue, state: enabled
Flag: user_limits, state: enabled
Flag: virtual_host_metadata, state: enabled
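If it helps, I can also post the current queue list, which would show whether any reply_ queues exist at the moment and whether they have consumers:

$ sudo rabbitmqctl list_queues name messages consumers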
Here are the policies:
$ sudo rabbitmqctl list_policies
Listing policies for vhost "/" ...
Here are the permissions:
$ sudo rabbitmqctl list_permissions
Listing permissions for vhost "/" ...
user configure write read
guest .* .* .*
openstack .* .* .*
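I can likewise list the open connections and consumers, to confirm that the Nova services are actually connected to the broker and consuming from their queues:

$ sudo rabbitmqctl list_connections user peer_host state
$ sudo rabbitmqctl list_consumers queue_name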
I stopped nova-conductor and then started it again. The following is logged:
$ tail -f /var/log/rabbitmq/rabbit*.log /var/log/nova/nova-*.log
This is the log after restarting rabbitmq:
There is no active firewall, and all services run on the same server.
$ sudo ufw status
Status: inactive
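If more diagnostics are useful, this is how I would confirm locally that the broker is listening on 5672 and reachable via the controller1 hostname (assuming netcat is installed):

$ ss -tnlp | grep 5672
$ nc -zv controller1 5672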