INFO nova.compute.manager: updating host status

Did you remember to source your novarc credentials before running the command? Otherwise I'm not sure why the euca connection fails. If you do have your novarc credentials sourced and it still isn't working, open the file and check the IP listed under EC2_URL.

Should I check for any specific ports where the controller needs to be listening?

Make sure that you can hit that IP from whichever machine you are running the euca commands on, and that it is the IP where you are running nova-api.

I had another controller setup, with which I could move forward.

Thanks for the suggestions. I am having the same issue. I followed the manual installation according to the Cactus document. If anyone has resolved this issue, any help would be appreciated.

[DEFAULT]
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
dhcpbridge_flagfile=/etc/nova/
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
libvirt_use_virtio_for_bridges=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/
enabled_apis=osapi_compute,metadata

[oslo_messaging_rabbit]
rabbit_host =
rabbit_userid = openstack
rabbit_password = openstack

[keystone_authtoken]
auth_uri =
auth_url =
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = nova

[vnc]
enabled = True
vncserver_listen =
vncserver_proxyclient_address =
novncproxy_base_url =
host =

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[neutron]
url =
auth_url =
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron

[cinder]
os_region_name = RegionOne

# /etc/nova/
[DEFAULT]
compute_driver=libvirt.
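One quick sanity check on a config like this is that every section the services expect is actually present. A sketch (the path /tmp/nova.conf and the trimmed sample content are just for illustration; the real file is /etc/nova/nova.conf):

```shell
# Write a deliberately incomplete sample config for the demonstration.
cat > /tmp/nova.conf <<'EOF'
[DEFAULT]
enabled_apis = osapi_compute,metadata
[oslo_messaging_rabbit]
rabbit_userid = openstack
EOF

# Report any expected section header that is missing from the file.
for s in DEFAULT oslo_messaging_rabbit keystone_authtoken; do
  grep -q "^\[$s\]" /tmp/nova.conf || echo "missing section: [$s]"
done
```

Run against the sample above, this reports the keystone_authtoken section as missing; against your real nova.conf it should report nothing.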


It appears that your volume node isn't able to communicate with the controller (you can check this by running 'cinder service-list').

When I run 'sudo rabbitmqctl list_connections' I get only "Listing connections ..." with no connections listed.

Looks like it is euca having trouble talking to the API.

You should make sure that the IP in the credentials you are using is the correct IP of your nova-api host, and that nova-api is running.

Another possibility is something's not quite right with your message queue settings.

The main thing here would again be name resolution, and that your message queue settings actually match up (i.e. the settings on the controller match those on your volume node).

    sys.exit(main())
  File "/usr/lib/python2.7/dist-packages/oslo/rootwrap/", line 107, in main
    filters = wrapper.load_filters(config.filters_path)
  File "/usr/lib/python2.7/dist-packages/oslo/rootwrap/", line 119, in load_filters
    for (name, value) in filterconfig.items("Filters"):
  File "/usr/lib/python2.7/Config", line 347, in items
    raise NoSectionError(section)
ConfigParser.
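That NoSectionError comes from a rootwrap .filters file that is missing its [Filters] section header, so ConfigParser finds nothing to iterate. A sketch of the check (the /tmp/rootwrap.d directory and sample files are invented for illustration; the real files live under e.g. /etc/nova/rootwrap.d):

```shell
# Create one valid and one deliberately broken sample filters file.
mkdir -p /tmp/rootwrap.d
printf '[Filters]\nkpartx: CommandFilter, kpartx, root\n' > /tmp/rootwrap.d/compute.filters
printf 'kpartx: CommandFilter, kpartx, root\n' > /tmp/rootwrap.d/broken.filters

# Flag any filters file whose [Filters] section header is missing --
# exactly the condition that triggers the NoSectionError above.
for f in /tmp/rootwrap.d/*.filters; do
  grep -q '^\[Filters\]' "$f" || echo "missing [Filters] section: $f"
done
```

Pointing the loop at your real rootwrap filter directories should name the offending file directly.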
