Hello everyone,
I have a problem with my pool and could use any help anyone can provide. The short of it is that I was having multipathing errors on my XenServer hosts and made a multipath config change, but it did not seem to take effect. Another thread discussed using xe-toolstack-restart to potentially force config changes to be re-read, so I ran that on the master. After that command, xapi on the master did not recover. The cause appears to be that xapi is waiting for a "dying" domain to finish dying.
Here are some commands and their output that may help shed some light. From the command line, this is what happens:
[root@ceres bin]# xe-toolstack-restart
Stopping xapi: cannot stop xapi: xapi is not running. [FAILED]
Starting xapi: .........................................................................failed to start xapi (daemon disappeared) [FAILED]
After looking at the logs, I came across a string of messages and errors in /var/log/xensource.log (which repeats after each subsequent attempt to run xe-toolstack-restart):
[20090512 20:03:26.936|debug|ceres|0 thread_zero|dbsync (update_env) D:7b6106530e3b|xenops] Domain 45 still exists (domid=45; uuid=deadbeef-dead-beef-dead-beef0000002d): waiting for it to disappear.
[20090512 20:03:26.936|debug|ceres|0 thread_zero|dbsync (update_env) D:7b6106530e3b|xenops] Domain 45 still exists (domid=45; uuid=deadbeef-dead-beef-dead-beef0000002d): waiting for it to disappear.
[20090512 20:03:26.936| warn|ceres|0 thread_zero|dbsync (update_env) D:7b6106530e3b|xenops] Domain stuck in dying state after 30s; resetting UUID to deadbeef-dead-beef-dead-beef0000002d
[cut several lines]
[20090512 20:03:26.937|debug|ceres|0 thread_zero||backtrace] Raised at pervasiveext.ml:13.22-25 -> xapi.ml:614.4-1023 ->
[20090512 20:03:26.937|debug|ceres|0 thread_zero||xapi] xapi top-level caught exception: INTERNAL_ERROR: [ Domain.Domain_stuck_in_dying_state(45) ]
[20090512 20:03:26.937|error|ceres|0 thread_zero||xapi] Caught exception at toplevel: 'Domain.Domain_stuck_in_dying_state(45)'
[20090512 20:03:26.937|debug|ceres|0 thread_zero||xapi] Raised at xapi.ml:782.98-101 -> xapi.ml:791.4-8 -> xapi.ml:797.15-16 -> xapi.ml:827.6-22 ->
[log ends after FAILED message]
So, it looks like xapi will not start since it's waiting on a "dying" domain.
If I run /opt/xensource/bin/list_domains, I see:
id | uuid | state
0 | 4a44bb60-d8fc-405a-8f79-fcbf0b566679 | R
9 | 14a33310-ce9c-52ae-ec7c-a7f13ed0288a | H
13 | 19c58c2a-c14c-7ffb-24ed-45d7097cdacd | R
22 | efec482b-89b5-f2f4-5620-32c8f87cda45 | B
23 | 254f52de-7594-fa7a-c6db-7e17766bb083 | B
40 | 58090db8-f37f-b9b2-972c-8dbf6b91bc74 | B
42 | 716cbd4d-1df7-2110-d677-e9e2b07e84c4 | B
45 | deadbeef-dead-beef-dead-beef0000002d | D B
So, domain 45 has an odd UUID. And if I look for a matching process for that domain, I do not see one:
[root@ceres bin]# ps aux|grep domain
root 2061 0.0 0.1 14960 1180 ? S May07 0:00 /usr/lib/xen/bin/vncterm -v 127.0.0.1:1 -x /local/domain/40/serial/0
136978 2077 0.0 0.1 15224 1324 ? S May07 0:00 /usr/lib/xen/bin/vncterm -v 127.0.0.1:1 -x /local/domain/40/serial/0
root 14846 0.0 0.2 5372 2140 ? Ss Apr04 0:00 /usr/lib/xen/bin/vncterm -v 127.0.0.1 -r -x /local/domain/0/serial/0 -c /usr/lib/xen/bin/dom0term.sh
root 15047 0.0 0.1 14960 1180 ? S Apr23 0:00 /usr/lib/xen/bin/vncterm -v 127.0.0.1:1 -x /local/domain/22/serial/0
136975 15088 0.0 0.1 15092 1212 ? S Apr23 0:00 /usr/lib/xen/bin/vncterm -v 127.0.0.1:1 -x /local/domain/22/serial/0
root 15386 0.0 0.1 14960 1180 ? S Apr23 0:00 /usr/lib/xen/bin/vncterm -v 127.0.0.1:1 -x /local/domain/23/serial/0
136980 15428 0.0 0.1 15092 1212 ? S Apr23 0:00 /usr/lib/xen/bin/vncterm -v 127.0.0.1:1 -x /local/domain/23/serial/0
root 19264 0.0 0.1 14956 1168 ? S Apr10 0:00 /usr/lib/xen/bin/vncterm -v 127.0.0.1:1 -x /local/domain/13/serial/0
136982 19280 0.0 0.2 16052 2176 ? S Apr10 0:00 /usr/lib/xen/bin/vncterm -v 127.0.0.1:1 -x /local/domain/13/serial/0
root 25906 0.0 0.1 14960 1180 ? S May08 0:00 /usr/lib/xen/bin/vncterm -v 127.0.0.1:1 -x /local/domain/42/serial/0
136983 25925 0.0 0.2 15468 1592 ? S May08 0:00 /usr/lib/xen/bin/vncterm -v 127.0.0.1:1 -x /local/domain/42/serial/0
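In case it helps anyone later, I also poked around xenstore to see whether anything was left for that domain. This is only a rough check (the xenstore-ls client is present in dom0 on my hosts, and the backend path layout below is the one the xensource.log messages use, but treat it as a sketch):
# Anything still registered in xenstore for domid 45?
xenstore-ls /local/domain/45
# Backend device state for the guest lives under dom0's tree,
# e.g. its disks (path layout as seen in the log messages):
xenstore-ls /local/domain/0/backend/vbd/45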
So, my question is: is there any way to clear this dying domain so that xapi can restart, assuming this dying domain is what's preventing xapi from restarting?
Help!
Thanks,
Ed
BTW, I obviously do not want to restart the master except as the very last resort.
#1 | Members | Posted 13 May 2009 - 03:18 AM
#2 | Citrix Employees | Posted 13 May 2009 - 06:46 AM
Edwin,
You may want to try the following:
* double-check the domain id
* run the command below:
> /opt/xensource/debug/xenops destroy_domain -domid 45
You can also try xenops shutdown_domain or hard_shutdown_domain if the above does not work.
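For clarity, the rough escalation order would be as follows (a sketch; I'm assuming shutdown_domain and hard_shutdown_domain take the same -domid flag as destroy_domain):
# 1. Double-check the domid first; destroying the wrong domain kills it.
/opt/xensource/bin/list_domains
# 2. Ask the domain to shut down cleanly...
# (assuming these take -domid like destroy_domain does)
/opt/xensource/debug/xenops shutdown_domain -domid 45
# 3. ...force it if that hangs...
/opt/xensource/debug/xenops hard_shutdown_domain -domid 45
# 4. ...and only then destroy it outright.
/opt/xensource/debug/xenops destroy_domain -domid 45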
Regards
Dmitry
#3 | Members | Posted 26 June 2009 - 11:01 AM
Dmitry,
if the domain still exists after a hard shutdown with xenops:
6 | DS H | 258418785908598 | deadbeef-dead-beef-dead-beef00000006
and I get
# /opt/xensource/debug/xenops destroy_domain -domid 6
Fatal error: exception Domain.Domain_stuck_in_dying_state(6)
is there still a way to revive xapi without rebooting the host?
Thank you!
Joern
#4 | Members | Posted 01 October 2009 - 01:37 PM
Hi Edwin, Joern,
Did you by any chance manage to solve this without rebooting the XenServer host? I've unfortunately encountered the same situation, and I didn't make many friends when I had to restart the server...
It really seems to me the VM is down (no process running, not even a zombie one), and I'm pretty sure there must be a way to make Xen understand this.
Kind regards.
#5 | Members | Posted 01 October 2009 - 04:19 PM
Hi Georges,
I wish I had good news for you. I finally resorted to rebooting the host. From what I remember, the VMs were in some inconsistent state after the reboot, and I had to do some xen magic to get them out of it; at least I didn't lose any VMs or data.
Edwin
#6 | Citrix Employees | Posted 06 October 2009 - 04:46 PM
Hi Georges,
Perhaps you need to reset the power state of the VM ...
xe vm-reset-powerstate vm=<Name of VM> force=true
Once that has been done, you should be able to start the VM. Another thing to try is to ensure the VM is shut down:
xe vm-shutdown force=true vm=<name of VM>
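Putting those together, a typical recovery sequence would look something like this (a sketch; substitute your VM's name label, and note that vm-reset-powerstate only corrects xapi's record of the power state, so use it only once you are sure the domain is really gone):
# Check what xapi currently believes:
xe vm-list name-label=<name of VM> params=uuid,power-state
# Make sure the VM really is down:
xe vm-shutdown force=true vm=<name of VM>
# Tell xapi the VM is halted:
xe vm-reset-powerstate vm=<name of VM> force=true
# Then start it again:
xe vm-start vm=<name of VM>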
Regards,
James Cannon
#7 | Citrix Employees | Posted 23 February 2010 - 08:18 PM
> iPlant wrote:
> id | uuid | state
> 0 | 4a44bb60-d8fc-405a-8f79-fcbf0b566679 | R
> 9 | 14a33310-ce9c-52ae-ec7c-a7f13ed0288a | H
> 13 | 19c58c2a-c14c-7ffb-24ed-45d7097cdacd | R
> 22 | efec482b-89b5-f2f4-5620-32c8f87cda45 | B
> 23 | 254f52de-7594-fa7a-c6db-7e17766bb083 | B
> 40 | 58090db8-f37f-b9b2-972c-8dbf6b91bc74 | B
> 42 | 716cbd4d-1df7-2110-d677-e9e2b07e84c4 | B
> 45 | deadbeef-dead-beef-dead-beef0000002d | D B
>
> So, there's domain 45 that has an odd uuid. And, if I look to see if there is a matching process with that domain, I do not see it:
>
> [root@ceres bin]# ps aux|grep domain
> root 2061 0.0 0.1 14960 1180 ? S May07 0:00 /usr/lib/xen/bin/vncterm -v 127.0.0.1:1 -x /local/domain/40/serial/0
> (...snip...)
Waking up an old thread, but since I was working on a similar case I thought I'd leave some notes on what I learned:
The UUID shown as deadbeef-dead-beef-dead-beef... is a temporary UUID assigned once the domain is no longer executing, e.g. after a completed shutdown, a crash, or a forced shutdown. It prevents xapi from becoming confused when it queries the list of domains.
When the guest is no longer scheduled to run, yet the domain still exists (i.e. Xen didn't properly finish destroying it), xapi renames the domain's UUID. At that point it's pointless to run xenops destroy_domain, as Xen is already struggling to destroy the domain. A power-state reset will mainly change xapi's view of the domain, but xapi is already aware the domain is dead, since it renamed the UUID to deadbeef...
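One detail worth adding: the placeholder UUID is deterministic. In every example in this thread it is the string deadbeef-dead-beef-dead-beef followed by the domid as eight hex digits, so you can compute it yourself:
# domid 45 -> deadbeef-dead-beef-dead-beef0000002d (0x2d == 45)
printf 'deadbeef-dead-beef-dead-beef%08x\n' 45
# domid 7 -> deadbeef-dead-beef-dead-beef00000007
printf 'deadbeef-dead-beef-dead-beef%08x\n' 7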
This is supposedly fixed in 5.5.1 and should mostly affect Windows VMs, but I have heard reports from Linux VMs as well!
#8 | Members | Posted 07 September 2010 - 08:29 AM
We have the same problem right now on one XenServer 5.5 host. Domid 266 has a deadbeef UUID, and xapi does not start, with the same error messages as above.
We could shut down all running VMs via rlogin or RDP, then reboot the Xen host itself.
However, can we be sure that a reboot of the Xen host will succeed and that xapi will work again? Or will that deadbeef domain, stuck in the dying state, prevent this?
It would be a disaster if the VMs could not be started again after booting the host because the xapi service is still not working.
Of course we plan to upgrade to XenServer 5.6, but first we need to recover from the xapi problem and be able to perform backups before upgrading.
Best regards
--Stefan
#10 | Members | Posted 01 November 2010 - 06:56 PM
After almost 2 months, and after demanding my CTX case be escalated to developers... Citrix support lost my self-produced crash dumps. Rather than asking the customer for another downtime window to crash their systems, I have closed the CTX case. However, 3 other SRs have been created with Citrix for this exact issue.
In other words, they know of the issue and have no resolution as of 10/31/2010.
#11 | Members | Posted 17 November 2010 - 07:27 AM
Hi Edwin,
I have a similar problem: service xapi start reports Domain_stuck_in_dying_state. I wanted to reboot the host and try what you did. As of now there are 4 VMs which are working fine. But after a reboot, will I be able to start xapi? You said you did some "xen magic"; may I know what that means, please?
Regards
Vijay
#12 | Members | Posted 23 February 2011 - 05:45 AM
I don't know if this particular solution will work for everyone, but I just got off the phone with Citrix support and was able to get through this. Here was my situation:
First symptom: All of the VMs on one of my hosts (the pool master) in a 2 server pool were unable to communicate with our core production LAN. I called for help, suspecting a NIC problem on the Master host.
At the time I was able to use the XenCenter (XC) console to manage guests on the master host, but they were unable to communicate with any other host on the network, including other guests on the master.
The 1st support engineer observed some oddities in the XC GUI - the network pane for those VMs was empty. To try to resolve this, he ran an xe-toolstack-restart. That's where the trouble really started, because xapi failed to start, meaning that the host was no longer available in the XC GUI and no more xe commands would work at the ssh CLI. We got to the same point that Joern Westermann did in his 6/26/2009 post - we had a "deadbeef" UUID (coincidentally also at domid 6). The results were:
/opt/xensource/debug/xenops destroy_domain -domid 6
Fatal error: exception Domain.Domain_stuck_in_dying_state
We promoted the slave to master using the xe pool-emergency-transition-to-master command on the slave, so that it didn't freak out during an extended absence of its master.
On the "busted" master, every time we tried to start xapi, it failed. Here's how we got around it:
It turned out that the server still thought it was the master. The 2nd tech we worked with looked in /etc/xensource/pool.conf and found that it said "master". Once we edited this (using vi) to read:
slave:192.168.20.203 (the IP address of our previously-slave-now-master machine)
We were able to start xapi, and the machine and its guests came right back in the XC GUI! xe-toolstack-restart worked great at that point.
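In shell terms, the fix on the broken ex-master boiled down to the following (our new master's IP was 192.168.20.203; use yours, and keep a backup of the file just in case):
# What does this host think it is? Ours said "master".
# (format observed on our hosts: either "master" or "slave:<master IP>")
cat /etc/xensource/pool.conf
# Keep a copy, then point the host at the new master:
cp /etc/xensource/pool.conf /etc/xensource/pool.conf.bak
echo 'slave:192.168.20.203' > /etc/xensource/pool.conf
# After that, the toolstack came up cleanly:
xe-toolstack-restart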
So then I was basically back to square 1 - the guests still couldn't communicate with the network, but I was able to go into each guest and manually issue shutdown commands. That took a while, but I was at least able to shut them down gracefully. As an important aside, all my Windows 2008 machines blue-screened at the end of that process, citing a "driver_power_failure" or something like that (sorry, it got late and I forgot to take a screenshot).
Once all the guests were shut down, a simple "reboot" command at the ssh CLI on the formerly-master-now-slave was all it took to get the server back up and running happily in the pool, and I was able to start all of the guests back up; it all seems to have gone fine.
I hope this helps some of you guys & gals out there. It's clear to me Citrix could handle this a bit more gracefully; the xe-toolstack-restart command, upon failure, could at least ask "has this host been made a slave to another master?" and do the vi trick on the pool.conf file for you, eliminating some support calls.
I would love to get a root cause analysis on this as well - can't have dozens of servers just dropping off of the network willy-nilly!
Edited by: Seth Miller on Feb 23, 2011 12:45 AM
#13 | Members | Posted 03 March 2011 - 11:45 PM
This was super helpful for me and I really appreciate you posting!
#14 | Citrix Employees | Posted 01 April 2011 - 09:37 PM
Before running the xenops command, you need to kill the processes associated with the dom_id:
ps -ef | grep <dom_id>
To get the dom_id, use the list_domains command.
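As a sketch with a hypothetical domid of 45, the sequence would be:
# Get the stuck domain's id:
/opt/xensource/bin/list_domains
# Find any dom0 processes still referencing it (vncterm, qemu-dm, etc.);
# the vncterm lines earlier in this thread show the /local/domain/<id>/ pattern:
ps -ef | grep 'domain/45' | grep -v grep
# Kill them before retrying the xenops command:
kill -9 <pid>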
Regards,
James Cannon
#15 | Members | Posted 26 September 2012 - 08:22 PM
I had the deadbeef problem, and the only solution was to reboot the hypervisor. I tried to kill the processes associated with it, but it was not possible. I'm running XenServer 6.0 (build 50762p).
#16 | Members | Posted 21 December 2012 - 07:08 PM
Same issue on XenServer 6.0.2 with updates 1,2,3,4,5,6,7,8,9,11,13,14,16 applied.
We have a VM that cannot be shut down or rebooted. DOMID=7
[root@UBCXENSRV7 ~]# list_domains
id | uuid | state
0 | 8747e425-7577-4596-b44f-2084e143e1e0 | R
1 | 11481f2e-3aa4-ea92-7a8b-d9779e30d081 | B H
2 | 851f9abc-53d7-2226-7302-482abe4f99be | RH
3 | bbe44281-f936-df75-28f5-16c2a60a9223 | B H
4 | afa85997-597b-e77e-1611-f62526c30d97 | B H
5 | cf3615d3-cb6d-3022-2e7e-41b143529798 | B H
6 | e1fb9760-91d2-b3c0-d973-60d44fae1ca5 | B H
7 | 4849ba7c-ae48-af21-ea12-2abc5c9b4732 | DS
Try to destroy it...
[root@UBCXENSRV7 ~]# /opt/xensource/debug/destroy_domain -debug -domid 7
[20121221T18:21:04.566Z|debug|UBCXENSRV7|0||xenops] Domain.destroy: all known devices = [ frontend (domid=7 | kind=vbd | devid=51712); backend (domid=0 | kind=vbd | devid=51712); frontend (domid=7 | kind=vbd | devid=51728); backend (domid=0 | kind=vbd | devid=51728); frontend (domid=7 | kind=vif | devid=0); backend (domid=0 | kind=vif | devid=0) ]
[20121221T18:21:04.566Z|debug|UBCXENSRV7|0||xenops] Domain.destroy calling Xc.domain_destroy (domid 7)
[20121221T18:21:04.567Z|debug|UBCXENSRV7|0||xenops] Device.Vbd.hard_shutdown frontend (domid=7 | kind=vbd | devid=51712); backend (domid=0 | kind=vbd | devid=51712)
[20121221T18:21:04.567Z|debug|UBCXENSRV7|0||xenops] Device.Vbd.request_shutdown frontend (domid=7 | kind=vbd | devid=51712); backend (domid=0 | kind=vbd | devid=51712) force
[20121221T18:21:04.567Z|debug|UBCXENSRV7|0||xenops] xenstore-write /local/domain/0/backend/vbd/7/51712/shutdown-request = force
[20121221T18:21:04.568Z|debug|UBCXENSRV7|0||xenops] watch: watching xenstore paths: [ /local/domain/0/backend/vbd/7/51712/shutdown-done ] with timeout 1200.000000 seconds
[20121221T18:41:04.575Z|debug|UBCXENSRV7|0||xenops] Caught exception Watch.Timeout(1200.) while destroying device frontend (domid=7 | kind=vbd | devid=51712); backend (domid=0 | kind=vbd | devid=51712)
[20121221T18:41:04.575Z|debug|UBCXENSRV7|0||xenops] Device.Vbd.hard_shutdown frontend (domid=7 | kind=vbd | devid=51728); backend (domid=0 | kind=vbd | devid=51728)
[20121221T18:41:04.576Z|debug|UBCXENSRV7|0||xenops] Device.Vbd.request_shutdown frontend (domid=7 | kind=vbd | devid=51728); backend (domid=0 | kind=vbd | devid=51728) force
[20121221T18:41:04.576Z|debug|UBCXENSRV7|0||xenops] xenstore-write /local/domain/0/backend/vbd/7/51728/shutdown-request = force
[20121221T18:41:04.577Z|debug|UBCXENSRV7|0||xenops] watch: watching xenstore paths: [ /local/domain/0/backend/vbd/7/51728/shutdown-done ] with timeout 1200.000000 seconds
[20121221T19:01:04.585Z|debug|UBCXENSRV7|0||xenops] Caught exception Watch.Timeout(1200.) while destroying device frontend (domid=7 | kind=vbd | devid=51728); backend (domid=0 | kind=vbd | devid=51728)
[20121221T19:01:04.585Z|debug|UBCXENSRV7|0||xenops] Device.Vif.hard_shutdown frontend (domid=7 | kind=vif | devid=0); backend (domid=0 | kind=vif | devid=0)
[20121221T19:01:04.585Z|debug|UBCXENSRV7|0||xenops] xenstore-write /local/domain/0/backend/vif/7/0/online = 0
[20121221T19:01:04.585Z|debug|UBCXENSRV7|0||xenops] Device.Vif.hard_shutdown about to blow away frontend
[20121221T19:01:04.585Z|debug|UBCXENSRV7|0||xenops] xenstore-rm /local/domain/7/device/vif/0
[20121221T19:01:04.586Z|debug|UBCXENSRV7|0||xenops] watch: watching xenstore paths: [ /xapi/7/hotplug/vif/0/hotplug ] with timeout 1200.000000 seconds
[20121221T19:01:04.586Z|debug|UBCXENSRV7|0||xenops] Device.Vif.hard_shutdown about to blow away backend and error paths
[20121221T19:01:04.586Z|debug|UBCXENSRV7|0||xenops] Device.rm_device_state frontend (domid=7 | kind=vif | devid=0); backend (domid=0 | kind=vif | devid=0)
[20121221T19:01:04.586Z|debug|UBCXENSRV7|0||xenops] xenstore-rm /local/domain/7/device/vif/0
[20121221T19:01:04.586Z|debug|UBCXENSRV7|0||xenops] xenstore-rm /local/domain/0/backend/vif/7/0
[20121221T19:01:04.587Z|debug|UBCXENSRV7|0||xenops] xenstore-rm /local/domain/0/error/backend/vif/7
[20121221T19:01:04.587Z|debug|UBCXENSRV7|0||xenops] xenstore-rm /local/domain/7/error/device/vif/0
[20121221T19:01:04.587Z|debug|UBCXENSRV7|0||hotplug] Hotplug.release: frontend (domid=7 | kind=vbd | devid=51712); backend (domid=0 | kind=vbd | devid=51712)
[20121221T19:01:04.587Z|debug|UBCXENSRV7|0||hotplug] Hotplug.wait_for_unplug: frontend (domid=7 | kind=vbd | devid=51712); backend (domid=0 | kind=vbd | devid=51712)
[20121221T19:01:04.587Z|debug|UBCXENSRV7|0||xenops] watch: watching xenstore paths: [ /xapi/7/hotplug/vbd/51712/hotplug ] with timeout 1200.000000 seconds
[20121221T19:01:04.588Z|debug|UBCXENSRV7|0||hotplug] Synchronised ok with hotplug script: frontend (domid=7 | kind=vbd | devid=51712); backend (domid=0 | kind=vbd | devid=51712)
[20121221T19:01:04.588Z|debug|UBCXENSRV7|0||hotplug] Hotplug.release: frontend (domid=7 | kind=vbd | devid=51728); backend (domid=0 | kind=vbd | devid=51728)
[20121221T19:01:04.588Z|debug|UBCXENSRV7|0||hotplug] Hotplug.wait_for_unplug: frontend (domid=7 | kind=vbd | devid=51728); backend (domid=0 | kind=vbd | devid=51728)
[20121221T19:01:04.588Z|debug|UBCXENSRV7|0||xenops] watch: watching xenstore paths: [ /xapi/7/hotplug/vbd/51728/hotplug ] with timeout 1200.000000 seconds
[20121221T19:01:04.588Z|debug|UBCXENSRV7|0||hotplug] Synchronised ok with hotplug script: frontend (domid=7 | kind=vbd | devid=51728); backend (domid=0 | kind=vbd | devid=51728)
[20121221T19:01:04.588Z|debug|UBCXENSRV7|0||hotplug] Hotplug.release: frontend (domid=7 | kind=vif | devid=0); backend (domid=0 | kind=vif | devid=0)
[20121221T19:01:04.589Z|debug|UBCXENSRV7|0||hotplug] Hotplug.wait_for_unplug: frontend (domid=7 | kind=vif | devid=0); backend (domid=0 | kind=vif | devid=0)
[20121221T19:01:04.589Z|debug|UBCXENSRV7|0||xenops] watch: watching xenstore paths: [ /xapi/7/hotplug/vif/0/hotplug ] with timeout 1200.000000 seconds
[20121221T19:01:04.589Z|debug|UBCXENSRV7|0||hotplug] Synchronised ok with hotplug script: frontend (domid=7 | kind=vif | devid=0); backend (domid=0 | kind=vif | devid=0)
[20121221T19:01:04.589Z|debug|UBCXENSRV7|0||xenops] Domain.destroy: rm /local/domain/7
[20121221T19:01:04.589Z|debug|UBCXENSRV7|0||xenops] Domain.destroy: deleting backend paths
[20121221T19:01:04.590Z|debug|UBCXENSRV7|0||xenops] Domain 7 still exists (domid=7; uuid=deadbeef-dead-beef-dead-beef00000007): waiting for it to disappear.
[20121221T19:01:09.605Z|debug|UBCXENSRV7|0||xenops] Domain 7 still exists (domid=7; uuid=deadbeef-dead-beef-dead-beef00000007): waiting for it to disappear.
[20121221T19:01:14.615Z|debug|UBCXENSRV7|0||xenops] Domain 7 still exists (domid=7; uuid=deadbeef-dead-beef-dead-beef00000007): waiting for it to disappear.
[20121221T19:01:19.625Z|debug|UBCXENSRV7|0||xenops] Domain 7 still exists (domid=7; uuid=deadbeef-dead-beef-dead-beef00000007): waiting for it to disappear.
[20121221T19:01:24.636Z|debug|UBCXENSRV7|0||xenops] Domain 7 still exists (domid=7; uuid=deadbeef-dead-beef-dead-beef00000007): waiting for it to disappear.
[20121221T19:01:29.646Z|debug|UBCXENSRV7|0||xenops] Domain 7 still exists (domid=7; uuid=deadbeef-dead-beef-dead-beef00000007): waiting for it to disappear.
[20121221T19:01:34.656Z|debug|UBCXENSRV7|0||xenops] Domain 7 still exists (domid=7; uuid=deadbeef-dead-beef-dead-beef00000007): waiting for it to disappear.
[20121221T19:01:34.656Z|debug|UBCXENSRV7|0||xenops] Domain 7 still exists (domid=7; uuid=deadbeef-dead-beef-dead-beef00000007): waiting for it to disappear.
Fatal error: exception Domain.Domain_stuck_in_dying_state(7)
[root@UBCXENSRV7 ~]#
The only solution we have found so far is to bounce the XenServer physical host. This is not ideal.
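For anyone reading the log above closely: the failure point is that xenops writes shutdown-request = force for each VBD backend, then watches for shutdown-done, which the backend driver never writes, so each watch times out after 1200 seconds. You can confirm that state by hand with the xenstore client tools, using the exact paths from the log (a sketch; if the domain has since been fully torn down, these paths will simply be gone):
# The request xenops wrote should still be there:
xenstore-read /local/domain/0/backend/vbd/7/51712/shutdown-request
# The acknowledgement never appears, so this read fails:
xenstore-read /local/domain/0/backend/vbd/7/51712/shutdown-done
# The whole leftover backend subtree for the dead domain:
xenstore-ls /local/domain/0/backend/vbd/7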
#17 | Members | Posted 15 October 2013 - 09:56 AM
Same problem here...
I tried all the suggestions, but none worked.
# list_domains
id | uuid | state
0 | da3118a9-dc39-40ff-b9ae-40cd2b556eee | R
5 | deadbeef-dead-beef-dead-beef00000005 | DS
8 | 15b4ef9c-3d6c-a0f2-4eb4-309d1417fe2b | B
425 | 930b87a0-7511-9235-c876-14460ae7847f | B
768 | 78000086-4527-b20f-48c7-cf0f81efe28e | B
876 | 12c40f6d-0b50-f755-b9cd-b1be6f8f292c | B H
881 | c89cc188-09a6-7650-f31b-14888288dc4d | B H
888 | 78110bcb-a95e-e616-1932-3a7b25800938 | B H
892 | 7a0569dd-4c3d-a901-068a-bbfbb45239ed | B H
901 | 6fd6a729-a862-e94d-6d0e-eaf9c3c30884 | B H
908 | de9a3564-71dc-a2d5-99a4-bf2eea6a6164 | B H
914 | 1e85c6b9-528e-65b0-d391-d84a2da4f923 | B H
915 | 9305ca61-96ed-1683-8913-be839e24da7d | B
root@eonwe ~# /opt/xensource/debug/xenops destroy_domain -domid 5
Fatal error: exception Domain.Domain_stuck_in_dying_state(5)
#18 | Members | Posted 15 October 2013 - 10:01 AM
Same problem here on XenServer 6.2 (patches XS62E001 and XS62E002).
We encountered nearly the same problem running Xen version 3 (not Citrix XenServer, but Xen on Mandriva Linux).
Normally, when the NETTX counter (in xentop) reached 4GB, it was reset to 0. Sometimes that did not work and the counter hung at 4GB. Because all network traffic was then blocked, the only way out was to restart the VM. In some cases the VM then became a zombie. There was no way to kill that zombie; we needed to reboot the physical host.
In xentop that state was "ds----", exactly the same as what list_domains now shows in Citrix XenServer 6.2 ("DS").
Edited by: Thomas Rolle on 15.10.2013 06:04
#19 | Members | Posted 23 February 2015 - 04:24 PM
I just had the same issue. Citrix support assisted me with this and sent me the following link, which describes the deadbeef UUID that you see when you run the list_domains command:
http://thexenserver.blogspot.com/search/label/Deadbeef
#20 | Members | Posted 28 April 2015 - 03:24 PM
I'm facing the same problem on XenServer 6.5, fully patched.
Yesterday I rebooted the host, which fixed the problem; today it has returned. A Windows Server machine freezes and I cannot shut it down. I have opened a case with Citrix, but seeing all the related problems in this forum, I have little hope that this will be resolved soon.
