I - Introduction
In some rare circumstances, certain data objects do not get removed from the LANDesk Console - despite several attempts at deleting them.
Usually this affects devices, but it can potentially apply to any kind of data (such as "Custom Definitions" or anything else). The basic approach and troubleshooting steps are the same, regardless of exactly which item refuses to be deleted.
Symptoms tend to be the following:
- After apparently deleting an object (a device, a vulnerability, etc.), refreshing the view (via "F5" or the refresh button) or re-opening the console shows that the supposedly deleted object is still / again present
KEY FACT - please note that SQL Express does not come with SQL Profiler, which will most likely be needed here. SQL Profiler is part of the full SQL Server suite, so the "free" version does not include it. However, if you have a full SQL Server installed somewhere in your network, you can use its SQL Profiler to connect to a SQL Express instance just fine.
II - When & Why Does This Happen?
This is USUALLY seen in situations where SQL Server sits on a VM (as this can cause various timing issues to rear their ugly heads). There's no "hard" rule to the "when" - it's effectively a case of "when something goes wrong with your DBMS" (Database Management System).
As to the "Why" ... well - this is more of a question to be asked of Microsoft / Oracle. As relational databases, this sort of thing *SHOULDN'T* happen - and it tends to do so only rarely. After all, the whole point of relational databases *IS* that data is connected to one another. To put things into perspective - in an entire year, EMEA support would get maybe 1-2 calls (for the entirety of the European region) per year, if that. So it's not a very common problem.
Most commonly, this problem has been seen on virtualized DBMS servers, where disk I/O perhaps couldn't be guaranteed and/or a race condition was triggered.
All the same, this sort of thing DOES happen from time to time in real-world environments (both Oracle and SQL-based), and all of them tend to require essentially the same plan of attack -- find which rule/constraint caused the problem. Once you know that, you can locate the data easily, and delete it manually. After this, it's a case of "rinse & repeat". In MOST situations, there is just a single occurrence of such orphaned data, but it is possible to run into more (especially if databases have never been maintained, for instance).
From an "inner logic WHY" perspective, the explanation is very straightforward at least. Here goes:
- The LANDesk console goes through the inventory data it's aware of for the troublesome data object and builds its own "tree" structure of what data needs to be deleted and in what order.
- This is a DYNAMIC process because different devices will have different levels of data.
- The console then goes and executes the SQL plan it has built, without committing the transaction until the very end (a simplified sketch of this follows the list).
- Because there is some orphaned data (not visible in the inventory tree) linked to the object being deleted, the DBMS will protest the deletion - at the latest when we try to delete the root item itself (e.g. "delete device X from the COMPUTER-table").
- ... and because the DBMS stops/fails this delete statement, the ENTIRE transaction is rolled back and nothing is deleted (after all - this *IS* a relational database).
- ... which is why the apparently deleted object re-appears when you refresh the console view - it is in fact NOT deleted.
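For those who like to see this in SQL terms, here is a minimal, purely illustrative sketch of that sequence - the child table names are made up for the example, the real plan the console builds is far longer, and SET XACT_ABORT ON simply stands in for the console's own rollback-on-error handling:

SET XACT_ABORT ON;   -- stand-in for the console's rollback-on-error handling
BEGIN TRANSACTION;

    -- Delete the child data the console knows about (these table names are made up).
    DELETE FROM dbo.SomeChildTableA WHERE Computer_Idn = 646;
    DELETE FROM dbo.SomeChildTableB WHERE Computer_Idn = 646;
    -- ... many more child tables, in dependency order ...

    -- Finally, delete the root item itself.
    DELETE FROM dbo.Computer WHERE Computer_Idn = 646;
    -- If some table still references Computer_Idn 646 (orphaned data), this last
    -- DELETE fails with a REFERENCE constraint conflict, the whole transaction is
    -- rolled back, and every row "deleted" above quietly comes back.

COMMIT TRANSACTION;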
III - Preparations
From a purely technical point of view, there's nothing to stop you from running the SQL trace on a live server. However, on a "quality of life" and "minimizing disruption" basis, it's worth taking a full DB backup from the live system and restoring it into a lab system (a minimal T-SQL sketch of this follows the list below). The reasons for this are various:
- First of all - you now *HAVE* a full DB backup, in case you accidentally "delete the wrong thing".
- You can merrily stop ALL services (the 32-bit console doesn't need any of them to start) or "nearly all services" (if reproducing the problem relies on 1 service running). Either way, you will massively reduce the amount of clutter in your SQL trace. Even when "apparently idling", there's a continuous stream of pokes & prods from the Scheduler service (for instance), checking whether it needs to do anything.
- You can "test" around in relative safety of not breaking a live server, thus developing all the necessary SQL (and being able to test & re-test) in a safe environment, until you have all of the SQL "prepped and ready" and can thus minimise any "downtime" (due to stopping of services) in live to a minimum.
Again, the idea is to stop ALL services - but the most important thing is to do this on a "quiet" server, so as not to drown in SQL statements. Yes, this stuff can be filtered, but why create needless headaches & obstacles for yourself?
IMPORTANT REMINDER:
Because you're going to perform some sort of DELETE-action, do make sure you have stopped the LANDesk Inventory Service BEFORE doing so. This is the ONE service that *MUST* be stopped as a safeguard when performing deletions in the database. The rest are usually fine to leave running (though, to be on the cautious side, best practice is always "stop everything" - this just isn't always realistic).
IV - Identifying & Fixing The Problem
For this part, you *WILL* need some SQL expertise. Or, if not you yourself, then someone who has it (i.e. - your friendly DBA, who I assume exists).
I will give examples for SQL Server below.
For Oracle databases, the process is effectively the same - but you'll need to talk to your Oracle DBA about performing the trace, as covering that would blow the scope of this document considerably.
WARNING:
Please be *CAREFUL* when performing these operations. You're deleting things directly in the database ... PLEASE make sure you have a full backup. That way, *IF* something goes wrong, you are not left up a creek without a paddle.
This isn't a "High Risk"-operation per-se ... but if things *CAN* go wrong (such as databases breaking because you've never made a backup), this sort of stuff DOES tend to happen.
IV.A - Step 1 - Identify Affected Devices/Items
This part should be relatively simple, as you should be aware of what you can delete and what "re-pops".
If you fancy a "tabula rasa" approach, you can even try to delete *ALL* items (i.e. - "devices"), then press "F5" / refresh, and see what comes back. Since you're doing this (I hope) on a lab/test-server with a backup of the live database, there should be no problems here - other than, potentially, time (deleting 20,000 devices from a DB will take a long while). So - be reasonable about what you're trying to do versus what you need to achieve.
Usually, it's just 1-2 items that have orphaned data ... not everything.
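If you'd like to confirm things at the database level as well, a quick look-up along these lines does the trick. The DeviceName column and the device name pattern are assumptions for the example - adapt them to your schema:

-- Confirm the stubborn device(-s) still have a row in the Computer table.
-- 'PROBLEM-PC%' and the DeviceName column are example assumptions - adjust as needed.
SELECT Computer_Idn, DeviceName
FROM dbo.Computer
WHERE DeviceName LIKE 'PROBLEM-PC%';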
IV.B - Step 2 - Preparing the SQL Trace
After it's finished loading, SQL Profiler doesn't show much of anything - you need to start a trace to get somewhere.
![]()
After selecting this option, the first thing you're greeted with is very similar to a SQL Enterprise Manager login - provide credentials for the SQL Server you intend to trace (note that at this point this is the SQL *SERVER* - not "the individual database"!). Once you've entered your credentials, hit the "CONNECT"-button.
![]()
After a successful login, you will be greeted with a screen like this one:
![]()
One of the first things you should do is go to the "EVENTS SELECTION"-tab, where you should enable:
- Show All Events
- Show All Columns
Both checkboxes are located in the lower right.
![]()
Enabling all these events/columns gives you additional options that are often needed during troubleshooting.
Click on the COLUMN FILTERS-button - this allows you to filter "where" you're collecting data from in various regards - be it "by database", "by application" or even by CPU. In general, you're most likely interested in capturing "everything that goes to Database X" ... so select the DATABASENAME-section and enter the name of the DB you intend to trace in the LIKE-filter, like so:
![]()
When done, you can click on the "OK"-button to close that window.
Now - let's have a look at the other TRACE PROPERTIES ... specifically - which events we're going to trace. By default, you should see something like the following:
![]()
Here are the options you're generally interested in:
ERRORS AND WARNINGS:
=> ErrorLog
=> EventLog
=> Execution Warnings
=> User Error Message
STORED PROCEDURES:
=> RPC: Completed
=> RPC: Starting
TSQL:
=> SQL: BatchCompleted
=> SQL: BatchStarting
You can right-click on the "SECURITY AUDIT"-section and use the "DESELECT EVENT CATEGORY"-option since we're not interested (usually) in tracking audit problems, but only SQL execution errors.
![]()
Realistically, the settings given here are a little overkill, but it's easier to filter through "too much logging" than to discover you didn't log what you actually need.
NORMALLY, the exception you will be looking for is of the "USER ERROR MESSAGE" type, but it could be different, so I like to make sure / be on the paranoid side. To be honest - this isn't about "making an efficient SQL capture" - it's about capturing all the relevant SQL and then filtering out the irrelevant chaff.
IMPORTANT REMINDER:
Make sure you've covered all the steps from SECTION III - PREPARATIONS - as having unnecessary services running at the time of the trace will pollute / clog your SQL trace CONSIDERABLY!
And you're ready to start the trace now ...
IV.C - Step 3 - SQL-trace an attempted deletion of the device(-s) in question.
This is a straightforward part. Just switch to the application (often - the 32-bit console) that is causing you grief, and perform whatever operation doesn't complete successfully.
And then stop the trace ... by pressing the "red stop"-button at the top.
![]()
That's really all there is to this part - the "heavy lifting" comes in evaluating the trace.
IV.D - Step 4 - Evaluating The SQL-trace.
It's not all bad news, however. As a general rule, "normal" / successful operations are colored *BLACK* - while exceptions & errors show up as a nice, distinctive *RED* amongst the text. You can also search for keywords such as "Error" or "Failed", but depending on the data strings you're dealing with, this can be a bit more of a pain.
Note that the "full line of SQL" (or full event description) of whatever line you've got highlighted is shown at the bottom. SQL Profiler does *NOT* show you the results of a SQL statement - it shows you only "what was run / what went wrong" ... in order to play with the actual SQL, you need to have a means to query the DB open (usually - query analyzer in SQL Enterprise Manager).
A couple of other useful data columns tend to be:
- CPU => # of CPU milliseconds used by an operation. If some operations stand out here, it might be worth looking at them for potential I/O or re-indexing needs.
- READS / WRITES => the amount of read / write activity a statement generated - useful for spotting unusually heavy operations.
- DURATION => shows you straight away which SQL statement(-s) are clogging up your DBMS. Usually a great way to identify which tables/views are causing you problems.
In case you need to deal with truly large SQL traces (I've had some reach 1,000+ MB), you can make your life a lot easier by simply saving the SQL trace into a database table:
![]()
This allows you to then run simple SELECT-type queries against a DB-table (all of the data columns you traced are saved). It's a great way of determining "top usage" in terms of CPU % and Read/Write performance, should it be needed.
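As a rough illustration, assuming you saved the trace into a table called dbo.MyTrace (an example name) and captured the standard trace columns shown below:

-- Top 20 heaviest statements from a trace saved into a table.
SELECT TOP 20 TextData, Duration, CPU, Reads, Writes, DatabaseName, ApplicationName
FROM dbo.MyTrace
ORDER BY Duration DESC;

-- Or pull out only the rows mentioning a constraint conflict.
SELECT SPID, StartTime, CAST(TextData AS nvarchar(max)) AS TextData
FROM dbo.MyTrace
WHERE CAST(TextData AS nvarchar(max)) LIKE '%conflicted with the REFERENCE constraint%';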
Now then - back to our actual SQL trace - often the ERROR-message(-s) will be towards the end of the trace (since usually, that's where the process fails and thus gives up, after running into an exception). So - let's look at our example here:
![]()
In this case, for instance, our error text reads as follows:
The DELETE statement conflicted with the REFERENCE constraint "R_DataProtClient". The conflict occurred in database "{DATABASE-NAME}", table "dbo.DataProtClient", column 'Computer_Idn'.
I'm highlighting 3 separate items here:
* "constraint "R_DataProtClient""
=> This is the constraint that caused the conflict / failure of the DELETE statement/transaction. In LANDesk, pretty much EVERY foreign relationship constraint is named in this format:
""
R_{table-name}
""
* "table "dbo.DataProtClient", column 'Computer_Idn'."
=> In case it makes your life easier, here's an alternate pointer to the table you need to look at (and the key column that's giving you grief).
* "{DATABASE-NAME}"
=> This you should know, since this is the database you're tracing. But, if you're having to trace several DB's for some reason, this helps you narrow things down.
There will be a DELETE statement shortly before this error - it will incorporate whatever unique identifier is being referenced (usually a COMPUTER_IDN), so you know what value to look for in the problematic table.
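Incidentally, if you'd rather not rely on the naming convention alone, the SQL Server system catalog can tell you exactly where a constraint lives (this assumes SQL Server 2005 or later; 'R_DataProtClient' is simply the constraint from our example):

-- Look up a foreign key constraint by name: which table holds the orphaned rows,
-- which column is involved, and which table it points back to.
SELECT fk.name                              AS constraint_name,
       OBJECT_NAME(fk.parent_object_id)     AS table_with_orphans,
       pc.name                              AS referencing_column,
       OBJECT_NAME(fk.referenced_object_id) AS referenced_table
FROM sys.foreign_keys fk
JOIN sys.foreign_key_columns fkc ON fkc.constraint_object_id = fk.object_id
JOIN sys.columns pc ON pc.object_id = fkc.parent_object_id
                   AND pc.column_id = fkc.parent_column_id
WHERE fk.name = 'R_DataProtClient';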
The next step is to then go and delete it.
IV.E - Step 5 - Finding & Deleting The Orphaned Data.
In *THIS* particular example, we've got a few troublesome computers (COMPUTER_IDNs 646, 647 and 648) that have failed to be deleted due to a foreign key / relationship constraint into the DATAPROTCLIENT-table. After running a select on that table (see below), we can see that - indeed - the clients DO have entries in it. Why this data has been orphaned, we can't say (as it doesn't show up in inventory), but this is what's causing us problems.
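The select in question is nothing fancy - something along these lines (table, column and IDNs come straight from the error message and our example):

-- Verify the orphaned rows really exist before deleting anything.
SELECT *
FROM dbo.DataProtClient
WHERE Computer_Idn IN (646, 647, 648);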
So - let's delete it ... let's look at our table:
![]()
Since we have found the troublesome table, and we know the troublesome COMPUTER_IDNs, it's a simple case of a 1-line SQL statement to rid us of the relevant orphaned data:
delete from DataProtClient where COMPUTER_IDN in (646, 647, 648)
would sort this bit of orphaned data out.
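If you'd like a little extra safety net while doing this (entirely optional), you can wrap the delete in an explicit transaction and only commit once the row count looks plausible - a small sketch:

-- Optional safety net: check the affected row count before committing the delete.
BEGIN TRANSACTION;

DELETE FROM dbo.DataProtClient
WHERE Computer_Idn IN (646, 647, 648);

SELECT @@ROWCOUNT AS rows_deleted;   -- should match what the earlier SELECT returned

-- If the number looks right:
COMMIT TRANSACTION;
-- If it doesn't, run ROLLBACK TRANSACTION instead and investigate first.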
NOTE:
It is possible that this deletion attempt fails - again - due to another constraint.
In such an event, you need to follow that constraint in the same way and repeat the steps here - until you reach the "end branch" of the data - and then delete your way back up through the various tables in the hierarchy (the catalog query below can help map this out).
Multi-level dependencies are NOT common with orphaned data (in most cases it's just the one level) - I'm just highlighting the possibility as a "just in case".
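Should you end up in that situation, you don't necessarily need another full trace just to see what hangs off the table you're trying to clean - the catalog can list every foreign key pointing at it (dbo.DataProtClient below is just our example; substitute whichever table the new error names):

-- List every foreign key that references a given table, i.e. every child table
-- that may also need cleaning out first (dbo.DataProtClient is our example).
SELECT fk.name                          AS constraint_name,
       OBJECT_NAME(fk.parent_object_id) AS referencing_table
FROM sys.foreign_keys fk
WHERE fk.referenced_object_id = OBJECT_ID('dbo.DataProtClient');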
Now - with this done ... let's TRY AGAIN.
IV.F - Step 6 - Try Again / You Should Be Done
Normally, you will just have a single instance of orphaned data to contend with.
In some rare cases, you may have several - in which case you will need to repeat the whole SQL tracing process (so that you know where / why the delete action failed this time). If you have more than 2 instances of orphaned data, I would strongly recommend having a chat with your DBA team about checking the consistency / health of your database, as things are getting rather suspect at that point.
So in most cases, you would need to just do the following:
- *Stop* the LANDesk Inventory Service on the (live) Core
- Run your SQL delete statement(-s)
- Re-start the LANDesk Inventory Service on the (live) Core
- Delete the items / devices you had problems deleting before.
The last 2 steps are interchangeable - but the first two are *NOT*.
V - Things To Look Out For:
If you've followed the advice & guidance presented here, there's really not much to look out for or much that can go wrong.
The biggest thing really is to "muck around" on a copy of the live DB in a safe environment - that removes the greatest element of risk in the equation.
Other than this, the only real key element to remember is to STOP the LANDesk Inventory Service before deleting anything.
VI - Conclusion:
This article should cover everything required to identify the problem, set up a SQL trace to pinpoint the source / location of any orphaned data, and delete the relevant entries - along with a process for doing so safely.