Configuration Management System (CMS) Best Practices Library



Configuration Management System (CMS) Best Practices Library
Provider Onboarding
Revision 2, June 3, 2010

Planning and Design


Documentation Updates

Table 1 Document Changes

Chapter   Version   Changes
All       1         Initial document
All       2         Rev 2 updates – Jody Roberts


Contents

Introduction
    What is a CMS?
    How is CMS different from CMDB?
    Consumer/Owner/Provider Model
    Audience
    Prerequisites
    Chapters Summary
    Related Documents

Onboarding Strategies
    Consumer/Owner/Provider Model
    How the CMS Looks at Providers

Core Onboarding Process
    Step 1: Validate the Consumership
    Step 2: Identify the Ownership
    Step 3: Document the Data Requirements
    Step 4: Identify and Document the Provider
    Step 5: Rationalize the Provider
    Step 6: Map the two Data Models
        Reconciliation
    Step 7: Identify the Conduit to be Used
        Reconciliation for Specific CI Types
    Step 8: Deploy the Conduit
    Step 9: Test and Validate the Provider Conduit
    Step 10: Modify, Test and Validate the Consumer Conduit
    Step 11: MTP/Open up Provider for Consumption

Discovery Onboarding
    DDM Provides Multiple Ways to Get the Same Data, Plan Carefully
    Avoid Attribute Oscillation
    Discovery Rationalization


Provider Decommissioning
    Triggers for Decommissioning Evaluation
    Reference Process
    Properly Archive

Index


Introduction

What is a CMS?

ITIL v3 calls for a supportive "Configuration Management System" (CMS) to manage configuration data and provide that configuration management information to consumers in a service context. Consumers are the applications, people, processes, and workflows that form IT Service Management.

How is CMS different from CMDB?

There is a difference between simply federating data sources and creating a reliable CMS with measurable data integrity. This library of best practices is intended to provide a way to realize ROI on efficiencies and optimizations enabled by a successful CMS implementation. The need for IT to share a broader understanding of services is why ITIL version 3 has expanded configuration management beyond a "database" to a "system".

Consumer/Owner/Provider Model

One of the primary enablers of data integrity is the provider/owner/consumer model. The premise is based on maintaining a high standard of entry into the CMS. By properly validating each new MDR, consumers benefit from greater confidence in the data and are able to make better decisions. Of critical importance is that the processes are repeatable and scalable. The same process is used for the initial deployment, for scaling to production, and for ongoing operation. By following these best practices, configuration managers can maintain and continually improve the ability of the CMS to provide service-relevant, consistent, accurate, and timely configuration data to consumers.

Audience

This document is intended for configuration management architects and users who are responsible for onboarding new attributes, CI types, and authoritative sources of record into the CMS. Consumers may also find this information useful to understand where their data is coming from and how it is retrieved, reconciled, and presented. This document is part of the deployment section because onboarding is done during deployment (as well as after). However, the same best practices are applicable to the ongoing, operational onboarding of new providers.

Prerequisites

Business leaders should be familiar with basic ITIL and ITSM concepts. Technical staff planning to implement or design CMS components should also be familiar with CMDB and related technologies such as federation, reconciliation, and service modeling.

Chapters Summary

The onboarding process is structured as follows:

Section 1 - Introduction provides an overview of this document and shows how it fits in the CMS Best Practices Library.

Section 2 - Onboarding Strategies addresses the onboarding concepts and processes, focusing on provider rationalization, reconciliation and conflict avoidance, and maintaining a high standard of entry for onboarded data. The remaining sections address architectural and planning considerations for onboarding. The more technical best practices for implementing the provider are discussed in separate documents.

Section 3 - Special Considerations for Discovery discusses DDM as multiple providers and how to rationalize multiple DDM packages.

Section 4 - Provider Decommissioning discusses why, when, and how to deactivate providers.

Related Documents

This document is part of the HP CMS Best Practices Library. The library is composed of three sections: Planning and Design, Implementation Guides, and Integrations and Deployment. Companion documents to this guide are the CMS Strategy Guide and the Consumer Onboarding Guide. Other library documents may also reference this document.


Onboarding Strategies

Consumer/Owner/Provider Model

The Consumer/Owner/Provider model provides a strategic foundation on which to build a set of processes that become the governance around configuration data entering and exiting the CMS. The model calls for certain tenets to be followed, which unfold into tactical rules and decisions for what should and should not be done when changing the state of the CMS, that is, adding provider data or altering the CMS consumership. If processes are created that follow these best practices, the net result is an increase in the quality of the configuration data in the CMS. Consumers of that data will have greater trust and confidence in its accuracy and integrity, allowing more confident and trusted decisions to be made based on it. More successful decisions are manifested in ROI and business value from the ITSM processes that depend on that data, and as a result the business is improved.

[Figure: The CMS Value Proposition – provider transparency, multiple-provider capability, attributes of a single CI may have multiple providers]

Consumer/Owner/Provider Relationship in the CMS

For more details, refer to the CMS Strategy Guide, which is a separate document in the CMS Best Practices Library.

How the CMS Looks at Providers

Conduits

The CMS looks at providers through conduits. A conduit is simply a mechanism for physically and logically connecting the CMS and its providers, and for exposing and transporting configuration data from the provider to the CMS. Conduits vary in their function and structure (for example, a federation adapter vs. an ETL program like Connect-It). However, all providers are owned, serve a purpose in IT, and host business data which is created by a process, for example, the Incident Submission process.
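The conduit idea can be pictured as a minimal interface: whatever the underlying technology (federation adapter, ETL job), the CMS only needs a way to ask a provider for configuration data. The sketch below is illustrative only; the class and method names are assumptions, not a product API.

```python
from abc import ABC, abstractmethod

class Conduit(ABC):
    """Connects the CMS to one provider and transports configuration data."""
    @abstractmethod
    def fetch(self, ci_type):
        """Return records of the given CI type from the provider."""

class SpreadsheetConduit(Conduit):
    """A toy ETL-style conduit reading from in-memory rows."""
    def __init__(self, rows):
        self.rows = rows
    def fetch(self, ci_type):
        # Expose only the records matching the requested CI type
        return [r for r in self.rows if r.get("ci_type") == ci_type]

conduit = SpreadsheetConduit([
    {"ci_type": "node", "host name": "srv-db-01"},
    {"ci_type": "incident", "id": "IM001"},
])
print(conduit.fetch("node"))  # [{'ci_type': 'node', 'host name': 'srv-db-01'}]
```

The point of the abstraction is that the consumer side never sees which concrete conduit supplied the data, which is the provider transparency discussed later.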


Authoritative Sources of Record

All MDR providers should be authoritative sources of record, which are unique in scope and business accountability for the data they provide. More details are discussed later in this guide in the Provider Rationalization section. No non-authoritative sources should be connected to the CMS, to maximize quality of and confidence in the consumed data.

Provider Hierarchy

Providers are typically applications, databases, data warehouses, or some other type of repository. While the physical characteristics can vary greatly, all providers can be mapped to a simple hierarchy of attributes, CI types and relationships, and providers. A simple illustration of this hierarchy is shown below:

Provider Structural hierarchy

Configuration Data in Providers

Provider data is organized into CI types. For example, the provider "Service Desk" provides CI types called "Incidents", "Problems", and "Change Requests". Each CI type is composed of one or more attributes. For example, the Incident CI type could include an attribute called "Description". It can be argued that some configuration data is not a configuration item, for example, an Incident. This is not a question of CMS understanding or education, but of the conscious decision to make Incidents part of the CMS. HP's position is unbiased: whether or not an Incident should be considered a CI type is much less important than the capability to provide incident data to the CMS in a service context, with a conscious decision made by the IT organization on what should be managed and controlled by configuration management.
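The provider hierarchy (provider holds CI types, each CI type holds attributes) can be sketched as plain data, using the "Service Desk" example from the text. Attribute names beyond "Description" are assumptions for illustration.

```python
# A provider mapped onto the simple hierarchy: provider -> CI types -> attributes.
provider = {
    "name": "Service Desk",
    "ci_types": {
        "Incidents":       ["Description", "Status", "Node name"],   # assumed attrs
        "Problems":        ["Description", "Root cause"],
        "Change Requests": ["Description", "Planned start"],
    },
}

# Each CI type is composed of one or more attributes:
print(provider["ci_types"]["Incidents"])  # ['Description', 'Status', 'Node name']
```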

Relationships in Providers

Relationships may be present implicitly or explicitly. Implicit relationships exist as logically related data which could be matched programmatically. Explicit relationships exist as data which references other data. An example of an implicit relationship could be two CI types called "Incident" and "Node". An attribute of Incident is called "Node name". Values of "Node name" in Incident CIs are logically equivalent to values of the "Node" CI type's attribute "host name". These relationships can be created in the provider conduit, or added later with UCMDB enrichments. Explicit relationships can exist under other names, for example, lookup tables or dereferencing tables. Records in such tables refer to key values in other tables. For example, a provider may have a table called "Incident attached Nodes", which lists in one column the current open incidents, and in another column the servers related to those incidents, that is, servers which have open incidents pending resolution.
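The implicit Incident-to-Node example above can be matched programmatically, which is essentially what a conduit or a UCMDB enrichment would do. The records below are hypothetical.

```python
# Deriving the implicit Incident -> Node relationship by matching
# Incident "Node name" values to Node "host name" values.
incidents = [
    {"id": "IM001", "Node name": "srv-db-01"},
    {"id": "IM002", "Node name": "srv-web-07"},
]
nodes = [
    {"id": "CI100", "host name": "srv-db-01"},
    {"id": "CI200", "host name": "srv-app-03"},
]

def derive_relationships(incidents, nodes):
    """Return (incident_id, node_id) pairs whose values logically match."""
    by_host = {n["host name"]: n["id"] for n in nodes}
    return [(i["id"], by_host[i["Node name"]])
            for i in incidents if i["Node name"] in by_host]

print(derive_relationships(incidents, nodes))  # [('IM001', 'CI100')]
```

IM002 produces no relationship because no Node record matches "srv-web-07", illustrating that implicit matching only links what is logically present on both sides.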


Core Onboarding Process

Armed with an understanding of this structure, we can now assert the basic process of how to consistently onboard a provider. The same process is followed whether it is a completely new provider, additional CI types being onboarded from an existing provider, or additional attributes or relationships being onboarded for existing CI types in an existing provider. As you go up the provider hierarchy, you simply do more things to onboard. This establishes a way to consistently onboard attributes: from the very first attribute in the initial deployment of the CMS, all the way to operational maintenance, the process is the same. Here is an example of the process steps to onboard a provider. These steps can be followed at whatever level is required to onboard new attributes, CI types, or providers. Anything already done for that provider can be skipped.

Basic Provider Onboarding Process

This diagram is not to be regarded as a rigorous process. The processes suggested by it, as implemented, should be resolute and unyielding on the points of rationalization, documentation, and operational requirements for the provider. However, how this is accomplished (what the final processes look like) will depend on each organization. Let's step through the process at the next level of detail. The list and the sections for each list item below expand on the basic concepts of the consumer/owner/provider model, and provide rationale and implementation suggestions for each one.

A More Detailed Provider Onboarding Process

1. Validate the Consumership
2. Identify the Ownership
3. Document the Data Requirements
4. Identify and Document the Provider
5. Rationalize the Provider
6. Map the two Data Models
7. Identify the Conduit to be Used
8. Deploy the Conduit
9. Test and Validate the Provider Conduit
10. Modify, Test and Validate the Consumer Conduit
11. Move to Production/Open up the Provider for Consumption

Step 1: Validate the Consumership

The trigger event for initiating provider onboarding is a change in consumership. If a new consumer is onboarded, or if an existing consumer requires additional configuration data, the provider onboarding process starts. There is a process modeling part and a data modeling part. Ideally, the process of provider onboarding should be established before beginning the data modeling process. However, the initial "bootstrap" onboarding will necessarily involve some degree of iteration between the two until the major problems are addressed; then the two can diverge properly into their respective layers.

Consumer Inputs

Consumership should be validated in the consumer onboarding process. Consumers should be ready with the following data:

- Authorization to consume: Whatever proof is required by the organization to ensure that the consumer is in fact valid and in need of the specified configuration data.
- Data requirements: What CI types, attributes, and relationships are needed from the provider?
- Service level requirements: Latency, accuracy, and availability tolerances/limits are just a few examples. While the limits may not always be accommodated, it is necessary to understand them in case providership may need to be altered, given sufficient justification.
- Conduit preferences: How the consumer wishes to consume. This may or may not turn out to be appropriate after the process is followed. However, if less work is required to onboard with the preferred or another conduit, the configuration manager should work with the consumer to decide the final choice which is best for both the consumer and the configuration management process owner.
- Regulation: Conduits are commonly shaped by legal and regulatory boundaries, for example, air-gap (physically isolated) data centers, Sarbanes-Oxley (SOX) compliance, and the Payment Card Industry Data Security Standard (PCI-DSS). Regulation overrides any best practice, such as entering data manually in an air-gap data center, if no other means are possible or allowed. In no way should any recommendations or practices herein be construed or used as an attempt or opportunity to circumvent any law. Fortunately, many regulations contain language for risk mitigations, such as compensating controls, or are sufficiently general as to allow for many implementations or interpretations. For example, if a regulatory element forbids a single point of failure for a critical system, this leaves much room for interpretation, which can be problematic without further precedent and/or clarification. What is "critical" (this language must be accurately defined)? Would simply duplicating a system containing a single point of failure into a DR facility comply (what service level agreements are in place)?

Consumer Non-Inputs

The following should NOT be required or considered:

- Which provider/owning the provider: The provider should be transparent to the consumer. This is not to say that the provider must be "kept secret" from the consumer; the consumer may already know who the provider is, or may even help the configuration manager identify the proper ASOR. Rather, once the provider is onboarded, the consumer conduit should function independently of the provider conduit(s). In particular, the consumer should not be allowed to specify a preferred "provider" who is not already a provider and who would provide conflicting data if onboarded. For example, if a consumer says, "I want to consume server data from the CMS, and I want it to come from Bill's spreadsheet, because I trust Bill, so I want you to onboard it as a provider", then that request should probably be declined. However, if Bill's spreadsheet is properly rationalized and authorized as a source of record (made an ASOR), and Bill's spreadsheet contains a unique attribute that is not already provided in the CMS, it could be onboarded. The best practice is that the consumer must never be allowed to cause circumvention of the provider onboarding process, irrespective of the prominence or authority of the consumer. Doing so creates a precedent which becomes more and more problematic to defend against, and damages the credibility and strength of the part of the onboarding process that maintains high data integrity for the consumer, including the hypothetical consumer above which requested the exception in the first place.

Step 2: Identify the Ownership

Apart from the provider, the ownership of the provided data must be established and maintained from the provider through the conduit. In the case where the provider supplies data which resides in the CMDB itself (core or non-external data), ownership extends from the provider into the CMDB. This ownership is not optional and must be maintained through process and governance by the configuration manager. Unowned data cannot be authoritative data, because the chain of business accountability would be compromised. The Consumer/Owner/Provider model (or simply the COP model for short) is described in detail in the CMS Strategy Guide in the section titled "Process governance". It describes the differences between the Owner and the Provider, excerpted here:

Owner/Provider Differentiation

The Owner and Provider differ by perspective, as the table below shows:

Perspective   Owner                            Provider
Entity        Person, Role, or Group           Application
CMS           Authoritative Source of Record   MDR
Consumer      Business Accountability          Transparent
Quality       Accuracy, Currency               Performance & Availability
Data          Logical                          Physical
Security      Integrity, Audit                 Access, History
CMDB          Contract to expose the data      Responsible for the data itself

For example, from a Quality perspective, the Owner is concerned about how closely the configuration data matches the actual state of the business or ITSM, while the Provider's perspective on Quality centers on servicing queries quickly and on the uptime of the application.

Step 3: Document the Data Requirements

Once the consumer is valid and has supplied their requirements, the data is evaluated by Configuration Management to determine how to meet the consumer's needs. The requirements data should be detailed enough to identify which attributes, CI types, and relationships will be consumed, and the expected volume of consumption. For example, a good consumer requirement could be similar to the following:

CI type: server (this will be translated to "node" in the CMS)
Consumed attributes:
- Host name
- Host OS type
- Number and speed of CPUs
- Amount of memory installed
- Amount of disk installed
- List of applications served by this server
Expected volume: All production servers, once per week

CI type: Application (this could be translated to "BusinessApplication" in the CMS)
Consumed attributes:
- Application name
- Business service(s) served by this application
- Criticality tier
Expected volume: All production applications, once per week, or whenever applications are changed
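A requirement like the one above is easiest to act on when captured as a structured record rather than free text. The sketch below mirrors the example; the field names are assumptions, not a product schema.

```python
# Illustrative only: the consumer requirement captured as structured records.
consumer_requirements = [
    {
        "provider_ci_type": "server",
        "cms_ci_type": "node",                 # translated name in the CMS
        "attributes": [
            "host name", "host OS type", "number and speed of CPUs",
            "memory installed", "disk installed", "applications served",
        ],
        "scope": "all production servers",
        "refresh": "once per week",
    },
    {
        "provider_ci_type": "Application",
        "cms_ci_type": "BusinessApplication",  # translated name in the CMS
        "attributes": ["application name", "business services served",
                       "criticality tier"],
        "scope": "all production applications",
        "refresh": "once per week, or on application change",
    },
]
```

Keeping requirements in this form makes the gap analysis in the next step a simple comparison rather than a document review.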

Identifying the Need for New Data or a Provider

Once the data requirements are understood, the configuration manager can compare the requirements to the current providership of the CMS and determine which attributes, if any, are not already present in the CMS. If any attributes are not already present, the provider onboarding process continues. If all attributes are already present, the process stops here and the consumer onboarding process continues.
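This comparison amounts to a set difference between required attributes and currently provided attributes. A minimal sketch, with hypothetical attribute names:

```python
# Gap analysis: (CI type, attribute) pairs required vs. already provided.
required = {("node", "host name"), ("node", "memory installed"),
            ("BusinessApplication", "criticality tier")}
currently_provided = {("node", "host name"), ("node", "host OS type")}

missing = required - currently_provided
if missing:
    print("Continue provider onboarding for:", sorted(missing))
else:
    print("All attributes present; continue consumer onboarding only.")
```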

Existing Providers

If part of the data is already provided by an existing provider, and the new requirements can be fulfilled by the same provider, the obvious choice is to onboard the new data from the existing provider. Sections 4 and 5 cover adding CI types and attributes to existing providers.

New Providers

If no existing provider currently provides the required data, a new provider must be identified and onboarded. Section 3 covers this process.

Step 4: Identify and Document the Provider

To identify new providers, knowledge of the organization's business and IT infrastructure must be available and understood. The configuration manager should drive this sub-process, but may rely on others for proper identification of who should be the provider. The candidate provider must pass through the rationalization process to become a provider. If the candidate for some reason does not qualify, either the candidate must be remediated to become authoritative, or another provider identified. The provider should be documented with at least the following information:

- Name and identity of the provider (for example, the Asset Management System, which is HP Asset Manager)
- Provider administrative contact information
- Owner contact information (see above for who the owner is compared to who the provider is)
- Which ITSM process this provider represents
- Which CI types, attributes, and relationships are present in the provider
- Whether relationships are implicit or explicit in the provider data, if necessary
- How the provider will expose the data: for example, direct database access, API, Web Services, JDBC, etc.
- Any limitations the provider has, such as availability/downtime or service level capabilities

Step 5: Rationalize the Provider

Whether the provider is a completely new data source or is being extended, the provided data must be rationalized to become authoritative. There are several tests to determine if the provided data is in fact authoritative.

Establish and Enforce a Policy of Unique Providership

It is understood that many IT organizations and applications are in apparent conflict with this single-source law. The assumption that operational conflict is "normal" and should be resolved with expensive reconciliation "engines" is misplaced. The reasons why multiple conflicting providers are often present are described in more detail in the Consumer/Owner/Provider Model description. The specific process may vary depending on the organization's needs, but the required outcome must be that the provider is not onboarded unless the data is uniquely authoritative. Data provided by two sources, within the same scope, at the same time, is considered operational conflict and must be avoided. This is not in conflict with the "weak typing" concept in the BTO Data Model (BDM), because that data will resolve to specific CIs in the consumer.

Expect and Prepare for Overcoming Cultural and Political Barriers

Cultural and political issues may need to be resolved to successfully execute provider rationalization. A very old best practice is relevant here: the best of intentions and engineering is no match for the unpredictability of human behavior. Ownership transition, application remediation, and other disruptive events may need to occur to consolidate providers and establish better controls around maintaining data in providers, such as change control, asset management, and project/demand management. When setting expectations for timeframes, note that this kind of change is often difficult to document and describe; it exists nonetheless. Plan to take some time to explain these barriers as clearly as you can to the project stakeholders.

Use Business Accountability to Test/Establish Provider Authority

If two provider candidates are in conflict and are apparently equal in all other ways, the correct provider will be the one who is accountable from a business perspective for maintaining the accurate and responsible records. One source or the other will invariably be found to be a copy or subset of the other, or will differ in exact scope.


Identifying Provider Conflict

Conflict exists only when two providers conflict in both class model and instance scope. If there is model overlap but no scope overlap, then no conflict exists. If there is scope overlap but no model overlap, then there is a data modeling inconsistency, but no conflict exists technically. The following table illustrates the scope and model conflict relationship:

Scenario: Model overlap with no scope overlap (Result: No conflict)
  Example 1: Provider A provides attribute X for data center 1; Provider B provides attribute X for data center 2.
  Example 2: DDM Pattern A discovers attribute "OS type" for Windows servers, using WMI; DDM Pattern B discovers attribute "OS type" for UNIX servers, using SSH.

Scenario: Scope overlap with no model overlap (Result: Model inconsistency)
  Example 1: Provider A attribute X equates to "OS type"; Provider B attribute Y equates to "OS type".
  Example 2: Provider A federates attribute X from Service Manager OOTB; Provider B federates attribute X_custom from an in-house Service Desk in a custom DDM pattern.

Scenario: Scope and model overlap (Result: Conflict)
  Example 1: Provider A attribute X equates to "OS type"; Provider B attribute X equates to "OS type".
  Example 2: DDM pattern A discovers attribute "OS type" on UNIX servers using SNMP; DDM pattern B discovers attribute "OS type" on UNIX servers using SSH.
Provider Scope and Model Conflict

In other words, two providers can provide the same CI types or attributes if each is authoritative for a unique scope of that data. For example, two discovery systems discover the exact same CI types and attributes, but from two different data centers.
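The conflict test described above is mechanical enough to sketch in code: a true conflict requires overlap in both the class model (same attributes) and the instance scope (same population). The provider descriptions below are hypothetical.

```python
# Classify a provider pair per the scope/model conflict table.
def classify(provider_a, provider_b):
    model_overlap = bool(provider_a["attributes"] & provider_b["attributes"])
    scope_overlap = bool(provider_a["scope"] & provider_b["scope"])
    if model_overlap and scope_overlap:
        return "conflict"              # same data, same population: must avoid
    if scope_overlap:
        return "model inconsistency"   # same population, differently modeled
    if model_overlap:
        return "no conflict"           # same model, unique scopes
    return "no overlap"

wmi  = {"attributes": {"OS type"}, "scope": {"windows servers"}}
ssh  = {"attributes": {"OS type"}, "scope": {"unix servers"}}
snmp = {"attributes": {"OS type"}, "scope": {"unix servers"}}

print(classify(wmi, ssh))   # no conflict: model overlap, disjoint scope
print(classify(ssh, snmp))  # conflict: same attribute, same scope
```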

Choosing the Correct Provider

The business accountability test is a valid way of rationalizing provider authority because of the nature of accounting principles. This is known as the "two clocks" problem. If you have one clock in your home, you always know what time it is. With two clocks, you may never be quite sure. If clock 1 is known to be accurate, then why have clock 2, except to simply reflect clock 1's time in another room?

The same applies to the business perspective. Consider the CIO's or CTO's perspective on IT. When a report is needed, that report is generated from the truly accountable source, not just for accuracy and integrity, but because it is a waste of time to provide two reports: which one is correct? Anyone with enough experience to attain a CxO position would find unacceptable a report which said something to the effect of "it could be x, but it may be y". However, formative experiences sometimes tend to entrench one in positions which are not easily recognizable as being off center. If you are not skeptical of the single-provider principle, someone around you probably will be. The challenge is the assumption that multiple providers are really common. Ultimately, you will find that almost every competing duplicate provider falls away if you properly follow the rationalization process. "Obvious" conflicts, when resolved to the attribute level and given the scope, will turn out to be one of the following:

- One source is a copy of the other (performance / security)
- One source does duplicate discovery / integration / etc. but is not authoritative (silo apps)
- The wrong person was asked if a source was authoritative. For example, a source is considered authoritative only by users of that source (silo apps) or some other arbitrary subset of users.
- The wrong source is already authoritative. For example, the "official" list is difficult to update, so "unofficial" lists develop as circumventions to process. This should be remediated first and should not be made a CMS problem.
- The right source is not authoritative (the inverse of the above situation). One source, for example, may have more timely but less authoritative data (unofficial, but a candidate for ownership transition to authoritative).
- The "conflict" is identity-based (the "conflicting" data is reconciliation or key-matching data)

If you have rationalized a provider and still have a valid exception (another provider conflicts), please contact your local HP UCMDB representative. The best practice here is not to forbid conflicts, but to avoid them via the following assertions:

- True conflicts are very rare, but may exist.
- Complex heuristics or weighting algorithms cannot be substituted for business logic; they simply postpone and multiply the work, from doing it one time architecturally to doing it constantly operationally.
- Most conflicts will be resolved naturally at the attribute level. If two providers supply the same CI but differ by certain attributes, only those attributes need be provided.

Authorized Replication Becomes Part of the Conduit

As mentioned elsewhere, the many forms of replication (warehousing, ETL, export/import, etc.) are not a conflict. Consider replication, in the analogy above, as clock 2: not displaying its own time but simply reflecting clock 1's time in another room. Provided the replication is authorized, has an owner, and complies with the other tenets of the COP model, replication simply becomes a part of the consumer conduit.

Step 6: Map the two Data Models

Once the provider has been identified as authoritative, the actual onboarding process may proceed. The consumer requirements are understood, the provider(s) to fulfill the consumption have been identified by their capabilities and authoritativeness, and the UCMDB BTO Data Model (BDM) is already understood.


Reconciliation

Establishing Identity

The first type of mapping between the models is establishing the identity of the data: that is, how the conduit will select the correct data for the consumer and match the data with the corresponding core data, if present, in the CMDB. This is collectively known as reconciliation. The method of reconciliation differs by conduit, but all conduits provide a means of reconciliation. These are listed in the examples in the next section.

Single vs. Multiple Attribute Reconciliation

Reconciliation is best performed with "hard" or unique keys, but this is not always possible. One technique is called "preconciliation", or "foreign key population", where the UCMDB unique ID is written back into the provider and then used at query time for simple key matching. For HP products this is not usually a problem; for non-HP products, it may or may not be practical. Note that there must still be an algorithm to match the provider records with the identity of the CIs in UCMDB, so reconciliation cannot be "cheated" in this manner: it simply provides a faster lookup at consumer query time.

When identity must be established using multiple keys, the conduit's capabilities become a factor. For example, UCMDB federation adapters can specify as many keys as are required to reconcile identity, and can use regular expressions to ensure that differences in data exactness do not cause the logical match to fail. For example, a node name may differ between a provider and the CMDB as an FQDN vs. a short name; a regular expression can be used to ensure equivalence. Another example is to normalize the case of strings so that differences in capitalization do not cause reconciliation to fail. The details and capabilities of each conduit are discussed in the product documentation, including the UCMDB documentation.

The next step is to map the two data models. The mappings may be simple or complex.
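Multi-key reconciliation with normalization can be sketched as follows. This is an illustration only: the normalization rules and field names are invented for the example, and are not UCMDB's actual reconciliation algorithm.

```python
import re

# Sketch: match a provider record to a CMDB record on several keys,
# normalizing case and FQDN-vs-short-name differences first.

def normalize_name(name):
    """Lower-case the value and strip any domain suffix, so that
    'Web01.corp.example.com' matches the short name 'web01'."""
    return re.sub(r"\..*$", "", name.strip().lower())

def reconcile(provider_rec, cmdb_rec, keys):
    """Reconcile only if every key attribute matches after normalization."""
    return all(
        normalize_name(provider_rec[k]) == normalize_name(cmdb_rec[k])
        for k in keys
    )

# Hypothetical records that differ in case and FQDN form.
provider_rec = {"node_name": "Web01.corp.example.com", "domain": "ProdProbe"}
cmdb_rec     = {"node_name": "web01",                  "domain": "prodprobe"}
print(reconcile(provider_rec, cmdb_rec, ["node_name", "domain"]))
```

Without the normalization step, the literal string comparison would fail even though both records describe the same node.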

Simple Mapping

Simple mappings approximate a 1-to-1 equivalence between two attributes of the same data type. For example, "host name" in a provider may simply map to "node name" in BDM.

Complex Mapping

Complex mapping can involve several types of complexities:

- One-to-many or many-to-one splitting, aggregation, or concatenation of attribute values. For example, a provider's data model may map three attributes, "first name", "middle names", and "surname", to a single CMS attribute, "entire name". The inverse can also apply, where a single provider attribute value may need to be split into multiple parts using some programmatic method such as regular expressions or string manipulation.

- Data type casting, such as converting an "int" to a "string".

- Data value conversion or transformation, for example, conversion of English units to SI units, or watts to ergs per second.

- CI child or parent mapping, such as summing all the CPU speeds of a server into a single total CPU capacity for that server.

- Trimming or padding different data lengths, such as truncating a 256-character name to a 128-character name. Note that some conversions can result in a loss of data; these situations should be avoided if the loss would impact the quality of the decisions made by the consumer.


Various conduits have differing capabilities and methods of complex mapping, which are discussed in more detail in the next section.
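The complex-mapping types listed above can be sketched in one small transform function. This is a hedged illustration: the source and target attribute names (entire_name, weight_kg, and so on) are hypothetical, not actual BDM attributes.

```python
# Sketch: one example of each complex-mapping type from the list above.

def map_record(src):
    dst = {}
    # Many-to-one: concatenate name parts into a single attribute.
    dst["entire_name"] = " ".join(
        p for p in (src["first_name"], src["middle_names"], src["surname"]) if p
    )
    # Data type cast: int to string.
    dst["asset_id"] = str(src["asset_id"])
    # Value conversion: pounds to kilograms (English units to SI units).
    dst["weight_kg"] = round(src["weight_lb"] * 0.453592, 2)
    # Child aggregation: sum per-CPU speeds into one total capacity.
    dst["total_cpu_mhz"] = sum(src["cpu_mhz_list"])
    # Trimming: truncate to the target attribute length (lossy!).
    dst["short_name"] = src["long_name"][:128]
    return dst

# Hypothetical provider record exercising every mapping type.
src = {
    "first_name": "Ada", "middle_names": "", "surname": "Lovelace",
    "asset_id": 4711, "weight_lb": 22.0,
    "cpu_mhz_list": [2400, 2400], "long_name": "x" * 256,
}
print(map_record(src))
```

In a real conduit these transforms would live in the adapter configuration or script rather than in a standalone function, but the logic is the same.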

Step 7: Identify the Conduit to be Used

Once the logical mappings between the two data models are completed, the correct conduit can be chosen, based on the conduit's capabilities and the consumer's needs. Here are examples of conduits which can be used by UCMDB:

| Conduit | Real-Time | Provider Connectivity | Configuration Method | Reconciliation Method |
|---|---|---|---|---|
| Generic Federation Adapter Kit | Y | Any type of database via JDBC | UCMDB UI and XML | Data attribute matching, with extensions possible by writing Java code |
| Java Federation API / SDK | Y | Any supported by Java | Implemented by client: possibly Java code, XML, registry, any | Data attribute matching, with extensions possible by writing Java code |
| RMI Adapter | N | UCMDB-to-UCMDB | UCMDB UI and TQL | Key matching |
| DDM Pattern | N | Any | Jython scripts | Matching values in existing core CI attributes |
| Java API | Y | Any | Any implementable by Java | Matching values in existing core CI attributes; can also reconcile via external sources |
| Web Services (via DDM) | Y | Web Services | Jython scripts | Same as DDM |
| ETL such as Connect-It | N | Any implementable by a Connect-It scenario | Connect-It UI, XML | Reconciliation rules set up in VBScript via the Connect-It UI; exist in Connect-It scenarios |
| UCMDB UI and Web Service provider functions (insert CI) | Y | HTTP(S) | N/A | Manual |


Reconciliation for Specific CI Types

Hosts/Nodes

In UCMDB version 8 and earlier, for host CI's (or "nodes", as they are called in BDM), reconciliation is based on the host key, which must follow one of two forms, called "weak" and "strong" (or "complete") keys, respectively. Weak host keys have the form of an IP address, a single space (ASCII 32, or 0x20), and a UCMDB domain name. For example, a weak host key could be "15.14.13.250 DefaultProbe". Strong/complete host keys take the form of a MAC address; specifically, the lowest MAC address of all the network interfaces installed in the hardware. These key forms work, but can be problematic in certain situations, such as when an interface card is exchanged in an existing server. UCMDB versions later than version 8 will have a different way of reconciling hosts; these best practices will be updated to reflect those changes in the future.
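As an illustration, the two key forms can be checked mechanically. This is a hedged sketch: the regular expressions below approximate the formats described above and are not taken from UCMDB itself.

```python
import re

# Sketch: classify UCMDB v8-style host keys as "weak" (IP address,
# a single space, then a UCMDB domain name) or "strong" (a MAC address).

WEAK_KEY = re.compile(r"^\d{1,3}(\.\d{1,3}){3} \S+$")
MAC_KEY  = re.compile(r"^([0-9A-Fa-f]{2}[:-]?){5}[0-9A-Fa-f]{2}$")

def classify_host_key(key):
    if WEAK_KEY.match(key):
        return "weak"
    if MAC_KEY.match(key):
        return "strong"
    return "unknown"

print(classify_host_key("15.14.13.250 DefaultProbe"))  # the weak form
print(classify_host_key("00:1A:2B:3C:4D:5E"))          # the strong form
```

A check like this is useful during conduit testing, to catch provider records whose keys would fail host reconciliation before they reach the CMDB.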

Non-hosts

All other CI types in BDM are reconciled based on matching attribute values. All CI types in UCMDB have an attribute named "data_name", which is typically the "logical" name or identity of the CI. This value should normally be used for reconciling data where unique foreign keys are not present in the provider.

Step 8: Deploy the Conduit

Usage and implementation techniques for all these conduits are described in the UCMDB product documentation. The final choice of conduit should be based on the consumer needs, the availability of domain expertise, and previously implemented or reusable conduits. Preparation for creating and testing the conduit includes:

- Obtaining authorization to access the provider source. This is typically a user id and password, key, or certificate, depending on the conduit.

- Obtaining permissions, if necessary, to access the specific provider data. This could be in the form of database permissions, application security roles/rules, or similar internal security.

- Determining how to access the provider. This could be as simple as a server name or URL, or as complex as installing a client on the DDM probe, or modifying the DDM probe to access a provider's API via .jar or .dll files.

Step 9: Test and Validate the Provider Conduit

As the conduit is deployed, testing should occur as soon and as often as possible. This technique is collectively known as the "agile" approach, and is based on incremental development, merged development/testing, and building success in small steps, allowing progress to be demonstrated earlier and more easily, and greatly decreasing the debugging phase at the end. The following steps approximate what to expect, and how to realize the earliest success and spend the least amount of time troubleshooting.

Connectivity Testing: Test connectivity to the provider with an external tool. For example, if the provider is a database, use a database query tool to connect to the database using the same credentials and other connectivity information which will be used by the conduit.

Query Testing: Develop the query by testing the same query to be used by the conduit. This will vary widely, and an exact test query may not be feasible; for example, if you are using Connect-It and are selecting a large number of attributes in the Connect-It UI, you may not want to create the same SQL query to test with. However, as much query testing as you can do will help expedite the process. Important! It is also recommended to build queries out from simple to complex. For example, if your final query will involve JOINs or other complexities, start with a simple query such as SELECT * FROM <table>; when that works, narrow it to a SELECT of specific columns; then add a WHERE clause until that works, and so on. It is easier to debug a complex query by testing each piece incrementally than to decompose the final large query into possible problem areas afterward.

Reconciliation Testing: When the query is working, begin evaluation of the reconciliation mechanism. Examine the key values, foreign keys/parent keys, etc. for exact matching with the UCMDB keys.

Mapping Testing: At this point, you may be able to move some of the functionality into the conduit and start testing. Ensure the conduit functions exactly as in the earlier testing steps. It is recommended to start with a single, simple attribute mapping. Test the conduit using this single attribute until it functions correctly. Then, add the remaining simple mappings. If you have enough experience and expertise to add many at once, with confidence that most or all of the added mappings will work, by all means do so. With all the simple mappings working, add a single complex mapping, and get that working. Repeat for all similar complex mappings. Then, move on to the next type of complexity, and repeat for all complex mappings until the entire providership is complete.

Outlier Testing and Exception Handling: The bounds and ranges of data in the provider should be well understood if the conduit is to function reliably. As much as is practical, test the conduit with "bad" data and outlier/bounds testing, and ensure that the conduit components handle invalid conditions as gracefully as possible.

Stub Testing using UCMDB itself: Provider conduits can be tested using the UCMDB UI, without initially involving any specific consumer conduit.

Parallel TQL test development: For example, federation adapters can be tested using test TQLs which contain the federated/external CI types. Consumers often (but not always) consume via TQLs, so the test TQL can be developed in parallel with the provider conduit. Once the TQL delivers the correct result, it can then be immediately used by the consumer conduit.
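The simple-to-complex query build-out can be rehearsed outside the conduit entirely. This sketch uses an in-memory SQLite database as a stand-in for the provider; the table and column names are invented for illustration.

```python
import sqlite3

# Sketch: build a test query from simple to complex against a
# throwaway copy of provider-like data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hosts (name TEXT, os TEXT, mhz INTEGER)")
conn.executemany("INSERT INTO hosts VALUES (?, ?, ?)",
                 [("web01", "Linux", 2400), ("db01", "HP-UX", 1800)])

# 1. Start with the simplest possible query.
step1 = conn.execute("SELECT * FROM hosts").fetchall()
# 2. Then narrow the selected attributes.
step2 = conn.execute("SELECT name, os FROM hosts").fetchall()
# 3. Then add the WHERE clause the conduit will actually use.
step3 = conn.execute("SELECT name FROM hosts WHERE os = 'Linux'").fetchall()
print(len(step1), len(step2), step3)
```

If step 3 returns the wrong rows, you know the fault is in the new WHERE clause, not in connectivity or column selection, which is exactly the debugging advantage of incremental build-out.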

Step 10: Modify, Test and Validate the Consumer Conduit

Once as much testing as is practical has succeeded using the UCMDB UI, the consumer conduit can be modified to use the newly provided configuration management data. The exact steps will vary by the type of consumer conduit and the nature of the actual consumer (what is at the other end of the consumer conduit), and may require the consumer to participate in the testing. The process for this should be set up as part of the Consumer Onboarding process.

Step 11: MTP/Open up Provider for Consumption

Once the consumer conduit has been tested and verified, the consumer may begin consumption of the newly provided data. A formal process should have been established previously; this process should now be followed to move the tested conduit to production status. The organization should follow standard Change Control processes to plan, initiate, execute, and document the staging to production. The process will likely involve recreating or redeploying the conduits from test to production, or the conduits may simply be staged "in place" for initial/green-field deployments.


Discovery Onboarding

There are special considerations for onboarding Discovery (DDM, if using HP) compared to other providers.

DDM Provides Multiple Ways to Get the Same Data; Plan Carefully

Discovery is itself multiple providers: it can use multiple protocols, can be customized to populate custom attributes, and is highly flexible to accommodate often unpredictable and widely varying business and technical environments and policies. Because of DDM's versatility, however, it is possible to create conflicting discoveries, where scope and protocols overlap: for example, discovering the same CI type on the same set of servers using two different protocols.

Avoid Attribute Oscillation

Conflicting discoveries can result in attribute oscillation, or "flip-flop", where a value changes based on the last discovery to run. If the attribute is flagged for change tracking, each discovery iteration can generate false changes and cause problems downstream; for example, if you are doing closed-loop Change Management to detect unplanned changes.
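One way to catch flip-flop before it pollutes change tracking is to compare attribute values across discovery runs. The following is a sketch under the assumption that you can collect per-run values somewhere; the CI and attribute names are invented, and the thresholds are illustrative only.

```python
from collections import defaultdict

# Sketch: flag attributes that alternate between a small set of values
# on successive discovery runs (oscillation), as opposed to attributes
# that changed once and stayed changed (a real change).

def find_oscillations(history, min_flips=3):
    """history maps (ci, attribute) -> list of values, one per run."""
    flagged = []
    for key, values in history.items():
        flips = sum(1 for a, b in zip(values, values[1:]) if a != b)
        if flips >= min_flips and len(set(values)) <= 2:
            flagged.append(key)
    return flagged

history = defaultdict(list)
# Two discoveries report the OS version differently: oscillation.
history[("web01", "os_version")] = ["11.0", "11i", "11.0", "11i", "11.0"]
# A single genuine change: not oscillation.
history[("web01", "memory_mb")]  = [4096, 4096, 8192, 8192, 8192]
print(find_oscillations(history))
```

Any attribute flagged this way is a candidate for the scope-overlap rationalization described below, rather than for a change record.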

Discovery Rationalization

Although much is the same as for any provider, it is easy to overlook the rationalization discipline for discovery, because discovery is usually the first provider and may mistakenly be allowed to circumvent the process, being misperceived as "ownerless" or "special" in authority. This is a mistake. You should subject DDM, or any discovery, to the same stringent process and high standards used to onboard any other provider according to the Consumer/Owner/Provider Model. Discovery is often (and should be) the ASOR for basic host, host resource, and relationship data in the CMS. A useful tool to identify and resolve discovery scope overlap is a spreadsheet which lists the CI types vs. the protocols and patterns used to discover them. Here is an actual Excel spreadsheet used for this purpose.


Note that conflicts have been identified in red and resolved conflicts in green; use whatever scheme meets your needs. "Weak Host" refers to the keys of the host CI type in UCMDB, and implies there is a conflict when weak keys are created but not strong keys. The best resolution is to remove one discovery or the other from scope, but if necessary, resolve temporally, e.g., by always running one before the other. If the "conflict" is key information, this is not a problem.
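The overlap spreadsheet can also be expressed as a simple matrix check. A sketch follows; the discovery pattern names are hypothetical, not actual DDM pattern names.

```python
# Sketch: list which discovery pattern populates which CI type, then
# flag any CI type fed by more than one pattern as a potential
# scope-overlap conflict to rationalize.

scope = [
    ("Host",      "Host_Connection_SNMP"),
    ("Host",      "Host_Connection_WMI"),
    ("Interface", "Host_Resources_WMI"),
]

def find_overlaps(scope):
    """Return CI types discovered by more than one pattern."""
    by_ci = {}
    for ci_type, pattern in scope:
        by_ci.setdefault(ci_type, []).append(pattern)
    return {ci: pats for ci, pats in by_ci.items() if len(pats) > 1}

print(find_overlaps(scope))
```

Each flagged CI type then gets the same treatment as a cell marked red in the spreadsheet: remove one pattern from scope, or order them deliberately.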


Provider Decommissioning

Provider decommissioning is easy to overlook in favor of the effort required for new or current projects. However, providers which no longer have any consumers can introduce security risks, waste resources, complicate maintenance, and add to upgrade effort. An efficient decommissioning process is both a best practice and should be a requirement for security and business risk reduction.

Triggers for Decommissioning Evaluation

The process to evaluate a provider for decommissioning should be triggered by any event which affects ongoing providership, including:

- There is no longer any consumership of that provider, and none is expected in the near term

- The provider is superseded by a new provider

- There is a change in business accountability for the data provided by a provider

On the consumer side, it is a best practice to trigger an evaluation of provider decommissioning as part of the consumer decommissioning process. In practice, whichever side initiates the decommissioning process does not matter, as long as it fits into your organization’s structure and overall way of doing things.

Reference Process

The following steps serve as a best-practice basis for creating an actual practice that meets the needs of individual organizations. Once the decommissioning trigger event has been rationalized (it has been verified that the provider should in fact be decommissioned):

Security: The conduit's access to the provider should be revoked. If the conduit shares a common userid with other non-CMDB consumers, such as report-writing applications or data warehousing, then the password should be changed for all consumers using that userid. This protects the provider from any culpability for misuse of "stale" credentials used by the former conduit.

Conduit Deactivation: The consumer conduit should be removed. This may include:

- Removing the federation adapter or other conduit representation of the provider in the UCMDB

- Removing wrapper.jar from the DDM probe's CLASSPATH

- Removing a Java client .jar

- Cleaning up folders created to hold extract files

- Removing any ETL or other intermediary constructs, such as Connect-It scenarios and schedules

- Removing and archiving any TQLs, views, enrichments, impact analyses, reports, enumerations, classes, or links from UCMDB and DDM which are used only by that consumer

- Removing and archiving any DDM patterns used only by that consumer

Remove Provider CI Types: Remove external CI types in the CMDB which are specific to that provider.

Remove Stale CIs: Remove CIs replicated from that provider, if appropriate.

Remove Query and Analytics Content: Archive TQLs, views, enrichments, or correlation rules used specifically for that provider.

UCMDB/DDM Package Archival: Any DDM or UCMDB constructs removed (such as items 3g or 3h above) should be exported to a package for ease of archival. The packages should then be removed (if present) from the UCMDB customer_packages folder on the UCMDB server.

Properly Archive: The decommissioning event should be documented in whatever journaling or archiving facility is appropriate for that organization. At least a "paper trail" should be maintained and filed for future reference. Decommissioning should be considered a memorialized event and treated as part of the company history.

Feedback Loop for Continual Service Improvement: Any valuable experience, lessons learned, or how-to knowledge created for or by the original provider onboarding, in terms of the CMS, should be extracted and preserved by the Configuration Management group for Continual Service Improvement purposes.

