Topical Parameterization, Part I: Individual Parameters

This is the first of three pages on the topic of parameterization, dealing mostly with the conceptual properties of individual parameters.  The second page deals with sets of topical parameters, while the third discusses certain practices for their evaluation.

My Standard Caution About Abstractions applies to this page.

Introduction

In this context, parameterization is the practice of identifying one or more ways to quantify a topic1.  We use it in service of two distinct purposes:

(1) Describing something that exists

(2) Prescribing something we want or need

Both purposes require inputs of some form, based on technical work done elsewhere.  The technical details are usually (but not always) established by a Subject Matter Expert2 rather than by a System Engineer, who often3 operates as no more than a moderator, coordinator, or honest broker.

Basics

Parameters are identified descriptively when simply characterizing the way things are.  The dingus to be described already exists, and some information about it is available4.   For the sake of convenience, descriptive parameterization can be divided into two prototypical practices:

– In a pure white box situation, the analyst has information about the dingus’ composition.  That information can be used to build a model (e.g., a simulation) implementing cause-and-effect in order to characterize it under some given circumstance.

– The pure black box situation is the philosophical opposite: no information is available about what the dingus is or does, and we have to rely solely on information about externally observable relationships with the rest of the observable universe.  Internal cause-and-effect relationships cannot be modeled; models can do no more than emulate the dingus.

In real-world development, it is not unusual to use some of both practices when parameterizing the dingus.
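The two prototypical practices can be sketched in a few lines of Python.  The RC-filter example, and every name and value in it, is my own invention for illustration; the point is only the contrast between modeling cause-and-effect from composition versus emulating externally observed behavior:

```python
import math

# White box: we know the dingus is an RC low-pass filter, so we can model
# cause-and-effect directly from its composition (the R and C values).
def white_box_gain(freq_hz, r_ohms=1000.0, c_farads=1e-6):
    """Predict gain from internal structure."""
    omega = 2 * math.pi * freq_hz
    return 1.0 / math.sqrt(1.0 + (omega * r_ohms * c_farads) ** 2)

# Black box: we only have externally observed (stimulus, response) pairs,
# so the best we can do is emulate the observed relationship.
observations = [(f, white_box_gain(f)) for f in (10, 100, 1000, 10000)]

def black_box_gain(freq_hz):
    """Emulate by interpolating between observed points; no internal model."""
    pts = sorted(observations)
    for (f0, g0), (f1, g1) in zip(pts, pts[1:]):
        if f0 <= freq_hz <= f1:
            t = (freq_hz - f0) / (f1 - f0)
            return g0 + t * (g1 - g0)
    raise ValueError("outside observed range; a black box cannot extrapolate safely")
```

Note that the black-box emulation raises an error outside the observed envelope, which is exactly the limitation the prose describes: without internal cause-and-effect, the model can only reproduce what has been seen.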

Topics are parameterized prescriptively when requirements are being written.  They represent things the way we wish they’d be.  In most cases, the values for these parameters are prospective5, and might require adjustment or compromise during any responding development project.  Prescriptive parameterization practices can look deceptively like the black box concept of descriptive practices, but their prospective nature makes them qualitatively different.

Qualitative Properties of Good Parameters

Considered by itself, each topical parameter should offer three basic properties:

(1) Relevance6

Parameters are relevant when we can show that a “vertical” relationship exists between the layers of the dingus’ decomposition.  Such relationships are often, but not always, expressed as nested sets of equations, tabulations, or some combination of the two.  Sometimes, they can be expressed only in prose.  The vertical relationship is necessary in order to be explicit about how changes in parameter value propagate up and down the hierarchy.  When dealing with “required” values, certain aspects of this concept are sometimes referred to as “traceability”.
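A minimal sketch of such a vertical relationship, using a mass budget I made up for the occasion (the hierarchy, names, and values are all hypothetical), shows how a change at the bottom propagates explicitly to the top:

```python
# Hypothetical mass-budget hierarchy: each node's value is either allocated
# directly (a leaf) or derived from its children via the stated relationship.
budget = {
    "vehicle": ["structure", "propulsion", "payload"],   # parent -> children
}
leaf_mass_kg = {"structure": 120.0, "propulsion": 300.0, "payload": 80.0}

def rollup(node):
    """Propagate values up the hierarchy; here the relationship is a sum."""
    if node in leaf_mass_kg:
        return leaf_mass_kg[node]
    return sum(rollup(child) for child in budget[node])

print(rollup("vehicle"))          # 500.0
leaf_mass_kg["payload"] = 95.0    # a change at the bottom...
print(rollup("vehicle"))          # ...propagates explicitly to the top: 515.0
```

Because the relationship is explicit, the effect of any change is computable in both directions, which is the essence of what “traceability” buys us.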

Near the top of a prescriptive hierarchy, superior parameters are often:

a) those abstracted from Operational Requirements or formally established operational doctrine

b) those abstracted from General Specifications assigned by the Statement of Work

c) those abstracted from the Regulatory environment

d) intrinsic to the approved conceptual design and, therefore, to technologies inherent therein7

Some parameters are “technology specific”.  The use of such parameters when prescribing requirements is generally (but not always) deprecated: to the extent feasible, requirements are most flexibly stated as specific to the need, not to the implementation.

Sometimes (however) it isn’t possible to parameterize prescriptively without being technology-specific.  Legacy practice explicitly approved (authenticated) development requirements at PDR, so that developer and customer had an opportunity to concur on a given CI’s requirement topics, parameterizations, and values8 in the same administrative venue where we agreed to restrict the technical approaches to be used in the design.  For example, significant changes in parameterization occurred when digital technologies were introduced to replace analog devices, causing us to change the way certain requirements were parameterized9.

When defining these relationships, validated models are preferable to speculative models.  Speculative relationships10 require explicit tasking for validation and bi-directional traceability with their associated parameters.  Failure of that project will threaten the validity of not only the requirements at hand, but all peer and subordinate parameters that were predicated on the (failed) validation.  The use of speculative models when parameterizing or evaluating requirements constitutes a significant program risk, which should be formally recognized and managed.  Such usage often suggests low Technology Readiness Level.

Unfortunately, when trying to incorporate radical changes in technology11, there will often be no validated option available to establish relevance.  In that case, speculative models must be relied on to derive requirements.

(2) Observability12

Parameters that can be evaluated by more than one (valid!) technique are better than those accessible through just one, suggesting13 more mature technology, and more reliable verification.  Evaluation techniques that include both test and analysis can be more robustly verified than those having only one of those two general methods:

– Analysis ensures that we can reasonably assess sensitivity to as-built condition, decoupling our assessment of the design from the selection of any specific UUT.  It also allows variation of circumstances at will, independent of our ability to simulate them in a design-specific, physical test setup, permitting cost-effective assessment.

– Test ensures that all characteristics of the UUT are present, decoupling us from the perspicacity of the analysis community14.  Unlike analysis, test permits the identification and verification of unintended functionality and unsuspected sensitivities15.

A hardheaded approach to the combination and coordination of both test and analysis for any given dingus parameter is a useful hallmark of a good System Engineer.

(3) Contrast16

Two notions of contrast are important:

(a) The in-tolerance range of high-contrast parameters is a significant fraction of the nominal value.  Sometimes, it is useful to normalize (or otherwise non-dimensionalize) a parameter in order to provide for good contrast.  On the other hand, normalization can also mask desired behavior, and can be difficult to articulate when writing requirements.

(b) Good parameters have a high ratio of signal to noise (SNR).  That is, the value tested or calculated contains relatively little random variation due to extraneous, uncontrolled contributors.  Noise can arise due to (at least) two major sources: i) sensitivity to circumstances (including tolerances) and ii) propagation of uncertainty.  Both sources can obtain in either test or analytical methodologies.  Sometimes, the SNR will degrade due to the details of the (eventual) design, but that cannot always be accounted for when deriving requirements17.

Attending to the concept of contrast in the parameters mitigates the potential for the infamous “small difference in small numbers” problem and, therefore, tends toward better stability in the project-specific development process18.
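Both notions of contrast can be made concrete with a few lines of Python.  The measurement values, the nominal, and the tolerance below are hypothetical, and the particular formulas are my framing of the two notions rather than anything prescribed by the text:

```python
import statistics

# Hypothetical: repeated evaluations of a parameter whose requirement is
# "nominal 100 units, tolerance +/- 10 units".
measurements = [101.2, 99.8, 100.5, 98.9, 100.1, 100.9, 99.4, 100.7]
nominal, tolerance = 100.0, 10.0

# (a) Contrast: the in-tolerance range as a fraction of the nominal value.
contrast = (2 * tolerance) / nominal    # 0.2, i.e., a 20% band

# (b) SNR: the value of interest against the random spread contributed by
# extraneous, uncontrolled sources (circumstances, tolerances, propagated
# uncertainty).
mean = statistics.mean(measurements)
noise = statistics.stdev(measurements)
snr = mean / noise
```

A parameter with a wide in-tolerance band relative to its nominal, and a spread that is small relative to the values being compared, is one where good and bad values are easy to tell apart.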

Techniques

The descriptive mode relies almost entirely on analysis techniques19, although those techniques might be supported by test, demonstration, and inspection.  Execution of the prescriptive mode may also employ synthesis20 when determining what influence we want to effect on operational circumstances.

At least three distinct methodologies for discovering parameters are available, depending on the nature of the inputs:

(a) When the inputs take the form of equations, parameters can typically be extracted from each equation term.  This analysis is by far the most straightforward methodology.

(b) The parameters for tabulations can usually be taken directly from either the column headings or row headings.  A real-world analysis example has been provided here.

(c) Extracting parameters from prose21 can be much more difficult than either of the other two techniques, requiring significant technical synthesis.  Real-world examples are provided here and here.  A couple of subordinate Qualitative Requirements sub-cases are discussed here and here.
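Methodology (a) is mechanical enough to automate crudely.  As a sketch (the governing equation is invented for illustration), each distinct symbol appearing in the equation's terms becomes a candidate parameter:

```python
import re

# Hypothetical governing equation, supplied as text by the SME.
equation = "range = velocity * endurance - reserve"

# Split off the defined quantity, then harvest identifiers from the terms.
lhs, rhs = (side.strip() for side in equation.split("=", 1))
candidates = sorted(set(re.findall(r"[A-Za-z_]\w*", rhs)))

print(lhs, "<-", candidates)   # range <- ['endurance', 'reserve', 'velocity']
```

Each candidate still needs a human judgment on relevance, observability, and contrast; the extraction step only proposes the list.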

Structure

Based on the foregoing, a minimum set of data elements can be defined (see Table 1).  It will probably be noted that the concept of some underlying “technical property” is not addressed in Table 1.  This may seem odd, but it allows the present concept to evolve as it is brought into focus when parameter sets are considered.  Concrete identification of properties (whether physical or otherwise) is deferred to the immediately subsequent discussion of that topic, at which point Table 1 will be completed.

Table 1: Parameter Data Elements (partial)

*Topic: Pointer to a previously-identified topic22.
Nomenclature: A noun-phrase used to identify the item.
Description: Enough words to succinctly indicate the intended concept.
*Source: Pointer to the input material leading to definition of the parameter.
Rationale: Succinct description of the logic leading to definition.

On to Part II: Sets of Parameters

See also the Related Examples.

Footnotes
  1. Inventing quantification for mystery topics should be regarded with deep suspicion.
  2. Who might be expert in either operations, or applicable technology, or both.
  3. But not always.
  4. Or, at least, discoverable.
  5. That is, having finite probability of being feasible (both individually and collectively).
  6. Meaningful to the project at hand.
  7. Such parameters are relevant because the superior authority said so, and need no other justification.
  8. Not, however, using those terms as I define them here.
  9. We never wrote requirements against CPU margin prior to that.
  10. That is, those based on anything other than already-verified, objective facts.
  11. That is, changes that are revolutionary, as opposed to those that are merely evolutionary.
  12. In this context, having a known means of establishing a value.
  13. But not ensuring!
  14. That is, they can only model what they know about, and they usually omit anything they consider unimportant…and, sometimes, they have a hard time admitting that they might have omitted anything at all (see Cognitive Bias).
  15. Both of which can get kind of exciting, especially when they happen at the same time.
  16. Of appreciable use in discriminating between good values and bad values.
  17. That is, sometimes you do your best and things still don’t work out.
  18. Stability means, for example, that the requirements don’t have to change as much as we learn more.
  19. In the classical definition of the term: to identify the elements and describe the relationships between them.
  20. Which, during system development, is usually a prospective process.
  21. Sometimes referred to as “word problems”.
  22. That is, if you haven’t already identified it, you’re doing things backwards.