pmapi(3) — Linux manual page

PMAPI(3)                 Library Functions Manual                PMAPI(3)

NAME

       PMAPI - introduction to the Performance Metrics Application
       Programming Interface

C SYNOPSIS

       #include <pcp/pmapi.h>

        ... assorted routines ...

       cc ... -lpcp

DESCRIPTION

       Within the framework of the Performance Co-Pilot (PCP), client ap‐
       plications are developed using the Performance Metrics Application
       Programming  Interface (PMAPI) that defines a procedural interface
       with services suited to the development  of  applications  with  a
       particular interest in performance metrics.

       This description presents an overview of the PMAPI and the context
       in  which PMAPI applications are run.  The PMAPI is more fully de‐
       scribed in the Performance Co-Pilot Programmer's  Guide,  and  the
       manual pages for the individual PMAPI routines.

PERFORMANCE METRICS - NAMES AND IDENTIFIERS

       For a description of the Performance Metrics Name Space (PMNS) and
       associated terms and concepts, see PCPIntro(1).

       Not  all  PMIDs  need be represented in the PMNS of every applica‐
       tion.  For example, an application  which  monitors  disk  traffic
       will  likely  use a name space which references only the PMIDs for
       I/O statistics.

       Applications which use the PMAPI may have independent versions  of
       a  PMNS, constructed from an initialization file when the applica‐
       tion starts; see pmLoadASCIINameSpace(3), pmLoadNameSpace(3),  and
       PMNS(5).
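
       For example, a minimal sketch of loading a private name space at
       application startup (the file name ./mypmns is hypothetical; the
       second argument permits duplicate PMIDs in the name space):

          int sts = pmLoadASCIINameSpace("./mypmns", 1);
          if (sts < 0)
              fprintf(stderr, "pmLoadASCIINameSpace: %s\n", pmErrStr(sts));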

       Internally (below the PMAPI) the implementation of the Performance
       Metrics  Collection  System (PMCS) uses only the PMIDs, and a PMNS
       provides an external mapping from a hierarchic taxonomy  of  names
       to  PMIDs that is convenient in the context of a particular system
       or particular use of the PMAPI.  For the applications  programmer,
       the  routines  pmLookupName(3)  and  pmNameID(3) translate between
       names in a PMNS and PMIDs, and vice versa.  The PMNS may be
       traversed using pmGetChildren(3) and pmTraversePMNS(3).  The
       pmFetchGroup(3) functions combine metric name lookup, fetch, and
       conversion operations.
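
       As an illustrative sketch (assuming an established PMAPI context,
       and that the metric kernel.all.load exists in the PMNS), a name
       may be translated to its PMID as follows:

          const char *names[] = { "kernel.all.load" };
          pmID pmids[1];
          int sts = pmLookupName(1, names, pmids);
          if (sts < 0)
              fprintf(stderr, "pmLookupName: %s\n", pmErrStr(sts));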

PMAPI CONTEXT

       An  application  using the PMAPI may manipulate several concurrent
       contexts, each associated with a source  of  performance  metrics,
       e.g.  pmcd(1)  on  some  host, or a set of archives of performance
       metrics as created by pmlogger(1).

       Contexts are identified by a ``handle'',  a  small  integer  value
       that  is returned when the context is created; see pmNewContext(3)
       and pmDupContext(3).  Some PMAPI  functions  require  an  explicit
       ``handle''  to identify the correct context, but more commonly the
       PMAPI function is executed in the ``current'' context.   The  cur‐
       rent context may be discovered using pmWhichContext(3) and changed
       using pmUseContext(3).
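
       The following sketch creates two contexts and switches between
       them (the archive path is hypothetical; ``local:'' requests
       pmcd(1) on the local host):

          int live = pmNewContext(PM_CONTEXT_HOST, "local:");
          int play = pmNewContext(PM_CONTEXT_ARCHIVE,
                                  "/var/log/pcp/pmlogger/myhost/20250811");

          if (live >= 0 && play >= 0) {
              pmUseContext(live);     /* subsequent calls use the live host */
              /* ... pmFetch(3), etc ... */
              pmUseContext(play);     /* switch to the archive context */
              /* ... pmFetch(3), etc ... */
              pmDestroyContext(live);
              pmDestroyContext(play);
          }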

       If  a  PMAPI  context  has not been explicitly established (or the
       previous    current    context    has    been     closed     using
       pmDestroyContext(3)) then the current PMAPI context is undefined.

       In  addition to the source of the performance metrics, the context
       also includes the instance profile and collection time (both
       described below) which control how much information is returned,
       and when the information was collected.

INSTANCE DOMAINS

       When performance metric values are returned across the PMAPI to  a
       requesting  application,  there  may  be more than one value for a
       particular metric.  Multiple values, or instances,  for  a  single
       metric  are  typically  the result of instrumentation being imple‐
       mented for each instance of a set of similar  components  or  ser‐
       vices  in a system, e.g.  independent counts for each CPU, or each
       process, or each disk, or each system call type, etc.  This multi‐
       plicity of values is not enumerated in the name space but  rather,
       when  performance  metrics  are  delivered  across  the  PMAPI  by
       pmFetch(3), the format of the result accommodates values  for  one
       or more instances, with an instance-value pair encoding the metric
       value for a particular instance.

       The instances are identified by an internal identifier assigned by
       the agent responsible for instantiating the values for the associ‐
       ated  performance  metric.   Each instance identifier has a corre‐
       sponding external instance identifier name (an ASCII string).  The
       routines pmGetInDom(3), pmLookupInDom(3) and pmNameInDom(3) may be
       used to enumerate all instance identifiers, and to  translate  be‐
       tween internal and external instance identifiers.
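
       By way of a sketch (assuming pmid holds the PMID of a metric
       with multiple instances), the instance domain may be enumerated
       as follows:

          pmDesc desc;
          int *instlist, i, n;
          char **namelist;

          if (pmLookupDesc(pmid, &desc) >= 0 && desc.indom != PM_INDOM_NULL) {
              n = pmGetInDom(desc.indom, &instlist, &namelist);
              for (i = 0; i < n; i++)
                  printf("instance %d is \"%s\"\n", instlist[i], namelist[i]);
              if (n > 0) {
                  free(instlist);     /* allocated by pmGetInDom(3) */
                  free(namelist);
              }
          }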

       All  of the instance identifiers for a particular performance met‐
       ric are collectively known as an instance domain.   Multiple  per‐
       formance metrics may share the same instance domain.

       If  only  one  instance is ever available for a particular perfor‐
       mance  metric,  the  instance  identifier  in  the   result   from
       pmFetch(3) assumes the special value PM_IN_NULL and may be ignored
       by  the  application,  and only one instance-value pair appears in
       the result for that metric.  Under these circumstances, the  asso‐
       ciated instance domain (as returned via pmLookupDesc(3)) is set to
       PM_INDOM_NULL  to  indicate that values for this metric are singu‐
       lar.

       The difficult issue of transient performance  metrics  (e.g.  per-
       filesystem  information,  hot-plug  replaceable  hardware modules,
       etc.) means that repeated requests for the same  PMID  may  return
       different numbers of values, and/or some changes in the particular
       instance identifiers returned.  This means applications need to be
       aware  that  metric instantiation is guaranteed to be valid at the
       time of collection only.  Similar rules apply to the transient se‐
       mantics of the associated metric values.  In general  however,  it
       is expected that the bulk of the performance metrics will have in‐
       stantiation  semantics that are fixed over the execution life-time
       of any PMAPI client.

THE TYPE OF METRIC VALUES

       The PMAPI supports a wide range of format and type  encodings  for
       the  values of performance metrics, namely signed and unsigned in‐
       tegers, floating point numbers, 32-bit and 64-bit encodings of all
       of the above, ASCII strings (C-style, NULL byte  terminated),  and
       arbitrary aggregates of binary data.

       The type field in the pmDesc structure returned by pmLookupDesc(3)
       identifies the format and type of the values for a particular per‐
       formance metric within a particular PMAPI context.

       Note that the encoding of values for a particular performance met‐
       ric  may be different for different PMAPI contexts, due to differ‐
       ences in the underlying  implementation  for  different  contexts.
       However  it is expected that the vast majority of performance met‐
       rics will have consistent value encoding across  all  versions  of
       all implementations, and hence across all PMAPI contexts.

       The  PMAPI supports routines to automate the handling of the vari‐
       ous value formats and types,  particularly  for  the  common  case
       where   conversion   to   a   canonical  format  is  desired,  see
       pmExtractValue(3) and pmPrintValue(3).
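
       For example (a sketch only, assuming result was returned by
       pmFetch(3) and desc by pmLookupDesc(3) for the first metric in
       the fetch), a numeric value may be converted to a canonical
       double as follows:

          pmValueSet *vsp = result->vset[0];
          pmAtomValue atom;

          if (vsp->numval > 0 &&
              pmExtractValue(vsp->valfmt, &vsp->vlist[0],
                             desc.type, &atom, PM_TYPE_DOUBLE) >= 0)
              printf("first value: %f\n", atom.d);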

THE DIMENSIONALITY AND SCALE OF METRIC VALUES

       Independent of how the value is encoded, the value for  a  perfor‐
       mance  metric is assumed to be drawn from a set of values that can
       be described in terms of their dimensionality and scale by a  com‐
       pact encoding as follows.  The dimensionality is defined by a pow‐
       er,  or  index,  in each of 3 orthogonal dimensions, namely Space,
       Time and Count (or Events, which are dimensionless).  For  example
       I/O  throughput might be represented as Space/Time, while the run‐
       ning total of system calls is Count, memory  allocation  is  Space
       and  average  service time is Time/Count.  In each dimension there
       are a number of common scale values that may be used to better en‐
       code ranges that might otherwise exhaust the precision of a 32-bit
       value.  This information is encoded in the pmUnits structure which
       is embedded in the pmDesc structure returned from pmLookupDesc(3).

       The routine pmConvScale(3) is provided to convert values in
       conjunction with the pmUnits structure that defines the
       dimensionality and scale of the values for a particular
       performance metric as returned from pmFetch(3), and the desired
       dimensionality and scale of the value the PMAPI client wishes to
       manipulate.  Alternatively, the pmFetchGroup(3) functions can
       perform data format and unit conversion operations, specified by
       textual descriptions of the desired units and scales.
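
       As a sketch (assuming atom holds a value extracted to
       PM_TYPE_DOUBLE via pmExtractValue(3), and desc describes a
       metric with Space dimensionality), conversion to Mbytes might
       look like:

          pmUnits mbytes = { .dimSpace = 1, .scaleSpace = PM_SPACE_MBYTE };
          pmAtomValue out;

          if (pmConvScale(PM_TYPE_DOUBLE, &atom, &desc.units,
                          &out, &mbytes) >= 0)
              printf("%.2f Mbytes\n", out.d);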

INSTANCE PROFILE

       The  set  of  instances  for  performance  metrics returned from a
       pmFetch(3) call may be filtered or restricted  using  an  instance
       profile.  There is one instance profile for each PMAPI context the
       application  creates,  and  each  instance profile may include in‐
       stances from one or more instance domains.

       The routines pmAddProfile(3) and pmDelProfile(3) may  be  used  to
       dynamically adjust the instance profile.
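
       For example, a sketch restricting fetches to a single instance
       (the instance name ``cpu0'' is hypothetical, and desc.indom is
       assumed to identify the metric's instance domain):

          int inst = pmLookupInDom(desc.indom, "cpu0");

          if (inst >= 0) {
              pmDelProfile(desc.indom, 0, NULL);  /* first exclude all */
              pmAddProfile(desc.indom, 1, &inst); /* then include just one */
          }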

COLLECTION TIME

       For  each  set  of  values  for  performance  metrics returned via
       pmFetch(3) there is an associated  ``timestamp''  that  serves  to
       identify  when  the  performance metric values were collected; for
       metrics being delivered from a real-time source (i.e.  pmcd(1)  on
       some  host)  this would typically be not long before they were ex‐
       ported across the PMAPI, and for metrics being  delivered  from  a
       set  of  archives,  this  would  be the time when the metrics were
       written into the archive.

       There is an issue here of exactly when individual metrics may have
       been collected, especially given their origin in potentially  dif‐
       ferent  Performance  Metric Domains, and variability in the metric
       updating frequency at the lowest level of the  Performance  Metric
       Domain.   The  PMCS  opts for the pragmatic approach, in which the
       PMAPI implementation undertakes to return all of the metrics  with
       values  accurate  as of the timestamp, to the best of our ability.
       The belief is that the inaccuracy this introduces  is  small,  and
       the additional burden of accurate individual timestamping for each
       returned  metric value is neither warranted nor practical (from an
       implementation viewpoint).

       Of course, in the case of collection of metrics from multiple
       hosts, the PMAPI client must assume that the accuracy of the
       timestamps is constrained by the extent to which clock
       synchronization protocols are implemented across the network.

       A  PMAPI  application  may call pmSetMode(3) to vary the requested
       collection time, e.g. to rescan performance  metrics  values  from
       the recent past, or to ``fast-forward'' through a set of archives.
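
       As a sketch (using the struct timespec based pmSetMode(3)
       signature of recent PCP releases; earlier releases took a struct
       timeval origin and a millisecond delta), an archive context may
       be replayed with interpolated ten second sampling as follows:

          struct timespec origin = { 0, 0 };  /* normally taken from the
                                                 archive label, see
                                                 pmGetArchiveLabel(3) */
          struct timespec delta = { 10, 0 };  /* step forwards 10 seconds */
          int sts = pmSetMode(PM_MODE_INTERP, &origin, &delta);
          if (sts < 0)
              fprintf(stderr, "pmSetMode: %s\n", pmErrStr(sts));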

GENERAL ISSUES OF PMAPI PROGRAMMING STYLE

       Across  the PMAPI, all arguments and results involving a ``list of
       something'' are declared to be arrays with an associated  argument
       or  function value to identify the number of elements in the list.
       This has been done to avoid both the varargs(3) approach and  sen‐
       tinel-terminated lists.

       Where  the  size of a result is known at the time of a call, it is
       the caller's responsibility to allocate (and  possibly  free)  the
       storage,  and  the called function will assume the result argument
       is of an appropriate size.  Where a result is of variable size and
       that size cannot be known in advance (e.g.  for  pmGetChildren(3),
       pmGetInDom(3),   pmNameInDom(3),  pmNameID(3),  pmLookupLabels(3),
       pmLookupText(3) and pmFetch(3)) the PMAPI  implementation  uses  a
       range  of  dynamic  allocation schemes in the called routine, with
       the caller responsible for subsequently releasing the storage when
       no longer required.  In some cases this simply involves  calls  to
       free(3),   but  in  others  (most  notably  for  the  result  from
       pmFetch(3)),   special   routines   (e.g.   pmFreeResult(3)    and
       pmFreeLabelSets(3)) should be used to release the storage.

       As  a  general rule, if the called routine returns an error status
       then no allocation will have been done, and any pointer to a vari‐
       able sized result is undefined.
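
       The common pattern is therefore as in the following sketch
       (assuming pmids was filled in by an earlier pmLookupName(3)
       call):

          pmResult *result;
          int sts = pmFetch(1, pmids, &result);

          if (sts >= 0) {
              /* ... process the value sets in result->vset[] ... */
              pmFreeResult(result);   /* release storage from pmFetch(3) */
          }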

DIAGNOSTICS

       Where error conditions may arise, the functions that comprise
       the PMAPI conform to a single, simple error notification scheme,
       as follows:

       +  the function returns an integer

       +  values >= 0 indicate no error, and perhaps some positive
          status, e.g. the number of items actually processed

       +  values < 0 indicate an error, with a global table of error con‐
          ditions and error messages

       The PMAPI routine pmErrStr(3) translates error conditions into er‐
       ror messages.  By convention, the small negative values are as‐
       sumed to be negated versions of the Unix error codes as defined in
       <errno.h> and the strings returned are as per strerror(3).  The
       larger negative error codes are PMAPI error conditions.
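
       A typical check (a sketch, with names and pmids as in the
       earlier examples) might be:

          int sts = pmLookupName(1, names, pmids);
          if (sts < 0) {
              fprintf(stderr, "%s: pmLookupName: %s\n",
                      pmGetProgname(), pmErrStr(sts));
              exit(1);
          }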

       One error, common to all PMAPI routines that interact with
       pmcd(1) on some host, is PM_ERR_IPC, which indicates that the
       communication link to pmcd(1) has been lost.

MULTI-THREADED APPLICATIONS

       The original design for PCP was based around single-threaded ap‐
       plications, or more strictly applications in which only one thread
       was ever expected to call the PCP libraries.  This restriction has
       been relaxed for libpcp to allow the most common PMAPI routines to
       be safely called from any thread in a multi-threaded application.

       However, the following groups of functions and services in
       libpcp are still restricted to being called from a single
       thread; this is enforced by returning PM_ERR_THREAD when an
       attempt to call the routines in each group from more than one
       thread is detected.

       1.  Any use of a PM_CONTEXT_LOCAL context, as the DSO PMDAs that
           are called directly from libpcp may not be thread-safe.

PCP ENVIRONMENT

       Most environment variables are described in PCPIntro(1).  In addi‐
       tion, environment variables with the prefix PCP_ are used to para‐
       meterize the file and directory names used by PCP.  On each in‐
       stallation, the file /etc/pcp.conf contains the local values for
       these variables.  The $PCP_CONF variable may be used to specify an
       alternative configuration file, as described in pcp.conf(5).  Val‐
       ues for these variables may be obtained programmatically using the
       pmGetConfig(3) function.
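
       For example (a sketch; PCP_LOG_DIR is one of the standard
       pcp.conf(5) variables):

          char *logdir = pmGetConfig("PCP_LOG_DIR");
          printf("archives and logs live under %s\n", logdir);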

SEE ALSO

       PCPIntro(1), PCPIntro(3), PMDA(3), PMWEBAPI(3), pmGetConfig(3),
       pcp.conf(5), pcp.env(5) and PMNS(5).

Performance Co-Pilot               PCP                           PMAPI(3)
