Business Rules and Recommended Practices

CMS takes a methodology-agnostic approach to software engineering. Irrespective of the methodology applied, CMS prescribes some required practices. The following business rules are grouped by software development practice. Every practice is compatible with any development life cycle methodology, including Waterfall, Scrum, and Extreme Programming.

The provided rationale for each BR and RP should aid in understanding and tailoring application development within the CMS Processing Environments. Each practice area presents the BRs—the mandatory guidance—first, followed by the RPs. If the TRB promotes a recommended practice to a business rule, it will be listed as the next business rule in the series.

Table 2. CMS Application Development Practice Areas, Business Rules, and Recommended Practices presents the organization of the BRs and RPs for this chapter.

Table 2. CMS Application Development Practice Areas, Business Rules, and Recommended Practices

Practice Area

Business Rules

Recommended Practices

Methodology

BR-ADM-1, BR-ADM-2

N/A

Software Architecture

BR-SA-1 through BR-SA-10, BR-SA-14 through BR-SA-16

RP-SA-11 through RP-SA-13

Software Design

BR-SD-1 through BR-SD-2

RP-SD-3 through RP-SD-8

Software Coding

BR-SC-1 through BR-SC-2

RP-SC-3 through RP-SC-8

Software Quality

BR-SQ-1 through BR-SQ-6

RP-SQ-7 through RP-SQ-9

Secure Software

BR-SS-1 through BR-SS-7

RP-SS-8

Engineering Documentation

BR-ED-1, BR-ED-2

RP-ED-3

System Maintenance

N/A

RP-SM-1 through RP-SM-3

Data and Database Management

BR-DBM-1 through BR-DBM-3

N/A

Software Configuration Management

BR-SCM-1, BR-SCM-2, BR-SCM-4

RP-SCM-3

Defect and Issue Tracking

BR-DIT-1, BR-DIT-2

RP-DIT-3

Software Build and Integration

BR-SBI-1 through BR-SBI-3

RP-SBI-4, RP-SBI-5

Packaging and Delivery

BR-PD-1 through BR-PD-3

RP-PD-4 through RP-PD-6

Deployment

BR-D-1, BR-D-2

RP-D-3 through RP-D-6

Release Management

N/A

RP-RM-01

Application Development Methodology

CMS has adopted the following Application Development Methodology (ADM) business rules.

BR-ADM-1: Use of the CMS Life Cycle Is Mandatory

The CMS Target Life Cycle (TLC) is required of all Information Technology projects, whether new or existing.

Rationale

The TLC is the official life cycle for CMS IT projects, as required by the CIO. It supersedes the XLC.

Related CMS ARS Security Controls include: SA-5 - Information System Documentation.

BR-ADM-2: The Development Methodology and Artifacts Must Be Documented

The CMS TLC does not mandate the use of a specific development methodology, but it requires that the chosen methodology and all associated project artifacts be defined and documented. While the specific artifacts produced and their structure will vary based on the project requirements and the chosen development methodology, they should provide comprehensive, well-organized coverage of key topics including:

  • Business Planning
    • Business need, alternatives, development options
    • Program governance
  • Architecture and Design
    • Solution architecture and interface control diagrams. This may be the System Design Document (SDD) as required by CFACTS
    • Relationship between the architecture and associated code components
    • Data archiving and Reporting
    • Test Plans and Reports
  • Software
    • Developed software code, including any configuration files, to support the installation and operations of the information system
  • Operations and Maintenance
    • Operational guide for the solution, including installation, failover and restoration guides

PREFERRED

CMS OIT provides resources that support IT Governance, including resources for the TLC.

Rationale

Both waterfall and Agile are classes of methodologies, not specific methodologies. Documenting the development methodology makes it possible to set expectations for all parties regarding the process of development and the expected artifacts. It is important to know what to expect specifically from a project team to best ensure the project team addresses all required activities and that no activities are inadvertently overlooked.

Since most methodologies can be tailored, it is important to explicitly describe and share the tailoring approach with all relevant parties. For example, a project team using the XP methodology might document its adoption of test-driven development. Project managers could then expect unit test code to be written before functional code. A project team using the Scrum methodology would be expected to create and manage a product backlog.

Software Architecture

This topic addresses the business rules and recommended practices for software architecture (SA) and network architecture. A number of these items originally appear in some form in either the Foundation or Network Services sections. They are cross-referenced and discussed here to elaborate on CMS requirements from the application development point of view.

BR-SA-1: Use CMS Shared Services

It is the software developer’s responsibility to research the available CMS Enterprise Shared Services, including enterprise shared services and common platform services, as stated in the TRA Foundation Principles topic on Reuse.

At the time of publication, CMS Shared Services include:

  • Identity Management System (IDM)
  • Enterprise Portal
  • Master Data Management (MDM)

System developers and maintainers must use applicable shared services. The business and technical rationale for any situations which require deviation from the use of Shared Services should be reviewed with the TRB during a Consult or Design session.

Rationale

Use of shared services reduces both data and code / logic redundancy and centralizes functions within the agency in accordance with the Federal IT Shared Services Strategy.

Shared services also help improve security because typical security issues have already been addressed and the services tested and used by others.

BR-SA-2: Integrate with the CMS Identity Management Services

All CMS Enterprise applications must use a CMS-approved identity management system. Any exception constitutes a security, operational, and architectural risk that should be reviewed by the TRB as part of a Consult or Design session.

Rationale

The IDM shared service provides a single source of identity management within the CMS Enterprise. This shared service reduces user effort in switching between applications. The typical applications include the IDM Lightweight Directory Access Protocol (LDAP), IBM Resource Access Control Facility (RACF), or CMS Active Directory (AD). For specific implementation guidance, please contact the TRB.

BR-SA-3: No Custom Application Code Is Permitted in the Presentation Zone

The CMS Presentation Zone, which houses edge services, supports static content for CMS applications. Under no circumstances may application services in the Presentation Zone write to persistent storage in that zone (a) any data submitted by a client or (b) sensitive information passed to the client.

Commercial Off-the-Shelf (COTS) software packages are exempt from this restriction.

Please note that this rule does not include any code needed to support the configuration of edge resources (e.g. API gateway).

Rationale

Static content such as Hypertext Markup Language (HTML) and JavaScript are allowed because they run in the browser, but files such as PHP: Hypertext Preprocessor (PHP) files and Java Server Pages (JSP) files are not permitted because they execute on servers in the Presentation Zone.

A Presentation Zone represents a so-called “Demilitarized Zone” (DMZ) and is Internet facing. Due to its role, the Presentation Zone represents a “less trusted” zone than the Application or Data Zones.

BR-SA-4: Use CMS-Validated Mediation and Data Access Services to Access Data in the Data Zone

Applications must use Data Access Services, incorporating mediation principles, rather than accessing databases directly. The CMS standard is to design and implement data access services that abstract data sources. Mediation principles, implemented within Data Access Services, obfuscate the access requirements of the data source.

Thus, direct access from applications to databases, such as using Java Database Connectivity (JDBC) or Open Database Connectivity (ODBC), is not permitted.
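Though CMS does not prescribe a particular implementation, the separation can be sketched as follows (Python used for brevity; the interface and field names are hypothetical). Application code depends only on an abstract data access service, never on a database driver:

```python
from abc import ABC, abstractmethod


class ClaimStore(ABC):
    """Abstract data access service; hides the data source (illustrative)."""

    @abstractmethod
    def find_claim(self, claim_id: str) -> dict:
        ...


class InMemoryClaimStore(ClaimStore):
    """Stand-in implementation; production would call a mediated data service."""

    def __init__(self, records: dict):
        self._records = records

    def find_claim(self, claim_id: str) -> dict:
        # Return only the fields the caller needs, never the raw record
        rec = self._records[claim_id]
        return {"claim_id": claim_id, "status": rec["status"]}


def claim_status(store: ClaimStore, claim_id: str) -> str:
    # Application code sees only the service interface, never JDBC/ODBC
    return store.find_claim(claim_id)["status"]
```

A Data Zone implementation of the same interface could then call the mediated data service without any change to application code.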

If there is a need to deviate from this rule, the business owner, system owner, ISSO, and application developer are strongly encouraged to discuss the requirements and proposed architecture with the CMS TRB, including compensating controls and any analysis of alternatives. The CMS TRB can provide guidance, but the final decision on any deviation from this rule rests with the business owner, who must accept any associated risk.

Rationale

There are several advantages to using this two-part system between the Application and Data Zones:

  1. Decoupling applications from data stores (location transparency).
  2. Potential scalability advantage, allowing for database “sharding.” (Sharding is horizontal partitioning of an application or database. Typically, horizontal partitioning of database instances will reduce the number of rows in any one instance. Each instance has the same schema but (potentially) different rows.)
  3. Ability to perform maintenance on the data stores during outage windows, provided applications are using asynchronous queues and can tolerate a delayed response.
  4. Additional security provided by using a previously tested and trusted service rather than direct database access.
  5. Additional security because database access credentials are present only in the Data Zone.
  6. Additional security because data requests and responses can be validated and inspected.

Disadvantages include:

  1. Few COTS products offer messaging support as an alternative to JDBC or ODBC, for example.
  2. Additional cost and complexity from creating data services and obtaining the mediation layer.

Related CMS ARS Security Controls include: CA-3 - Information Exchange, CA-3(6) - Supplemental: Transfer Authorizations.

BR-SA-5: No Long-Term, Persistent Sensitive Application Data Storage in the Presentation or Application Zones

Personally Identifiable Information (PII), Protected Health Information (PHI), or other sensitive data may not be stored in the Application Zone. Other non-sensitive data, such as reference data or caches, may be stored in the Application Zone to improve efficiency.

Public data may be stored indefinitely in the Application and Presentation Zones to provide better performance. In-memory caches are permitted in the Application Zone as a performance enhancement. These caches must be configured to purge expired data. CMS permits storing sensitive, PII, or PHI data temporarily in the Application Zone, but no longer than a maximum storage time of six (6) hours from the time the whole file or record is received for processing or for transfer to the Data Zone. Temporary files and cached data must be removed from the Application Zone once the transfer or processing is confirmed and successful.

This business rule also applies to message queues.
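A minimal sketch of the required purge behavior, assuming an in-memory cache (Python; names are illustrative, and the six-hour limit for sensitive data would replace the short TTL shown):

```python
import time


class TtlCache:
    """Minimal in-memory cache that purges expired entries (illustrative)."""

    def __init__(self, ttl_seconds: float):
        self._ttl = ttl_seconds
        self._data = {}  # key -> (value, expiry time)

    def put(self, key, value):
        self._data[key] = (value, time.monotonic() + self._ttl)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() >= expires:
            del self._data[key]  # purge expired data on access
            return None
        return value
```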

Rationale

In keeping with the defense-in-depth strategy of the CMS TRA Multi-Zone Architecture, production and sensitive data should be persisted in the Data Zone. The TRB established the time limit for this rule.

BR-SA-6: Network Communications Must Meet the TRA Rules for Encryption

Please refer to the TRA Network Services section, Security Services chapter, for the latest rules.

Related CMS ARS Security Controls include: CA-3 - Information Exchange, CA-3(6) - Supplemental: Transfer Authorizations.

BR-SA-7: Substantive Changes to the Architecture, Products, or Technology of an Existing Application Must Be Documented and Reviewed by the CMS TRB

Substantive changes to an existing application must be reviewed by the TRB to ensure compliance with CMS architectural standards and CMS and federal security standards. A Security Impact Analysis (SIA) must be conducted whenever the architecture, products, or technology of an existing system undergo substantive changes.

Rationale

A substantive change may introduce security vulnerabilities and should be discussed with CMS ISPG and the TRB before committing to a course of action.

Related CMS ARS Security Controls include: RA-3 - Risk Assessment and CM-4 - Impact Analysis.

BR-SA-8: Logging Must Be Configurable and Use Common Platform Standards

Application logging capability must be configurable, allowing system operators to reconfigure the system for file-based logging, database logging, or network (UNIX® Syslog)-based logging.

Programming Language-specific guidance:

  • Java programs must use a logging framework such as Log4J, Apache Commons Logging, or the Java Logging API
  • .NET programs must use Log4Net compatible logging

Applications written in other programming languages should attempt to use Log4J-compatible output logs and configuration files if available.

The preferred logging format is the Common Log Format (CLF), although it is a good practice to coordinate this with the hosting provider and system operator. Note: If CLF is used, the ISO 8601 date format is not required because CLF uses a different date representation.

Rationale

Configurability allows CMS flexibility in deployment.

Use of de facto and common file formats reduces the processing burden for system operations and security.

Related CMS ARS Security Controls include: AU-2 - Event Logging, AU-3 - Content of Audit Records, AU-5 - Response to Audit Logging Processing Failures, AU-7 - Audit Record Reduction and Report Generation, AU-8 - Time Stamps, AU-9 - Protection of Audit Information, AU-10 - Non-Repudiation (High), and AU-11 - Audit Record Retention.

BR-SA-9: Systems Must Define Metrics for IT Health Monitoring

It is critical to provide metrics relevant to the health of the overall system environment, whether cloud or data center based, to measure whether elements are operating as expected and to enable proactive response to emerging problems. Note that the metrics utilized in IT performance monitoring are similar to those used for health monitoring. Additional information regarding performance monitoring is available in the Infrastructure Services section, Application Performance Monitoring chapter.

Rationale

To assess the IT health of an application, it is essential to evaluate various metrics. For data center environments, this typically includes metrics relating to CPU, network, storage, and memory usage as well as application and database service statistics. For cloud environments, while some of those same metrics may apply to virtualized systems, the distributed nature of cloud architectures and the use of cloud native services requires considering different metrics. These could include requests per minute, response duration, server/node availability, average compute and storage costs, latency, and others. Equally important is adhering to industry de facto and CMS IT standards for instrumentation and gathering such metrics using CMS’s monitoring infrastructure.

BR-SA-10: Applications in CMS Data Centers May Not Use Some Native Email Protocols

CMS prohibits the use of Messaging Application Programming Interface (MAPI) and Internet Mail Access Protocol (IMAP) protocols by business applications within a CMS data center or cloud enclave. Simple Mail Transport Protocol (SMTP) may be used only to connect to an Enterprise Email as a Service relay.

PREFERRED

CMS recommends the use of CMS Enterprise Email as a Service for outbound mail. CMS Hybrid Cloud maintains SMTP relays that provide TLS-encrypted, authenticated mail connections from all CMS environments. Use of these is required by September 1, 2024.

To send email, applications must use message queuing or a web service to send a message through CMS internal SMTP relay servers. These relay servers send the outbound message via the CMS email services infrastructure (the Microsoft Exchange Web Services (EWS) protocol is permitted because it is Web Service-based).
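The queue-fronted pattern described above can be sketched as follows (Python; the queue and worker are stand-ins, and a real relay worker would hand messages to the CMS internal SMTP relay rather than to a callback):

```python
import queue

outbound = queue.Queue()  # stands in for message-oriented middleware


def submit_email(to: str, subject: str, body: str):
    """Applications enqueue mail instead of speaking SMTP directly."""
    outbound.put({"to": to, "subject": subject, "body": body})


def relay_worker(send):
    """A separate relay process drains the queue toward the SMTP relay.

    `send` would wrap an SMTP client pointed at the relay; it is injected
    here so the worker can be exercised without a mail server.
    """
    while not outbound.empty():
        send(outbound.get())
```

Because applications never open SMTP connections themselves, outbound mail passes through a single auditable point.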

CMS applications may directly receive email messages only if the email messages were addressed to a “cms.gov” or “hhs.gov” domain. Inbound email must be scanned for malware and checked for appropriate format prior to ingestion by downstream applications.

If there is a need to deviate from this rule, the business owner, system owner, ISSO, and application developer are strongly encouraged to discuss the requirements and proposed architecture with the CMS TRB, including compensating controls and any analysis of alternatives. The CMS TRB can provide guidance, but the final decision on any deviation from this rule rests with the business owner, who must accept any associated risk.

Rationale

Official email from CMS must always have a “.gov” sending address. E-mail with “.gov” sending addresses may exit from a data center only via the hosting contractor’s designated, security-hardened email proxy servers, which must then forward all mail to an HHS trusted email server.

Use of SMTP in production data centers simplifies exfiltration of data by malicious agents or software. Use of SMTP, proxied by message-oriented middleware (message queues), renders this kind of exploit more difficult. It also provides a single point for auditing outbound SMTP traffic.

Projects that intend to use uncommon protocols must receive the latest guidance. Therefore, they must inform the TRB and ISPG team about such protocols during project design consultations.

Related CMS ARS Security Controls include: SC-8 - Transmission Confidentiality and Integrity.

Related National Institute of Standards and Technology (NIST) Special Publication (SP): SP 800-45 Revision 2, Guidelines on Electronic Mail Security.

RP-SA-11: Servers Should Include Instrumentation for Application Performance Monitoring

To provide useful information to business owners in their own terms, applications should provide application performance instrumentation. This instrumentation is responsible for providing metrics in business terms, specific to the given application monitored. Specifically, Java applications must leverage the Java Management Extensions (JMX) standard APIs, and .NET applications must leverage the Microsoft Windows standard APIs.

Rationale

This recommended practice ensures that applications report information that is of interest at the business level; otherwise, monitoring might report only generic infrastructure metrics that may not be vital to business owners. For example, applications can provide counts of users served, number of simultaneous logins, and other data. These are often the same metrics used in the formulation of Service Level Agreements (SLA).

CMS recommends that performance monitoring information be aggregated by operational monitoring tools rather than by application-specific tools.
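As an illustration of business-level instrumentation (counter names are hypothetical; a Java application would expose the equivalent through JMX):

```python
from collections import Counter


class AppMetrics:
    """Illustrative business-level instrumentation for one application."""

    def __init__(self):
        self._counts = Counter()
        self._active_logins = 0

    def record_user_served(self):
        self._counts["users_served"] += 1

    def login(self):
        self._active_logins += 1

    def logout(self):
        self._active_logins -= 1

    def snapshot(self) -> dict:
        # A monitoring agent would export these values periodically
        return {"users_served": self._counts["users_served"],
                "active_logins": self._active_logins}
```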

RP-SA-12: Minimize Manual File Copying by Integrating File Transfer Automation

Minimize manual copying, transfer, and extraction of data files, even when such transfers occur infrequently (e.g., an annual file transfer).

Rationale

Eliminating manual steps improves quality by making processes more repeatable and improves security by limiting human access to critical systems and information while ensuring that secure practices are consistently applied. CMS provides an enterprise file transfer (EFT) shared service that can be used for this purpose.

RP-SA-13: Consider Data Services in the Data Zone to Improve Performance of Database-Intensive Services

Services that require a lot of data manipulation or direct access to databases should be developed as data services in the Data Zone.

Services in the Data Zone should abstract all data sources and repositories being accessed, and responses should include only the data needed as part of the response. For more information, please see the CMS Services Framework and the CMS Multi-Zone Architecture.

Rationale

Data services implemented in the Data Zone have potentially higher performance because they are “closer” to the data and can therefore benefit from reduced data transfer latency and fewer network hops to access databases. Because their responses only contain necessary information, data services improve Application Zone service performance by reducing bandwidth needed to send results.

BR-SA-14: Use of Short Message Service/Multimedia Message Service by CMS Applications

Short Message Service (SMS) and Multimedia Message Service (MMS) must not be used for sensitive information, nor can an SMS or MMS source identity be trusted. When used by applications, SMS and MMS must be validated in the same way as email, ensuring against malware and use of proper input format.

Rationale

SMS and MMS are not encrypted and the source of messages cannot be verified.

BR-SA-15: Protect Sensitive Information in Transit

All Personally Identifiable Information, Protected Health Information, or other sensitive data entering, exiting, or in transit within the data center (within or across zones) must be encrypted and secured according to the guidance in the CMS ARS. Applications must use Transport Layer Security (TLS) at the highest available level to exchange information securely. When possible, applications should use mutual authentication to ensure the identity of both parties in an information exchange. If encrypting sensitive information is not technically feasible or demonstrably affects the ability to support mission operations, compensatory controls must be implemented as part of a CIO approved risk acceptance plan.
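A minimal Python sketch of a client-side TLS configuration consistent with this rule (certificate paths are omitted; mutual authentication would additionally load a client certificate and key via load_cert_chain()):

```python
import ssl


def make_client_context(ca_file=None):
    """Build a TLS context: modern protocol floor, certificate verification on.

    ca_file optionally points at a trusted CA bundle; by default the
    system trust store is used.
    """
    ctx = ssl.create_default_context(cafile=ca_file)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    return ctx
```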

Rationale

CMS ARS SC-8 requires encryption of any transmitted data containing sensitive information to protect against unauthorized snooping of traffic. This control applies to both internal and external networks and all types of information system components from which information can be transmitted.

Related CMS ARS Security Controls include: CA-3 - Information Exchange, CA-3(6) - Supplemental: Transfer Authorizations, CP-9 - System Backup, CP-9(8) - Cryptographic Protection, MP-5 - Media Transport, SC-8 - Transmission Confidentiality and Integrity, SC-12 - Cryptographic Key Establishment and Management, SC-13 - Cryptographic Protection, AC-2 - Account Management, AC-3 - Access Enforcement, AC-5 - Separation of Duties, AC-6 - Least Privilege, SI-4 - System Monitoring, SI-5 - Security Alerts, Advisories, and Directives, SI-7 - Software, Firmware, and Information Integrity, SI-10 - Information Input Validation, and AC-21 - Information Sharing.

BR-SA-16: Protect Sensitive Information at Rest

The application must protect the confidentiality and integrity of all sensitive information (including all PHI or PII), according to the guidance in the CMS ARS. This includes using encryption that meets or exceeds the FIPS 140-2 encryption standard, utilizing an approved FIPS crypto module. The implemented level of encryption must be aligned to the sensitivity of the information. If encrypting sensitive information is not technically feasible or demonstrably affects the ability to support mission operations, compensatory controls must be implemented as part of a CIO approved risk acceptance plan.

Rationale

Encryption protects sensitive information from unauthorized access and disclosure.

Related CMS ARS Security Controls include: MP-4 - Media Storage, SC-12 - Cryptographic Key Establishment and Management, SC-13 - Cryptographic Protection, AC-2 - Account Management, AC-3 - Access Enforcement, AC-5 - Separation of Duties, AC-6 - Least Privilege, SI-4 - System Monitoring, SI-5 - Security Alerts, Advisories, and Directives, SI-7 - Software, Firmware, and Information Integrity, SI-10 - Information Input Validation, SC-1 - Policy and Procedures, and SC-28 - Protection of Information at Rest.

Software Design

CMS has identified the following software design (SD) business rules and recommended practices to guide application development.

BR-SD-1: External Configuration Is Mandatory

All configuration settings related to such components as network connections, ports, date, and Domain Name System (DNS) names must be stored outside the application code and not hardcoded into the application.

Rationale

Inter- or intra-module configuration must be defined external to the application, such as in configuration files or databases, to:

  1. Assure that changes to the configuration can be performed without rebuilding the code.
  2. Allow code testing in different configurations without code modification.
  3. Deploy code in different environments with different hardware or software configurations.
  4. Ensure that hardcoding does not hamper horizontal scalability.
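A small sketch of external configuration (Python; the variable names DB_HOST and DB_PORT are illustrative, not a CMS convention). The connection details come from the environment, so the same build can be deployed unchanged in any environment:

```python
import os


def database_url(env=os.environ) -> str:
    """Compose the database location from configuration, never from hardcoded values."""
    host = env.get("DB_HOST", "localhost")
    port = env.get("DB_PORT", "5432")
    return f"postgresql://{host}:{port}/app"
```

The same approach applies to configuration files or configuration services; the point is that no value is baked into the code.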

BR-SD-2: Web-Based User Interfaces Must Comply with TRA Guidance

The Web-based User Interface Services topic in this chapter establishes the business rules for designing web-based user interfaces, including mobile web interfaces.

Rationale

Applications requiring web-based interfaces must follow existing guidance. This ensures that users have a consistent user experience across CMS web-based user interfaces.

Related CMS ARS Security Controls include: AC-19 - Access Control for Mobile Devices.

RP-SD-3: Configurations Should Be Validated and Checked on Each System Startup

When possible, configurations should be validated and checked at each system restart. CMS recognizes that some technologies (such as Spring dependency injection) make this difficult to enforce.

PREFERRED

CMS testing tools SonarQube and Snyk (“sneak”) evaluate code against different languages and standards.

Rationale

Validating configurations at startup prevents situations where an invalid setting could cause an abnormal system termination. These problems are avoidable by checking such configurations early in the startup and warning operators of configuration issues.

RP-SD-4: Consider Dependency Injection to Achieve External Configuration

Dependency injection, particularly for Microsoft .NET and Oracle Java-based applications, is recommended practice in industry as a method for easier application configuration and testing. Numerous Open Source and proprietary frameworks are available to make this an effective choice in application design.

Rationale

Dependency injection simplifies software testing by allowing dependencies to be changed through reconfiguration without rebuilding the software.
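A constructor-injection sketch (Python; class names are illustrative). The collaborating mailer is passed in rather than constructed internally, so a test double can be substituted without rebuilding:

```python
class Mailer:
    """Real dependency (illustrative); would talk to an actual mail service."""

    def send(self, to: str, body: str) -> str:
        return f"sent to {to}"


class SignupService:
    """Receives its mailer via the constructor instead of creating one itself."""

    def __init__(self, mailer):
        self._mailer = mailer

    def register(self, email: str) -> str:
        return self._mailer.send(email, "welcome")


class FakeMailer:
    """Test double; records calls instead of sending mail."""

    def __init__(self):
        self.sent = []

    def send(self, to, body):
        self.sent.append((to, body))
        return "fake"
```

Frameworks such as Spring (Java) or the built-in container in ASP.NET Core perform this wiring from external configuration.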

RP-SD-5: External System Dependencies Should Be Stubbed Out for Development and Testing

CMS recommends developing stubs to substitute for external system dependencies to increase system isolation during testing and decouple project timelines.

Rationale

Inter-system integration can be difficult to set up because of complexity and the availability of resources. Developers can save time by designing a set of stubs that allow development and testing to proceed. This can be as simple as “mock objects” that respond within the application in predefined ways or as sophisticated as stub servers that respond to inter-process service requests. See “xUnit Test Patterns: Refactoring Test Code” by Gerard Meszaros, 2007.

Every significant system has external dependencies. Within CMS, external dependencies are addressed via the Interface Control Documents (ICD) that describe the interfaces between a service consumer and provider. Given the ICD, it is possible to build a stub service as a substitute for the full system.

This practice can help with:

  1. Concurrent releases where the existing test system may not be as current as the stub.
  2. System availability in test, where a fully functional test system might not be available. By substituting the stub, development and testing can proceed.
  3. Faster testing cycles (and therefore performance) because a stub may be faster than the fully functional service.
  4. Simulation because the stub can be made to respond artificially slower or with specific data, thus allowing testing to occur in an environment of artificial scarcity.

Note: Integration testing typically would not use such stubs because they are contrary to the purpose of integration testing.
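A sketch of the practice (Python; the eligibility service and its ICD are hypothetical). The stub honors the interface contract and returns predefined responses so development and testing can proceed without the real system:

```python
class EligibilityServiceStub:
    """Stub standing in for an external eligibility service (hypothetical ICD)."""

    def __init__(self, canned: dict):
        self._canned = canned

    def check(self, beneficiary_id: str) -> dict:
        # Return the canned response, mimicking the real interface contract
        return self._canned.get(beneficiary_id, {"eligible": False})


def enrollment_allowed(service, beneficiary_id: str) -> bool:
    # Application logic is written against the ICD, not the concrete system
    return service.check(beneficiary_id)["eligible"]
```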

RP-SD-6: Timestamps Logged by the System Must Be in UTC or GMT and Should Be Expressed in ISO-8601 Format

Unless there is an overriding issue, developers shall use the ISO 8601 standard format for representing date and time stamps in all logs. The CMS ARS requires audit records that can be mapped to Coordinated Universal Time (UTC) or Greenwich Mean Time (GMT) and are accurate to within thirty (30) seconds.
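In Python, for example, the standard library produces compliant timestamps directly (a sketch, not mandated tooling):

```python
from datetime import datetime, timezone


def log_timestamp() -> str:
    """Current time in UTC, rendered in ISO 8601 (e.g., 2024-05-01T13:45:30+00:00)."""
    return datetime.now(timezone.utc).isoformat(timespec="seconds")
```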

Rationale

This practice makes dates easier to parse and reduces burden on log-scanning software.

Related CMS ARS Security Controls include: AU-8 - Time Stamps.

RP-SD-7: Software Should Be Designed Based on SOA Principles

Services should be autonomous and provide a complete unit of work. Multiple services can be assembled or composed into one service if need be.

The Web Services SOA Service Design Principles topic in this chapter provides additional information on how to structure and develop services for CMS using SOA principles.

Rationale

Use of SOA principles encourages software and data reuse, consistent with CMS IT strategic objectives.

RP-SD-8: Consider Non-Blocking Service Implementations to Improve Performance and Scalability

The use of non-blocking service implementation technologies increases the scalability of services. Developers should consider the use of such designs, particularly if the services are simple and require high scalability. A response should always be provided back to the requestor.

Rationale

Event-driven, non-blocking services reduce or eliminate the use of synchronization operations (such as semaphores and mutexes) in application code. As a result, they do not require the use of threads and have correspondingly lower memory utilization and higher scalability. CMS recommends using threads only when true CPU concurrency is needed. See “Why threads are a bad idea (for most purposes)”, John Ousterhout, Stanford U.
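A minimal non-blocking sketch using Python's asyncio (handler logic is illustrative). All requests progress concurrently on a single thread, with no locks or semaphores:

```python
import asyncio


async def handle_request(n: int) -> int:
    """Simulated I/O-bound request handler; awaiting yields the event loop."""
    await asyncio.sleep(0.01)  # stands in for a network or database call
    return n * 2


async def serve(requests):
    # All requests progress concurrently on one thread -- no synchronization needed
    return await asyncio.gather(*(handle_request(n) for n in requests))


def run(requests):
    return asyncio.run(serve(requests))
```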

Software Coding

CMS does not mandate the use of specific programming languages. Irrespective of programming language, certain software coding (SC) best practices apply.

BR-SC-1: Inventory all Open Source Software Licenses

Every piece of open source software incorporated into the production release must be inventoried and the specific license documented. As new software is added or old software retired, the inventory must be kept up to date. The Open Source Software section also mandates this business rule.

Rationale

Determining the currency and status of software licenses can be costly and time consuming. In addition, keeping OSS patched is important and requires knowledge of what OSS is in use.

The inventory list can be as simple as a text file that is stored along with the source code in the Version Control System (VCS) repository.
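An illustrative layout for such a file (component names, versions, and columns are examples only):

```
# oss-inventory.txt -- open source components in the production release
# component        version   license      added       retired
log4j-core         2.20.0    Apache-2.0   2023-04-12  -
commons-logging    1.2       Apache-2.0   2022-01-05  2023-04-12
jackson-databind   2.15.2    Apache-2.0   2023-06-01  -
```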

Related CMS ARS Security Controls include: SA-6 - Software Usage Restrictions.

BR-SC-2: All Custom-Written Source Code for a Project Must Conform to an Identified Coding Standard

A project must adopt a coding standard, document it, and adhere to it, as validated during code review or inspection.

CMS does not supply a specific coding standard. Industry offers many available options. Table 3 presents some commonly accepted coding standards that a project may adopt or adapt.

Table 3. Commonly Adopted Coding Standards
Language     Suggested Standard
C / C++      Ellemtel Standard
Java         Sun / Oracle Java Coding Standard
JavaScript   Google JavaScript Style Guide
COBOL        A.J. Marston’s COBOL Coding Standard
Python       Google Python Style Guide

It is recommended that projects adopt a tool for source code formatting and establish the coding standards in that tool to maintain consistency.

PREFERRED

CMS strongly recommends the integration of SonarQube and Snyk with development environments to help enforce these standards and ensure code quality.

Rationale

Having and applying a consistent standard makes both maintenance and code review easier.

Externally produced libraries and other forms of reusable capabilities (e.g., Open Source Software) do not have to meet these coding standards because they typically adhere to their own coding standards and are not produced specifically for a given project at CMS.

Note: Non-compliance with coding standards is a Defense Information Systems Agency (DISA) Applications Security and Development Security Technical Implementation Guide (STIG) Category II vulnerability.

RP-SC-3: Do Not Intermingle Code in Different Programming Languages in the Same File

Many programming systems allow commingling of two or more programming languages in the same source file. However, doing so complicates static analysis and encourages mixing content with presentation. As a result, CMS recommends against intermingling code in the same source file.

For example, CMS recommends separation of the following programming languages:

  • HTML from JavaScript
  • Java Server Pages from JavaScript
  • Java from SQL
  • HTML from Cascading Style Sheets (CSS)

In cases where total separation is not possible, such as JSP, CMS advises adherence to the standard by striving for minimal inclusion of Java code in JSP files. A reference from the primary language to the secondary language is an example of minimal inclusion. Table 4 provides an example of file-type separation.

Table 4. File Type Separation Example

Correct Usage

<html>
    <body>
        <!-- this is a reference to JavaScript -->
        <script src="/js/myscript.js"></script>
    </body>
</html>

Incorrect Usage

<html>
    <body>
        <script>
        // this is inline JavaScript
        alert("This is inline JS");
        </script>
    </body>
</html>

Embedded SQL (or ESQL) is one notable but also relatively rare exception. Like JSP, ESQL intentionally combines a host language, such as COBOL, C, or Java, with SQL to generate a new source file consisting only of the host language and database library calls to implement the ESQL logic.

Rationale

To facilitate use of static analysis tools, profilers, and other source code scanning tools, code must not be intermingled. This also helps enforce other good practices like separating presentation from business logic.

RP-SC-4: Capture Code Metrics and Defect Tracking Metrics for Quality Improvement Purposes

Regularly capturing code metrics, such as size and complexity, in the automated build process, along with defect quality metrics from the defect tracking system, allows for trend analysis and eventual quality improvement practices.

RP-SC-5: When Using Flat Files for Data Transfer, Include Helpful Metadata in the File

It is often helpful to embed metadata, such as the schema version, record count, or version of the application that created the file, directly into flat files prior to data transfer. This allows the file receiver to perform basic validation before using the file. If the metadata does not match expectations, the system operator should be notified and the notification logged.

One-time file transfers are exempted from this recommendation.
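A minimal sketch of embedding and checking header metadata follows; the “#key=value” header convention, field names, and expected major version are illustrative assumptions, not a CMS format:

```python
# Hypothetical convention: '#key=value' header lines precede the data rows.
FLAT_FILE = """\
#schema_version=2.1.0
#record_count=2
#producer=claims-extract 4.7
alice,100
bob,200
"""

EXPECTED_MAJOR = "2"  # the receiver understands schema major version 2

def validate(text: str) -> list[str]:
    """Check embedded metadata before processing; return data rows if valid."""
    meta, rows = {}, []
    for line in text.splitlines():
        if line.startswith("#"):
            key, _, value = line[1:].partition("=")
            meta[key] = value
        elif line:
            rows.append(line)
    if meta["schema_version"].split(".")[0] != EXPECTED_MAJOR:
        raise ValueError(f"unsupported schema {meta['schema_version']}")
    if int(meta["record_count"]) != len(rows):
        raise ValueError("record count mismatch; file may be truncated")
    return rows

print(validate(FLAT_FILE))
```

The major-version check follows the Semantic Versioning convention: a major-version change signals an incompatible format, so the receiver stops and notifies the operator rather than misreading the data.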

Rationale

Flat file formats (schemas) often change over the lifetime of the file. By embedding metadata into the file, it becomes easier to determine what is needed to read and process the file correctly.

Rather than developing their own methods, developers should consider adopting the Semantic Versioning (SemVer) 2.0.0 specification, which was designed to address these kinds of issues.

CMS grants an exemption for one-time file transfers because of the potential implementation cost.

RP-SC-6: When Using Flat Files for Data Transfer, Consider Including a Machine-Readable Schema

Sufficient metadata should accompany each flat file to allow reading and validating the file. At a minimum, “sufficient” metadata means the name of each field in the file, but it may also include data types and lengths, as well as field separators, if used.

CMS does not prescribe the machine-readable schema, which can take many forms. Table 5 presents some common flat file transfer formats and recommendations for their use.

Table 5. Common Flat File Transfer Formats

Format: CSV
Recommendation: The first row of the file should contain a comma-separated list of field names.

Format: JSON
Recommendation: Each JavaScript Object Notation (JSON) field has a field name and a value.

Format: XML
Recommendation: The file should be accompanied by an XML Schema Definition (XSD). Alternatively, the data file may reference an XSD using the standard XML mechanisms. Some projects may choose to use Schematron to perform cross-schema validation.

Rationale

Applying this recommendation makes the file easier to read and effectively includes the ICD definitions with every file. It also makes the system more robust to changes in file formats, because an application can detect a change and either stop gracefully or continue. For example, if a file were in an unexpected format, a graceful stop would provide a clear message to that effect to operators. Alternatively, the application could continue by ignoring unexpected data (if appropriate) and logging a warning. Either alternative is preferable to data corruption or system failure due to unexpected data. Irrespective of the representation, self-documenting data formats are easier to exchange and maintain.
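As a sketch of the “detect and stop gracefully” behavior described above, a receiver can validate self-describing JSON records against the field names it expects (the field names here are hypothetical):

```python
import json

EXPECTED_FIELDS = {"beneficiary_id", "amount"}  # illustrative field names

def load_records(text: str) -> list[dict]:
    """Parse JSON-lines input; stop gracefully on unexpected structure."""
    records = []
    for n, line in enumerate(text.splitlines(), start=1):
        record = json.loads(line)
        missing = EXPECTED_FIELDS - record.keys()
        if missing:
            # Graceful stop with a clear message for operators.
            raise ValueError(f"line {n}: missing fields {sorted(missing)}")
        records.append(record)
    return records

DATA = ('{"beneficiary_id": "B1", "amount": 100}\n'
        '{"beneficiary_id": "B2", "amount": 200}')
print(load_records(DATA))
```

Because each JSON field carries its own name, the receiver can check the file against its expectations before processing, rather than failing partway through.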

RP-SC-7: Use Decimal Math Types for Financial Calculations

Floating point and integer math data types require extreme care for financial calculations; it is generally preferable to use decimal math data types, such as BigDecimal in Java.

Rationale

This practice reduces the likelihood of unintended round-off errors in financial calculations.
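The effect is easy to demonstrate in Python, whose Decimal type plays the role that BigDecimal plays in Java:

```python
from decimal import Decimal

# Binary floating point cannot represent 0.10 exactly, so small
# round-off errors accumulate in financial sums.
float_total = 0.1 + 0.1 + 0.1
print(float_total == 0.3)  # False

# Decimal arithmetic keeps exact decimal representations.
decimal_total = Decimal("0.10") + Decimal("0.10") + Decimal("0.10")
print(decimal_total == Decimal("0.30"))  # True
```

Constructing Decimal values from strings (not floats) is what preserves the exact decimal representation.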

RP-SC-8: Consider Synthetic Transactions

Synthetic transactions verify the correct integration of a newly installed system with dependent services. They can also monitor “heartbeats” as well as throughput in production systems.

Synthetic transactions present at least two known perils: (1) they must be designed securely to prevent misuse, and (2) they can skew monitoring statistics if issued in large quantities.

CMS recommends disabling synthetic transactions during installation (default mode). During production, synthetic transactions must be conducted securely and should be configured to execute with a frequency that minimizes application load while still providing meaningful business reporting of application functionality.

The use of synthetic transactions in production must be approved by the application’s business owner.

Rationale

Using synthetic transactions for “smoke testing” a new deployment can verify that everything is in working order because these transactions will fully exercise the system.
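A minimal sketch of the disabled-by-default, rate-limited behavior recommended above; the class name, flag, and interval are illustrative assumptions:

```python
class SyntheticHeartbeat:
    """Illustrative sketch: a synthetic 'heartbeat' transaction that is
    disabled by default and rate-limited to minimize application load."""

    def __init__(self, enabled: bool = False, min_interval_s: float = 60.0):
        self.enabled = enabled          # default mode: disabled at installation
        self.min_interval_s = min_interval_s
        self._last_run = 0.0

    def maybe_run(self, now: float) -> bool:
        """Run the synthetic transaction only when enabled and due."""
        if not self.enabled or now - self._last_run < self.min_interval_s:
            return False
        self._last_run = now
        # ... issue a clearly marked, secured test transaction here ...
        return True

hb = SyntheticHeartbeat(enabled=True, min_interval_s=60.0)
print(hb.maybe_run(now=100.0))  # True: first run is due
print(hb.maybe_run(now=130.0))  # False: too soon; avoids skewing statistics
```

Enabling the heartbeat would remain a deliberate, business-owner-approved configuration change rather than the installation default.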

Software Quality

The CMS Testing Framework is the source of all definitions of testing and test procedures such as unit tests, integration tests, and smoke tests.

CMS has adopted the following SQ assurance business rules and recommended practices for the CMS Processing Environments.

Testing custom software applications may require approaches such as static analysis, dynamic analysis, binary analysis, or a hybrid of the three approaches. Developers can employ these analysis approaches in a variety of tools (e.g., web-based application scanners, static analysis tools, and binary analyzers) and in source code reviews.

BR-SQ-1: All Custom-Written Software Must Have Associated Automated Unit Tests

Writing automated unit tests is an industry-accepted best practice, regardless of programming language used.

Rationale

Automated unit testing frameworks are now available in most if not all commonly used programming languages. Consequently, software developers can express unit tests directly as programs or modules as appropriate. Such unit tests become an executable specification of a requirement that can be verified by inspection and by discussion with business subject matter experts (SMEs). The use of automated unit tests also encourages writing software that is modular and easy to test. This increases cohesion and decreases coupling, which are good goals for software and systems.
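A short example of a unit test acting as an executable specification, using Python’s built-in unittest framework; the premium function and its business rules are hypothetical, and it uses decimal math per RP-SC-7:

```python
import io
import unittest
from decimal import Decimal

def monthly_premium(annual_premium: Decimal) -> Decimal:
    """Convert an annual premium to a monthly amount, rounded to cents."""
    return (annual_premium / 12).quantize(Decimal("0.01"))

class MonthlyPremiumTest(unittest.TestCase):
    # Each test method is an executable specification of one requirement.
    def test_divides_annual_amount_by_twelve(self):
        self.assertEqual(monthly_premium(Decimal("1200.00")), Decimal("100.00"))

    def test_rounds_to_whole_cents(self):
        self.assertEqual(monthly_premium(Decimal("100.00")), Decimal("8.33"))

suite = unittest.TestLoader().loadTestsFromTestCase(MonthlyPremiumTest)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
print(result.wasSuccessful())  # True
```

Because the tests are plain code, they can be reviewed with business SMEs and run from the command line or a continuous integration server (per BR-SQ-3).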

BR-SQ-2: Run Automated Unit Tests during Full Builds

CMS requires the execution of all automated unit tests during full builds of the entire source base. A full build does not use results from prior builds. It starts from a clean build area and results in deployable packages, test results, and in-line documentation.

Rationale

Running tests during full builds is mandatory. Testing during full builds is typically done before a full release of the code; it is the last time a full set of unit tests can be run before release. This is a minimum requirement. Projects are encouraged to run unit tests for all builds because unit testing shortens the feedback loop between a change and the detection of any resulting defect.

BR-SQ-3: Automated Unit Tests Must Use a Commercially Available Unit Testing Framework or Test Runner

Automated unit tests must use a commercially available unit testing framework or test runner that can be run from the command line and that can be executed from within a continuous integration server.

Rationale

Industry has settled on two basic approaches to unit testing:

  • xUnit-style frameworks (such as JUnit for Java)
  • The Test Anything Protocol (TAP)

There are xUnit frameworks available for nearly every programming language. Consequently, there is little reason to develop and maintain an xUnit framework specific to a project or system. If the programming language (such as COBOL) makes it difficult to use an xUnit-style framework, CMS will consider a proposed alternative, such as TAP. Choosing a protocol such as TAP makes it possible to provide easy-to-read and easily understood reports on test results.

BR-SQ-4: All CMS User Interfaces Must Meet Section 508 Accessibility Requirements

This business rule applies to web-based user interfaces but is not exclusive to web uses. The Web-based User Interfaces topic prescribes the applicable business rules.

Rationale

CMS and HHS enforce compliance with Section 508 of the Rehabilitation Act.

BR-SQ-5: Manual Code and Design Reviews Are Mandatory

Someone other than the original author (or change author) must manually inspect all code and designs, and document the review results. Note: These code and design reviews are distinct from any TRB Consult or Design review sessions.

In addition to performing, recording, and documenting code and design reviews, the design team must provide notice of code reviews to CMS and must invite, at the business owner’s discretion, CMS auditors to attend the reviews.

Rationale

Manual code and design reviews are very effective tools for raising software quality and identifying security weaknesses. Inviting CMS auditors supports the review process by providing the government an opportunity to assess the effectiveness of the reviews. Business owners have the discretion of foregoing such audits because of time commitment and potential scheduling conflicts.

CMS recommends considering automation in the form of code and design review tools.

BR-SQ-6: De-Identification of Production Data Is Required in Non-Production Environments

CMS requires using de-identification (Guide to Protecting the Confidentiality of Personally Identifiable Information (PII), April 2010) and data masking tools whenever quality activities in non-production environments (sometimes called “lower environments” in CMS vernacular) use test data that originated as production data.

The Health Insurance Portability and Accountability Act (HIPAA) of 1996 requires de-identifying medical data. HIPAA specifies the list of Protected Health Information (45 C.F.R. § 160.103).

Federal Tax Information (FTI) data is similarly subject to de-identification.

Exception:

CMS allows an exception to this rule if the non-production environments are configured and controlled to the same stringent security standard as the production environment and the non-production environment has received an Authorization to Operate (ATO).

Discussion:

Do not assume that fields can be de-identified without considering the context of the information because this could lead to re-identification, for example, by combining information with other sources. A structured analysis is recommended to determine the privacy risk associated with the original data, the requirements for the de-identified data to be useful, and the transformations that will be used to mitigate the privacy risk while preserving the necessary utility.
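A minimal sketch of field-level de-identification follows: pseudonymize the identifier, generalize a quasi-identifier, and drop direct identifiers. The field names and salt handling are illustrative, and a real project should follow the structured privacy analysis described above rather than this sketch:

```python
import hashlib

SECRET_SALT = b"rotate-me"  # hypothetical; manage as a real secret in practice

def deidentify(record: dict) -> dict:
    out = dict(record)
    # Pseudonymize the identifier so records can still be joined for testing.
    out["beneficiary_id"] = hashlib.sha256(
        SECRET_SALT + record["beneficiary_id"].encode()
    ).hexdigest()[:12]
    # Generalize quasi-identifiers that could enable re-identification.
    out["zip"] = record["zip"][:3] + "XX"
    # Drop direct identifiers outright.
    out.pop("name", None)
    return out

masked = deidentify({"beneficiary_id": "B123", "name": "Jane Doe",
                     "zip": "21244", "claim_amount": 512})
print(masked["zip"])  # '212XX'
```

Note that naive transformations like these can still permit re-identification when combined with other data sources, which is exactly why the structured analysis above is recommended.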

Rationale

Although it is often helpful to test with realistic data, the federal policies related to the uses and disclosures of PII and PHI make it necessary to de-identify such information before use outside the production environment. As data moves from more trusted to less trusted environments (in reverse of the normal flow), it becomes necessary to perform tasks like de-identification.

RP-SQ-7: Code Coverage Analysis Is Highly Encouraged During Unit Testing

CMS recommends a minimum of seventy-five (75) percent code (statement) coverage.

Rationale

Code coverage analysis gives an indication of how well unit tests are exercising the code in response to test data. Higher levels of code coverage give correspondingly higher confidence that the code is operating as intended. Note: One hundred percent statement coverage does not mean that every code path has been tested. Complete code path coverage testing is not computationally feasible for large programs.

RP-SQ-8: Use Static Analysis Tools During Build to Catch Common Coding Errors

CMS recommends using static analysis tools to check the bulk of files created by programmers. Good candidates include tools for code quality analysis, coding-standards checking, security vulnerability detection, and Section 508 analysis.

Rationale

Ordinary compilers do not always enforce good practices nor do they discourage poor practices. Static analyzers can be configured to do both. In addition, their speed and consistency help diminish the burden of manual code inspection.

Static analyzers can also point out problem areas. For example, a tool producing the McCabe Cyclomatic Complexity number can identify areas of increased complexity relative to the overall complexity of the application. Higher complexity typically means greater risk of defects. Because these tools can quickly analyze a large amount of source code, they can help prioritize software quality improvement efforts. For more information about static code analysis, please refer to resources from CISA, OWASP, and Wikipedia.

The DISA Applications Security and Development STIG recommends static security analysis in conjunction with manual review of the code.

Specific guidance about performing static code analysis within the CMS Cloud environment and integrating static code analysis into the DevOps CI/CD pipeline is available on the CMS Cloud website.

RP-SQ-9: Developers Assist Testers in Generating Test Data

Developers have intimate knowledge of the internals of their system; accordingly, they can assist in generating test data, providing detailed schemas of valid data, or providing data generators to the testing team.

Rationale

Testing teams often do not have the necessary programming skills to perform automated test data generation. Developers can assist by providing needed skills while testers select what and how to test.

Secure Software Practices

The SANS Institute and The MITRE Corporation publish a list of Top 25 Most Dangerous Software Errors. As stated on the Common Weakness Enumeration (CWE™) website, “The CWE site contains data on more than 800 programming errors, design errors, and architecture errors that can lead to exploitable vulnerabilities.” CMS has established the following business rules for secure software (SS) practices.

Related CMS ARS Security Controls (for the following business rules in this topic) include: SA-8 - Security and Privacy Engineering Principles and SI-2 - Flaw Remediation.

BR-SS-1: All Software on CMS Production Servers Must Have Recorded Provenance

All software installed on production servers must come from known and documented sources and must be auditable. The only allowable sources for software on a CMS production system are CMS-controlled media, build servers, or repositories managed by the system operator.

Rationale

Software installed from unknown sources constitutes a project and security risk. The use of a controlled build pipeline and controlled repositories reduces or eliminates this risk by recording every piece of software installed. The appearance of unexpected code could indicate malware infection.

Related CMS ARS Security Controls include: CM-2 - Baseline Configuration and CM-3 - Configuration Change Control.

BR-SS-2: Use NIST SP 800-132-Specified Salted Hashes to Store Passwords

When passwords must be stored in a CMS database, they must be stored only as a salted cryptographic digest (sometimes called a hash), computed by a security function approved for applicability with Federal Information Processing Standards Publication (FIPS PUB) 140-2, and following key derivation techniques specified in NIST SP 800-132.

A Password-Based Key Derivation Function (PBKDF) that uses the SHA-256 (or better) hashing algorithm and a randomly generated 128-bit salt is recommended for CMS information systems. An iteration count of at least 1,000 is also recommended.

Rationale

Passwords should almost never appear on CMS systems. They should be managed using a CMS-approved identity management solution. This business rule applies in those cases where password management is local to the application.

A salted cryptographic hash using a large number of iterations is the current best practice for storing passwords. This approach increases computational cost associated with each derivation and helps thwart dictionary or brute force attacks.

Passwords encrypted in this manner are not to be decrypted by the CMS system, but rather are compared for equality in their encrypted form.
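A sketch of this scheme using Python’s standard library: PBKDF2-HMAC-SHA256 with a random 128-bit salt, and an equality comparison of digests rather than any decryption. The iteration count shown is illustrative and exceeds the 1,000 minimum:

```python
import hashlib
import hmac
import secrets

ITERATIONS = 100_000  # illustrative; well above the 1,000 minimum

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted PBKDF2-HMAC-SHA256 digest for storage."""
    salt = secrets.token_bytes(16)  # randomly generated 128-bit salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the digest and compare for equality; nothing is decrypted."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Only the salt and digest are stored; the high iteration count raises the attacker’s per-guess cost, which is what thwarts dictionary and brute-force attacks.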

Related CMS ARS Security Controls include: IA-5 - Authenticator Management.

Related NIST Special Publications: SP 800-118, Guide to Enterprise Password Management (Draft), and SP 800-132, Recommendation for Password-based Key Derivation Part 1: Storage Applications.

BR-SS-3: SQL Code Must Use Binding Variables

Use of SQL binding variables reduces an application’s exposure to SQL injection attacks. This technique is available in most major brands of relational databases although the exact syntax may vary by product.

Rationale

All modern implementations of SQL provide the capability to define binding variables to pass data safely back and forth between SQL statements and program code.

Building dynamic SQL code via string manipulation may introduce security vulnerabilities, especially SQL injection. Simply using stored procedures does not offer sufficient protection. SQL statements that include user-provided data must use binding variables except when prohibited by the language. Only a few SQL statements disallow binding variables (such as DROP TABLE). In these cases, it is important to avoid user-provided data or use a robust input validation system to avoid SQL injection vulnerabilities.
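A minimal demonstration with Python’s sqlite3 driver (the table and data are illustrative); the `?` placeholder is the binding variable, and the exact placeholder syntax varies by database product:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE beneficiaries (id TEXT, name TEXT)")
conn.execute("INSERT INTO beneficiaries VALUES ('B1', 'Jane Doe')")

user_input = "B1' OR '1'='1"  # a classic injection attempt

# Binding variable ('?' placeholder): the driver passes user_input as data,
# never as SQL text, so the injection attempt matches no rows.
rows = conn.execute(
    "SELECT name FROM beneficiaries WHERE id = ?", (user_input,)
).fetchall()
print(rows)  # [] (the malicious string is treated as a literal id)

# Never build the statement by concatenation, e.g.:
# "SELECT name FROM beneficiaries WHERE id = '" + user_input + "'"
```

With the placeholder, the same query with a legitimate id returns the expected row, while hostile input is inert.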

Related CMS ARS Security Controls include: AC-19 - Access Control for Mobile Devices.

BR-SS-4: Check for Common Security Vulnerabilities

Projects must check for common security vulnerabilities in their code using a combination of testing and analysis tools.

CMS relies on The MITRE Corporation Common Weakness Enumeration (CWE™) and the CWE™ Top 25 list of Most Dangerous Software Weaknesses to define the types of vulnerabilities to check for. This includes the following:

  • Cross-site scripting (XSS)
  • SQL injection
  • JavaScript injection

The party responsible for software assurance must scan all custom-written software for vulnerabilities using the CMS ISPG-approved scanning software.

Rationale

Penetration testing alone is insufficient to comply with this business rule; rather, a combination of manual code review, static and dynamic testing, and penetration testing should be leveraged.

Use of a Web Application Firewall (WAF) or other tool does not obviate testing for security vulnerabilities.

CMS ARS requires that the source code be free of known vulnerabilities; system developers must test for weaknesses throughout the development process.

Related CMS ARS Security Controls include: RA-5 - Vulnerability Monitoring and Scanning.

BR-SS-5: Use Static Analysis Tools to Catch Common Security Weaknesses

This business rule supports BR-SS-4. Static analysis tools must be used to check the bulk of files created by programmers for known security weaknesses and vulnerabilities.

Rationale

Although manual inspection can find security weaknesses, static analysis tools, such as HP Fortify, University of Maryland FindBugs, or Open Source Splint, can help detect security problems and reduce the burden of manual inspection. CISA provides a list of Free Cybersecurity Services and Tools. The DISA Applications Security and Development STIG recommends static security analysis in conjunction with manual review of the code.

RP-SS-6: Use Profiling to Perform Dynamic Code Analysis

CMS recommends that project teams perform Dynamic Code Analysis (also called profiling) to investigate the application’s behavior at runtime (unlike static analysis, which only uses source code).

Rationale

Dynamic analysis provides a runtime picture of the application’s behavior, including memory consumption, file and database access, and network usage. This information is helpful in determining application performance and security characteristics, grounded in observation of behavior. Dynamic analysis is also helpful in identifying security vulnerabilities. Projects should consider the HHS AppScan service for this purpose.

BR-SS-7: Error Handling Must Not Reveal Information That Could Lead to an Exploit

In production operation, systems must not produce error messages that reveal information that could be used to maliciously compromise or otherwise exploit the system. Examples include providing internal ID numbers, database metadata, and other such messages.

Rationale

Some web-based systems have default, development-mode configurations that reveal information about the processing to help developers easily identify problems. Unfortunately, these settings also provide information valuable to attackers. As stated in the CMS ARS Security Control SI-11, “organizations [must] carefully consider the structure / content of error messages.”
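A sketch of the pattern: log the full details internally and return only a generic, correlatable message. The response shape and incident-ID scheme are illustrative assumptions:

```python
import logging
import uuid

logger = logging.getLogger("app")

def safe_error_response(exc: Exception) -> dict:
    """Log full details internally; expose only a generic message."""
    incident_id = uuid.uuid4().hex[:8]  # lets operators find the logged detail
    logger.error("incident %s: %r", incident_id, exc)  # internal log only
    # No stack traces, internal IDs, or database metadata leak to the caller.
    return {"error": "An internal error occurred.", "incident_id": incident_id}

response = safe_error_response(
    RuntimeError("db host db01.internal refused connection")
)
print(response["error"])
```

The incident ID lets support staff correlate a user report with the detailed internal log entry without revealing any of that detail to the user.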

Related CMS ARS Security Controls include: SI-11 - Error Handling.

RP-SS-8: Perform Threat Modeling During the Design Phase to Identify Potential System Threats

CMS recommends that project teams perform Threat Modeling as early as possible in the System Development Life Cycle (SDLC), ideally during the Design and Requirements phases or, if following an Agile/DevOps approach, during each Sprint planning meeting, with updates as needed throughout the SDLC. This practice promotes early identification and remediation of vulnerabilities, as well as continuous monitoring of the effects of internal or external changes.

Rationale

It is insufficient to respond reactively to discovered software security issues and vulnerabilities. Threat modeling allows ADO Teams to identify security risks and vulnerabilities early in the system development life cycle (SDLC). By analyzing potential threats during the design phase, an ADO Team can address them proactively, reducing the chances of security issues appearing later. For existing products and projects, a threat model can help validate decisions made around secure design, as well as inform improvements to the secure design of the system. Threat Modeling is addressed further in Application Development Principles.

Engineering Documentation

The primary purpose of engineering documentation is to record the information used during software development. It is not end-user documentation. CMS has established the following business rules and recommended practices for engineering documentation (ED).

BR-ED-1: Custom-Written Software Must Include Inline Documentation for Public APIs

A public API (or Web Service) is a constant, variable, method, procedure, or function that is accessible from another module or system. Public APIs must be documented. In addition, distributed processing APIs, such as remote procedure calls, REST APIs, Simple Object Access Protocol (SOAP) calls, or other APIs, must be documented.

The software build procedure must generate human-readable documentation that is regularly published to a project internal site or folder for reference by developers and maintainers.

Rationale

The documentation of public APIs must go beyond providing function signatures because function signatures alone do not fully specify operational assumptions. For example, any occurrence of a change to system state (such as updating a database record) is not specified in the function signature. It is therefore important to provide documentation that is trustworthy and accurate. Inline documentation is an industry best practice that has proven itself valuable in documenting such APIs.

Inline documentation is consistent with the use of such tools as JavaDocs, doxygen, and robodoc. Other similar tools are available for all popular programming languages.
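An illustrative Python docstring in this style (the API and its rules are hypothetical); tools such as pydoc or Sphinx can generate the human-readable documentation from it, much as JavaDocs does for Java:

```python
def adjudicate_claim(claim_id: str, amount: float) -> str:
    """Decide whether a claim is auto-approved (illustrative public API).

    The docstring records what the signature cannot: units, valid ranges,
    and any state changes the real implementation would make.

    Args:
        claim_id: Unique identifier of the claim.
        amount: Billed amount in US dollars; must be non-negative.

    Returns:
        'approved' if the amount is within the auto-approval limit,
        otherwise 'denied'.
    """
    return "approved" if amount <= 10_000 else "denied"

print(adjudicate_claim("C1", 500.0))
```

Because the documentation lives next to the code, the build can regenerate and publish it with every release, keeping it trustworthy and accurate.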

BR-ED-2: The CMS TLC Phase Review Artifacts Must Be Produced

Regardless of software development methodology employed, the CMS required project artifacts must be produced to support TLC phase reviews as well as any required TRB design consultations. In addition, business and program teams must be able to provide the documents/artifacts that support their system within 2 business days, upon request, to fulfill review or audit requests from outside agencies such as OIG and GAO.

Related CMS ARS Security Controls include: SA-3 - System Development Life Cycle.

Rationale

Typically, CMS systems have long operational lifespans. It therefore is necessary to have sufficient, accurate documentation to support the initial deployment as well as the full operations and maintenance life cycle. Refer to the CMS TLC website for additional information regarding the artifacts expected within each phase.

RP-ED-3: Engineering Documentation Should Be Versioned Along with Source Code in the Same Repository

All engineering documentation, such as design documents and diagrams, should be versioned along with source code in the same repository. Such documentation must be stored in source (modifiable) form, not just distributable form (such as Adobe PDF).

Rationale

This versioning approach makes it possible to keep both kinds of information (source code and documentation) up to date and baselined together. Modifiable documentation is necessary to support continuous maintenance.

Note: Some version control systems handle binary data poorly, and merging complex binary data is difficult even when the version control system can store it. In these cases, it may be preferable to store such media assets in a media asset management system or web content management system.

System Maintenance

Design for maintenance recognizes that successful systems have long operational lives. As a result, CMS advocates building capabilities for diagnosis, debugging, and health monitoring into systems. CMS has adopted the following recommended practices for system maintenance (SM).

RP-SM-1: Consider Building Self-Diagnosis Capability into Systems

The ability to diagnose system problems rapidly and accurately can be very helpful. CMS recommends that developers consider building in diagnosis tools to perform pre- and post-run validation of the system.

Rationale

Writing a simple script for scanning logs or configuration for certain kinds of easily correctable errors can be a timesaver. Another helpful heuristic is counting errors and halting processing when a specific threshold is exceeded.
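A sketch of both heuristics combined: scan a log for errors and signal a halt when a threshold is exceeded (the threshold and log format are illustrative assumptions):

```python
ERROR_THRESHOLD = 5  # illustrative; halt processing when exceeded

def scan_log(lines: list[str]) -> tuple[int, bool]:
    """Count ERROR lines and report whether processing should halt."""
    errors = sum(1 for line in lines if "ERROR" in line)
    return errors, errors > ERROR_THRESHOLD

log = ["INFO start", "ERROR bad record 17", "INFO done"]
count, halt = scan_log(log)
print(count, halt)  # 1 False
```

Run before and after a batch job, a check like this catches easily correctable problems early instead of letting a failing run continue.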

RP-SM-2: Consider Designing Maintenance Capability into Systems

Developers should consider the following system maintenance capability requirements adapted from NASA Johnson Space Center’s Man-System Integration Standards, Volume I, Section 12: Design for Maintainability:

  • Physical access, visual access, removal, replacement, and modularity requirements
  • Fault detection and isolation requirements
  • Test point design
  • Maintenance data management system

Developers should consider the following factors:

  • Non-interference of preventive maintenance
  • Flexible, preventive maintenance schedule
  • Reduce training requirement for system operations
  • Reduce skill requirements for system operations
  • Reduce time spent on preventive and corrective maintenance
  • Increase maintenance capabilities (especially corrective) during mission
  • Decrease probability of damage to part, module, product, or data itself

Maintenance capability techniques include:

  • Mistake proofing (to ensure that a part or module can only be installed correctly)
  • Self-diagnostic indicators, gauges, annunciators, pop-up dialogs, and dashboard indicators
  • No or minimal adjustment (self-adjustment as well)
  • Tracking metrics over time to identify problem areas

Lack of access to production servers by developers can hamper efforts to determine root cause and perform troubleshooting. CMS recommends that software designs include such troubleshooting capabilities as logs, variable dumps, execution traces, or other techniques.

Note: Debugging data may include PHI or PII and must be secured.

Data and Database Management

Data and database management (DBM) are critical parts of CMS systems. Flexibility in data and data management can improve the efficiency of quality assurance activities by allowing for more and varied access to alternate data sources. For example, quickly switching between databases can make it possible to prepare and use test data.

Software developers must adhere to the following business rules to ensure CMS systems are more robust in the face of change. In addition, CMS requires compliance with specific Data and Database Management Standards (please refer to BR-DBM-3).

BR-DBM-1: Systems Must Meet Federal Record Management Requirements

CMS mandates compliance with National Archives and Records Administration (NARA) requirements for federal government records management.

Rationale

Federal Record Management Requirements are complex and may have design and operational impact. For more information, please refer to National Archives and Records Administration (NARA).

BR-DBM-2: Systems Must Meet Federal Government FOIA Requirements

CMS directs compliance with the Freedom of Information Act (FOIA).

Rationale

To meet FOIA requirements while reducing the burden on the agency, it may be necessary to consider data tagging or other techniques to flag data in databases for eligibility or ineligibility for FOIA. For specific FOIA requirements, please refer to FOIA.

BR-DBM-3: Systems Must Meet CMS Data and Database Management Standards

CMS has published the following TRA chapters that establish the Agency’s guidance and standards on data and database management:

Data Architecture

CMS’s Data Architecture (DA) team publishes guidance on data design patterns, standard terms, naming and definition standards, and modeling tools and resources. The DA team also maintains the CMS Data Reference Model (DRM), a data taxonomy that describes the data and subject areas fundamental to achieving CMS’s mission. Please refer to the following publications from the DA team:

The documents on the DA page are kept up to date through continuous improvement, and new guidelines are added over time. Feedback on DA publications can be shared through the DA mailbox.

Standards and Guidelines Documents

The Data Architecture team develops standards and guideline documents on data design, architecture, and technology, covering topics that include, but are not limited to, Data Naming, Data Definitions, Data Domains, Data Assets, Data Dictionary, Business Glossary, Data Catalog, and data modeling tools. DA’s standards and guidelines are created in part by following industry standards such as ISO/IEC 11179, Data Catalog Vocabulary (DCAT), and Dublin Core.

CMS Standard Terms

The DA team maintains a growing list of data terms used by CMS project teams, the CMS Standard Terms List (STL). The list is made available for download and kept up to date on the DA page. Data names in Logical Data Models should be composed of one or more terms from the STL and adhere to its naming conventions. Projects that use this terminology for their data names also provide a common understanding to the rest of the CMS data community, making it easier for the enterprise as a whole to function seamlessly. Project teams may request a change or addition to the list by filling out the Standard Term Request Form, available on the DA page, and submitting it to the DA Mailbox.

DA Consultations and Data Model Reviews

To check whether a database design meets CMS standards, the project team should contact the DA team within the Division of Enterprise Architecture (DEA) at the DA Mailbox to validate their data artifacts, including logical data models, data dictionaries, and other artifacts. Project teams may also set up consultations with the DA team for general data design and architecture advice, as well as potential solution engagements to explore new data management technologies. To submit a request for consultation, please reference the instructions on the DA page.

Rationale

CMS’s standards aim to promote better interoperability among applications by ensuring data is clearly, simply, and consistently described (named and defined) for consumers.

Database Administration

For information on the roles and responsibilities of a DBA at CMS, and for guidelines on commonly used database platforms such as SQL Server, Oracle, and DB2, refer to the following publication from the Database Administration team:

Rationale

CMS maintains separate roles and responsibilities for Central DBAs and Local DBAs. The Central DBA has final approval for all database objects running on all database servers. The Local DBA is the day-to-day operational support person responsible for the activities necessary to implement and maintain the database for a project.

Software Configuration Management

CMS requires software configuration management (SCM) in accordance with CMS Risk Management Handbook (RMH) Chapter 5: Configuration Management. The Configuration Management chapter provides additional details.

Related CMS ARS Security Controls include: SA-10 - Developer Configuration Management, CM-1 - Configuration Management Policy and Procedures, and CM-2 - Baseline Configuration.

BR-SCM-1: All Source Code Must Be Checked in to Version Control

The source code is a valuable asset of the system and must be maintained via version control. CMS does not specify a single enterprise repository; instead, each project is chartered to maintain its own repository.

Source code includes all build scripts, test scripts, test data generators, packaging specifications, Open Source Software, and any other files used as input to the build and packaging process.

Rationale

Version control is necessary from the standpoints of asset management and security.

Open Source Software comes without a manufacturer’s warranty. Thus, it is necessary to maintain the original source under version control. This facilitates testing and maintenance activities, which often require source code to fully diagnose and correct issues.

Related CMS ARS Security Controls include: SA-10 - Developer Configuration Management and CM-2 - Baseline Configuration.

BR-SCM-2: All Code Must Be Baselined Prior to Release into Implementation, Validation, and ATO(ed) Production Environments

All version control systems have mechanisms for baselining a release of software. Whether this is called baselining, labeling, or tagging, the effect is that a set of files, each at their own revision level, are identified as belonging to a named baseline. CMS does not specify any naming convention for baselines.

The baseline manifest must contain a summary of the errors, change requests (CR), new features, and other changes that differ from the prior release.

Rationale

Identifying a baseline is a key step in configuration management. It is necessary to know what release of code was used to build software in test and production. Baselining is accomplished differently in every version control tool, but the effect is the same—to label or tag a specific level of code.

Related CMS ARS Security Controls include: SA-10 - Developer Configuration Management, CM-2 - Baseline Configuration, and CM-3 - Configuration Change Control.
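As an illustration, the manifest summary required above could be generated from exported change records. The following Python sketch is a minimal example in which the record fields, change kinds, and tag name are illustrative assumptions, not a CMS-prescribed format.

```python
# Sketch: assemble a baseline manifest summarizing changes since the prior
# release. The record fields and tag name are illustrative assumptions, not a
# CMS-prescribed format.

def build_manifest(baseline_tag, changes):
    """Group change records by kind (defect, CR, feature) for the manifest."""
    summary = {"defect": [], "cr": [], "feature": [], "other": []}
    for change in changes:
        kind = change.get("kind", "other")
        # Unknown kinds fall back to the "other" bucket.
        summary.get(kind, summary["other"]).append(
            f'{change["id"]}: {change["title"]}'
        )
    return {"baseline": baseline_tag, "changes": summary}

changes = [
    {"id": "DEF-101", "kind": "defect", "title": "Fix date rollover"},
    {"id": "CR-17", "kind": "cr", "title": "Add audit column"},
    {"id": "FEAT-9", "kind": "feature", "title": "New eligibility report"},
]
manifest = build_manifest("release-2024.2", changes)
```

In practice, the change records would come from the project's defect tracking system export rather than being listed inline.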

RP-SCM-3: Apply Database-Oriented Configuration Management Practices

Use database-oriented configuration management practices to ensure that changes to the database schemas are synchronized with the schemas expected by program source code.

Rationale

Without database-oriented configuration management practices, it is easy for database schemas to get out of sync with the source code intended to manipulate them. Databases evolve by applying changes to an existing database, but deployed code evolves by installing new code each time.
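One common technique is to record a schema version inside the database and have the application verify it at startup. The following Python sketch uses SQLite for illustration; the table name and version scheme are assumptions, not CMS standards.

```python
# Sketch: verify that the schema version recorded in the database matches the
# version this build of the code expects. Table and version are illustrative.
import sqlite3

EXPECTED_SCHEMA_VERSION = 3  # the schema this build of the code was written against

def current_schema_version(conn):
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    return row[0] if row[0] is not None else 0

def check_schema(conn):
    found = current_schema_version(conn)
    if found != EXPECTED_SCHEMA_VERSION:
        raise RuntimeError(
            f"database schema version {found} does not match "
            f"expected version {EXPECTED_SCHEMA_VERSION}"
        )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE schema_version (version INTEGER NOT NULL)")
conn.execute("INSERT INTO schema_version (version) VALUES (3)")
check_schema(conn)  # passes: database and code agree on the schema
```

Failing fast at startup turns a silent schema mismatch into an explicit, diagnosable error.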

BR-SCM-4: Configurations Must Be Checked in to Version Control

Configurations stored in textual form (except for encryption keys and passwords) must be checked in to version control to facilitate system configuration management.

Rationale

Controlling the versions of configurations makes auditing possible and reduces the risk of unexpected configurations.

Related CMS ARS Security Controls include: CM-6 - Configuration Settings and CM-6(1) - Automated Management, Application, and Verification.

Defect and Issue Tracking

To maintain quality, it is essential that projects track defects and change requests in a defect tracking system. The following business rules and recommended practices govern defect and issue tracking (DIT) on CMS projects.

BR-DIT-1: All CMS Software Development Projects Must Use a Defect Tracking System

Each software system at CMS must use a defect tracking system to track and manage defects. System maintainers may select either a COTS package or a program-specific custom solution. Either is acceptable, provided the full defect and issue history (along with all attachments) can be easily exported and transferred to subsequent contractors.

Rationale

The defect history is essential for understanding the evolution of a software system.

Related CMS ARS Security Controls include: SI-2 - Flaw Remediation.

BR-DIT-2: A Defined Defect Classification Standard Is Mandatory

Projects must define and document their classification scheme for defects.

Rationale

Without a common standard, it is not possible to gather the necessary metrics to understand the evolution of the software.

The following four classification standards are available:

In addition, a software system maintainer (organization) may propose an alternative classification standard.
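A project's documented classification scheme can also be encoded directly in code so that every tool and report shares one vocabulary. The severity levels in the following Python sketch are purely illustrative and not a CMS-mandated classification.

```python
# Sketch: encode the project's defect classification scheme so every report
# uses the same vocabulary. These severity levels are illustrative, not a
# CMS-mandated standard.
from enum import Enum

class Severity(Enum):
    CRITICAL = 1  # system unusable, no workaround
    MAJOR = 2     # major function impaired, workaround exists
    MINOR = 3     # limited functional impact
    COSMETIC = 4  # no functional impact

def classify(severity_text):
    """Map a free-text severity field onto the documented scheme."""
    return Severity[severity_text.strip().upper()]

print(classify("major"))  # Severity.MAJOR
```

With a single authoritative scheme, metrics such as defect counts by severity can be aggregated consistently across releases.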

RP-DIT-3: Defects Should Be Correlated to Baselines

Baselines, sometimes recorded as commits or tags, should be referenced in defect reports. To fix a defect, it is necessary to track what was changed.

Rationale

Baselines help release managers record the change history and understand in business terms what is changing.

Software Build and Integration

Software build and integration (SBI) is the process of converting source code into the target representation. In this set of disciplines, code produced by many developers is integrated into a single build and the output is a set of packages for installation. CMS has established the following business rules and recommended practices for SBI.

BR-SBI-1: All Builds Must Occur in Controlled Environments

Build servers must be isolated from external sources of source code such as developers’ computers, with two exceptions: Package or Library servers, and authoritative version control systems.

In addition, CMS requires identification and inventory of the software tools used in the generation of code, such as compilers, CASE tools, etc. At a minimum, this should be documented within the project artifacts. In some environments, this information can also be recorded in configuration files stored under version control.

Build control files, such as Makefiles, Apache Maven POM (Project Object Model), or Apache ANT build.xml files, must be obtained from version control prior to building.

Rationale

All software on a production server must be auditable. Inclusion of code, configuration, or other uncontrolled data in a build constitutes an avoidable security vulnerability.

Related CMS ARS Security Controls include: SI-7 - Software, Firmware, and Information Integrity.

BR-SBI-2: All Production-Deployed Custom Code Must Be Built and Installed from Version-Controlled Source Code

All custom-written sources must come from a version control system.

Rationale

All software on a production server must be auditable.

Installing or modifying code in production without first checking it in to a version control tool constitutes a project risk and security vulnerability.

BR-SBI-3: Production Builds Must Have Zero Compile Errors

Production builds must not have known compile-time errors (excluding warnings and informational notices). Suppressing errors does not constitute compliance with this business rule.

Rationale

Compilation errors constitute a project technical risk. Many times, the compiler will not produce an output. In other cases, the compiler will make a best guess. Either outcome represents an avoidable technical risk.

CMS strongly recommends that production code have zero compile-time warnings and that warnings not be suppressed through compiler options and flags. Because different compilers classify and report warnings differently, it is not possible to issue a single business rule on the subject.

RP-SBI-4: Use Explicit Library and Build Dependency Management

Packages typically depend on other software packages or libraries to conduct builds. All software library and package dependencies should be documented. The documentation may be a human-readable document or, preferably, a machine-parseable specification, and should be checked in to version control.

A corollary to this rule is that production builds must use explicit dependencies and not automatically upgrade to the latest versions of packages unless the project has planned for a full regression test. A system checkout report should be produced to ensure that the inventory of packages is as expected and to flag any combinations of packages that have known (declared) incompatibilities. This is, of course, package-system dependent.

Rationale

Library and build dependency management is an essential part of configuration management for builds. Modern build tools such as Apache Ivy (part of Apache ANT), Apache Maven, and Gradle all use package dependency management, which makes it possible to trace the exact versions used to build a baseline release of code.
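A system checkout report of the kind described above can be produced by comparing the installed package inventory against the pinned, version-controlled manifest. In this Python sketch, the package names and versions are illustrative assumptions.

```python
# Sketch: verify the installed package inventory matches the pinned,
# version-controlled dependency manifest. Names and versions are illustrative.

PINNED = {"framework-core": "2.4.1", "db-driver": "11.2.0"}

def checkout_report(installed):
    """Compare installed package versions against the pinned manifest."""
    problems = []
    for name, want in PINNED.items():
        have = installed.get(name)
        if have is None:
            problems.append(f"missing: {name} {want}")
        elif have != want:
            problems.append(f"version mismatch: {name} {have} != {want}")
    for name in sorted(set(installed) - set(PINNED)):
        problems.append(f"unexpected package: {name}")
    return problems

# A drifted environment produces a non-empty report.
report = checkout_report({"framework-core": "2.4.1", "db-driver": "11.3.0"})
```

An empty report means the build environment matches the manifest exactly; anything else should block the build.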

RP-SBI-5: Consider Instituting Continuous Integration

Continuous integration is the practice of continuously checking out source code, building it to ensure that all parts integrate, and then running automated unit and integration tests.

Rationale

Continuous integration provides an insight into product quality, allowing early identification of issues (particularly subsystem / module integration issues) in the development cycle. Continuous Integration also provides an easy place to introduce quality improvement activities, such as automated regression tests or static analysis.

Packaging and Delivery

Packaging is the process of bundling related object files into an archive suitable for installation on a system. A target release may comprise one or more target packages, each of which may be installed on a different system. The target release will provide the recommended order for installing these packages on the target systems and should also include a copy of the release notes.

A source release is the set of source code along with a list of commercial and custom software required to build the software. Source releases should also include a copy of release notes.

Delivery places the packages in a repository for later deployment. The package catalog on each server tracks the software currently deployed on that server. This is the normal operating mode for distributed systems. Mainframes may follow different conventions for managing the installed software assets inventory.

CMS has established the following business rules and recommended practices for packaging and delivery (PD).

BR-PD-1: Software Must Be Packaged for Deployment

A package includes a manifest listing the following:

  1. All object files, configuration files, and other files necessary for operation, along with installation and de-installation instructions in either machine-readable or human-readable form. Machine-readable form is preferred.
  2. A complete inventory of any third-party components, with version numbers.

The rest of the package is the software itself, along with instructions for installing, updating, or removing the software.

Packaged software must be installable via a single command, in a standard package format, for each supported target platform and operating system.

Rationale

Packaging software allows system operators to install packages, which also update operating system or language system catalogs for easier configuration management. This helps ensure that systems are operating with the expected software, which reduces both operational and security risks.

Single-step installation and removal is key to reducing otherwise error-prone installation procedures.

Related CMS ARS Security Controls include: SI-3 - Malicious Code Protection.
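A machine-readable manifest can be produced at packaging time. The Python sketch below records a checksummed file inventory and a third-party component inventory; the field names are illustrative assumptions, not a CMS-prescribed manifest format.

```python
# Sketch: build a machine-readable package manifest listing the file inventory
# (with checksums) and third-party components. Field names are illustrative.
import hashlib

def build_package_manifest(files, third_party):
    """files: path -> content bytes; third_party: name -> version."""
    return {
        "files": [
            {"path": path, "sha256": hashlib.sha256(data).hexdigest()}
            for path, data in sorted(files.items())
        ],
        "third_party": third_party,
    }

manifest = build_package_manifest(
    files={"bin/app": b"binary contents", "etc/app.conf": b"port=8443\n"},
    third_party={"openssl": "3.0.13", "zlib": "1.3.1"},
)
```

The checksums let operators verify at deployment time that the installed files are exactly the ones that were packaged.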

BR-PD-2: Software Target Packaging Must Be in Either the Operating System or Language Platform Native Form

Examples of Operating System (OS) native installation packaging are shown in Table 6. Packaging System by Platform (Illustrative not Normative).

Table 6. Packaging System by Platform (Illustrative not Normative)

Operating System: Mechanism

Red Hat Enterprise Linux or CentOS: RPM, which is installed with YUM
SUSE Enterprise Linux: RPM, which is installed with Zypper
Debian Linux: DEB, which is installed with APT
Windows: MSI, which is installed manually or via Microsoft System Center Configuration Manager or equivalent
Oracle / Sun Solaris: Sun Packaging system
IBM Mainframe platform (z/OS): At CMS, Endevor is used to perform installation using package control.

Installation specifications may include installation and de-installation code. Mobile platforms may have different installers based on manufacturer.

Some language platforms include packaging standards. These are acceptable as a packaging mechanism for CMS. Table 7. Language-Specific Packaging Standards presents recognized language-specific platform packaging standards.

Table 7. Language-Specific Packaging Standards

Programming Language: Packaging Format

Ruby: Gems
Java: JAR, WAR, EAR
Python: PIP
Node.JS: Node Package Manager (NPM)
Microsoft .NET: NuGet Packages
Perl: Perl Libraries (CPAN)

The packaging formats in Table 7 can produce an inventory on demand of the software installed on a system.

Note: Certain programming languages (such as Ada, Java, and PL/SQL) have syntactic units called packages. These do not constitute “packaging” under this definition; they provide modularity but do not specify a binary release package format.

BR-PD-3: Database Changes Must Include Back-Out Scripts

When packaged database scripts change the Data Definition Language (DDL) or include Data Manipulation Language (DML) statements, back-out scripts must be provided in the event the script must be rolled back and the database state restored.

Rationale

The capability to restore the database to the pre-release state is an essential risk mitigation technique.
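Forward and back-out scripts can be exercised together in a lower environment before release. This Python sketch uses SQLite for illustration; the table, column names, and rebuild-style back-out are assumptions (a production back-out script would also restore constraints and indexes).

```python
# Sketch: pair a forward DDL change with a back-out script and verify that the
# back-out restores the prior table shape. Names are illustrative assumptions.
import sqlite3

FORWARD = "ALTER TABLE claims ADD COLUMN audited INTEGER DEFAULT 0"
# Older SQLite lacks DROP COLUMN, so this back-out rebuilds the table -- a
# common pattern when no direct inverse DDL statement exists. Note this sketch
# does not restore constraints or indexes, which a real back-out script must.
BACKOUT = [
    "CREATE TABLE claims_old AS SELECT id, amount FROM claims",
    "DROP TABLE claims",
    "ALTER TABLE claims_old RENAME TO claims",
]

def columns(conn, table):
    return [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (id INTEGER PRIMARY KEY, amount REAL)")
conn.execute(FORWARD)
assert "audited" in columns(conn, "claims")   # forward change applied
for stmt in BACKOUT:
    conn.execute(stmt)                        # back-out restores prior shape
```

Running both scripts in sequence in a test database is a cheap way to catch a back-out script that was never actually tried.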

RP-PD-4: The Package Manifest Should Include a List of All Defects Corrected in the Release

Packages should include a list of all defects corrected. There should be a defect number for each defect and a one-line summary or abstract. Any known and uncorrected defects related to a package should be identified in the same manner.

Rationale

This list of defects, which helps operations understand the impact of changes, can be generated automatically from data in the defect tracking system and the version control system.

RP-PD-5: Changes Applied to Databases Should Be Recorded in the Database Itself

It is often necessary to apply changes to a database that alter its schema or data. To apply changes in an idempotent fashion, it is helpful to record in the database which changes have already been applied. This record prevents double application of a change. It also facilitates auditing the changes applied to a database, which is useful during operations. Please refer to The Agile Data (AD) Method for other strategies.
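One way to record applied changes is a ledger table consulted before each change is applied, sketched below in Python with SQLite; the ledger table name and change identifiers are illustrative assumptions.

```python
# Sketch: record each applied change in the database itself so re-running a
# release is idempotent. The ledger table name is an illustrative assumption.
import sqlite3

def apply_change(conn, change_id, statements):
    """Apply a change once; skip it if the ledger shows it was already run."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS applied_changes "
        "(change_id TEXT PRIMARY KEY, applied_at TEXT DEFAULT CURRENT_TIMESTAMP)"
    )
    already = conn.execute(
        "SELECT 1 FROM applied_changes WHERE change_id = ?", (change_id,)
    ).fetchone()
    if already:
        return False  # change was applied earlier; skip it
    for stmt in statements:
        conn.execute(stmt)
    conn.execute("INSERT INTO applied_changes (change_id) VALUES (?)", (change_id,))
    return True

conn = sqlite3.connect(":memory:")
ddl = ["CREATE TABLE providers (id INTEGER PRIMARY KEY, name TEXT)"]
apply_change(conn, "2024-07-001", ddl)  # applied
apply_change(conn, "2024-07-001", ddl)  # skipped: already in the ledger
```

The ledger also doubles as an audit trail of exactly which changes, in which order, have been applied to a given database.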

RP-PD-6: Support A/B Testing of User Interfaces

Providing feature flags or other kinds of runtime configuration supports A/B testing and allows different groups of users to experience slightly different versions of a web site. This is particularly useful for large-volume web sites where the user experience is crucial to the success of the program.

Rationale

A/B testing is a technique that allows CMS to offer a better user experience by scientifically testing user reactions to two or more different variants of the web site.

If A/B testing is valuable for an application, certain design changes can be applied to support A/B testing and should be considered at the outset.

Deployment

Deployment is the activity of installing or upgrading an environment with installation packages from a trusted repository. Deployment may require taking a server down or placing it into maintenance mode, during which the server is either unavailable or operates with limited availability (for example, read-only mode). CMS has established the following business rules and recommended practices for deployment (D).

BR-D-1: Developers Do Not Have Unsupervised Administrative Access to Production Servers

Developers must not have unsupervised administrative access to production servers. This requirement is enforceable, for example, by having separate operations staff access production servers or by pairing developers with operations or management staff who supervise access to production servers.

Rationale

For reasons of separation of duties (CMS ARS Security Control AC-5) and Least Privilege (CMS ARS Security Control AC-6), developers typically do not have access to production systems.

Related CMS ARS Security Controls include: AC-2 - Account Management, AC-5 - Separation of Duties, AC-6 - Least Privilege, and CM-5 - Access Restrictions for Change.

BR-D-2: All Installation and Back-Out Scripts Must Have Been Tested in Lower Environments Prior to Use in Production

All installation, upgrade, removal, and back-out scripts must have been tested (as with any custom software) in lower environments (development, validation, or integration) prior to use in production.

Rationale

Any software run in production must have been tested in lower environments in accordance with the Release Management guidance in this volume. Otherwise, the probability is great that the back-out scripts will not work when used in production.

RP-D-3: Support Rolling Deployment

One recommended practice is rolling deployment. In a rolling deployment, a portion of the web site's servers is removed from the load balancer, taken offline, updated, brought back online, and returned to the load balancer; the process then repeats on another portion until the entire site has been migrated to the new code. This technique can be combined with feature flags: configuration settings that activate a new feature across all servers, or only a portion, as needed.

Rationale

A rolling deployment allows the performance of system upgrades with reduced or no outage because both the old and new system are available for use during the transition to the new system.
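The batch-by-batch process described above can be sketched as follows; the server pool and load-balancer behavior are simulated, illustrative assumptions rather than a real deployment tool.

```python
# Sketch: a rolling deployment over a pool of servers, updating one batch at a
# time so the remainder keep serving traffic. Server names are illustrative.

def rolling_deploy(pool, new_version, batch_size=1):
    """Yield the list of in-service servers while each batch is updated."""
    servers = list(pool)
    for start in range(0, len(servers), batch_size):
        batch = servers[start:start + batch_size]
        in_service = [s for s in servers if s not in batch]  # pulled from LB
        for server in batch:
            pool[server] = new_version  # offline: install the new release
        yield in_service  # the batch rejoins the load balancer after this

pool = {"web1": "1.0", "web2": "1.0", "web3": "1.0"}
for live in rolling_deploy(pool, "2.0"):
    assert live, "at least one server stays in service during each batch"
```

The invariant worth checking in any rolling scheme is the one asserted here: the in-service set is never empty while a batch is being updated.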

RP-D-4: Use Feature Flags to Gradually Introduce New Features to Users

Feature flags are runtime flags that allow a business capability to be turned on or off at runtime in production. They allow for decoupling the time of deployment from the time of use.

Rationale

Feature flags give the business control over which features to expose to users. Flags enable features to be turned on, and allow easy deactivation of a change that causes issues. With sufficient intelligence, a feature flag can also enable A/B testing, exposing different user groups to different capabilities to gauge the relative difference in adoption or sentiment between the groups.
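A minimal feature-flag lookup with deterministic bucketing might look like the following Python sketch; the flag names, rollout percentage, and hashing choice are illustrative assumptions.

```python
# Sketch: runtime feature flags with deterministic per-user bucketing. The
# flag name and rollout percentage are illustrative assumptions.
import zlib

FLAGS = {"new_checkout": {"enabled": True, "rollout_percent": 50}}

def is_enabled(flag, user_id):
    cfg = FLAGS.get(flag)
    if not cfg or not cfg["enabled"]:
        return False
    # Hash the flag and user ID so each user consistently lands in the same
    # bucket, giving stable A/B cohorts across requests.
    bucket = zlib.crc32(f"{flag}:{user_id}".encode()) % 100
    return bucket < cfg["rollout_percent"]

# The same user always gets the same answer for the same flag.
assert is_enabled("new_checkout", "user-42") == is_enabled("new_checkout", "user-42")
```

Because the bucket is derived from a hash rather than stored state, the check needs no database lookup and behaves identically on every server.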

RP-D-5: Deployment Should Integrate with Monitoring to Coordinate Outages

Because deployment causes some systems to be taken offline, deployments may trigger alarms in the monitoring system. This is avoidable by informing the monitoring system that the deployment action is part of a deliberate, planned outage.

Rationale

It is preferable to coordinate any planned outages, such as deployment or maintenance activities that could take part of or an entire system offline. Integration with monitoring reduces coordination errors and is more efficient than human coordination techniques, such as telephone calls.

RP-D-6: Support Rollback of Package Installation

Every package designed for installation into production should be capable of rollback. Rollback may entail undoing database changes, such as schema changes.

Rationale

It is sometimes necessary to back out changes to the production system. If a package changes a schema, it should be possible to undo the change either by following a manual procedure or by executing a prepared script. Another alternative, used in highly virtualized environments, is to revert the entire virtual machine to a prior incarnation; however, this would not account for database schema changes.

RP-D-7: Support Automated Startup, Shutdown, and Maintenance Mode Entry / Exit

Application developers should provide automated startup and shutdown scripts for a delivered application. These scripts should be developed in collaboration with the appropriate monitoring team and be invoked by the tools the data center provides for scheduling services, such as Tivoli Workload Scheduler.

In addition, developers should provide scripts for entering and exiting a maintenance mode, if appropriate.

Rationale

Starting and stopping applications should be as straightforward as possible.

Maintenance modes offer a mechanism to allow limited access to application functionality during updates to capability, data backup, or some other limitation that prevents full system access. There should be a visual indication of maintenance mode as well as a flag for non-visual interfaces (if necessary).

Release Management

The purpose of the Release Management (RM) process is to govern and manage the release of software baselines throughout CMS environments in a reliable, efficient way. Given that industry is innovating rapidly in this area and there are many ways to conduct release management, this topic covers just a few recommended practices.

RP-RM-1: Establish and Follow Organizational Standards for Deployment of Custom Software

Use the Agency- or organizationally approved release management service for deployment of custom software into each CMS environment, in accordance with the approved RM process for deployments.

Rationale

For example, in IBM mainframe environments, Computer Associates’ Endevor system is used to perform deployments. Deviating from such a standard increases support and training costs and limits the ability to exchange resources and know-how between systems. Such deviation also increases the risk that different deployment methods might interfere with one another.

RP-RM-2: (Retired after TRA 2018R1): Contractors Must Deliver Certain Configuration Items

RP-RM-3: (Retired after TRA 2018R1): Minimum Acceptance Test Criteria