Introduction to Application Development

Purpose

The intent of this Application Development Guidelines chapter is to establish the core set of business application development guidelines for Centers for Medicare & Medicaid Services (CMS) software developers and maintainers. CMS is confident that consistent adherence to a standard set of development practices aligned with industry best practices will produce greater repeatability of process and higher quality in application delivery.

Given the rapid pace of technology evolution, CMS requires a common baseline for software engineering that is technology agnostic but relies on information technology (IT) industry best practices. The focus of this chapter is to disseminate the practices and business rules deemed most beneficial to the production of high-quality, secure software that is compatible with the CMS IT environment.

Scope

This chapter presents the mandatory business rules (BR) and the recommended software engineering best practices (RP) for CMS and CMS contractor partners working in the CMS Processing Environments. The guidance and policies stated in this chapter reflect agreed-upon CMS, industry, and government best practices, supporting the most viable approach for CMS: one that meets legislatively mandated security and privacy requirements as well as current technical standards and specifications.

Concepts and Terminology

Role Definitions for Application Development

Application development is a cross-disciplinary function, and producing quality software solutions requires the cooperation of multiple roles. Table 1. Software Engineering Roles presents the application development / software engineering roles and definitions critical to understanding and applying the guidance of this chapter. These roles and definitions are consistent with the Release Management guidance in this section and are commonly used across CMS.

Table 1. Software Engineering Roles

Business Analyst: The party eliciting and documenting requirements in collaboration with the application’s stakeholders.

System Developer (or simply Developer): The party producing the initial implementation of a software-based system, including such activities as design, coding, unit testing, integration testing, building, and releasing software.

System Maintainer (or simply Maintainer): The party maintaining an existing implementation of a software-based system, including such activities as producing software patches, tracking defects, producing change packages, and releasing software fixes. The system maintainer is responsible for producing a high-quality product, which includes functional capabilities (such as features) and non-functional requirements (such as performance, scale, and capacity).

System Operator (or simply Operator): The party operating a software-based system, including such activities as hosting applications, monitoring, allocating storage, performing backups, starting jobs, restoring systems, applying patches, upgrading systems, and maintaining inventory records. At CMS, this is typically a CMS Virtual Data Center (VDC) operator.

Hosting Provider: The party providing the network, computing, and storage resources (physical or virtual) used by the system. Although the role of hosting provider is distinct from that of the system operator, the same party may perform both roles. At CMS, this is typically a VDC operator, but it could also be a CMS-approved Cloud Service Provider (CSP).

Business Owner: The party or parties for whom the system is developed or maintained.

End User: The party or parties who will use the system.

Information System Security Officer (ISSO): The party in charge of adherence to security practices for the full life cycle of the system from the perspective of the project.

Software Assurance: The party responsible for running security analysis software and interpreting the reports to continually improve the security quality of the software. This is a quality assurance role with a focus on security. As defined by contract, the system maintainer, system operator, or others may perform this role.

Note: In this chapter, the system maintainer is responsible for any role not specifically assigned to another.

Business Rules and Recommended Practices

This chapter presents both business rules and recommended practices. Business rules reflect CMS standards and are mandatory; conformance to recommended practices is optional but highly encouraged. In addition, recommended practices may become future business rules at CMS’s discretion, following the Architecture Change Request (ACR) process described in the CMS TRA Foundation chapter.

The provided rationale for each business rule and recommended practice offers additional insight into the reason for the rule or practice as well as context for interpreting the rule or practice. The rationale does not constitute normative guidance.

Internal and External Quality

Internal Quality is the quality apparent to the software engineers working on the system. It reflects the code, test data, specifications, and all other component artifacts of the system. Internal quality comprises the attributes of performance, maintainability, scalability, operability, security, reliability, and resilience as understood by the engineering and operations staff as well as management.

External Quality is the quality that end users experience and includes end-user perception of system performance. User interfaces, reports, email notifications, and other forms of user-to-system communication are typical ways to observe external quality.

This chapter acknowledges the importance of both forms of quality.

Principles

Methodology Independence

The software engineering industry has produced various methodologies, ranging from waterfall and spiral to, most recently, Agile methods such as Scrum and Extreme Programming (XP). The guidance in this chapter takes no position on the merits or shortfalls of any method; however, it does establish standards for engineering discipline and practices that all methods must follow.

Technology Agnostic

The architecture described in this chapter meets the modern definition of service-oriented architecture (SOA), which is CMS’s standard. Any references to products and technologies are examples only: this chapter is technology and product agnostic. Therefore, unless a product or technology is specifically mandated, its adoption is a decision beyond the scope of this document.

In this chapter, the term “commercially available” encompasses both proprietary and open source software (OSS).

Introducing New Software

Application developers must perform due diligence to select software that is sustainable, well supported, and a good value for its CMS customer.

System Design Principles for Cloud and Virtualized Environments

This topic emphasizes design principles that enable effective use of the Cloud and other virtualized environments. These principles are essential to support CMS’s transformation of the bulk of its processing to more virtualized environments. The following topics describe relevant design approaches, issues, and security guidance applicable to each design principle. Additional information is available in the CMS Hybrid Cloud: Cloud Consumption Playbook.

Design for Scale

Software developers must design software that scales to meet business needs.

Related CMS Acceptable Risk Safeguards (ARS) Security Controls include: SA-2 - Allocation of Resources, SC-30 - Concealment and Misdirection, and SC-2 - Separation of System and User Functionality.

Design for Reliability and Resilience

Business requirements for availability determine whether to implement a system using a highly available design. These requirements are documented as part of disaster recovery (DR) planning. In addition, designs should account for rapid recovery in the event of an availability issue.
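By way of illustration only, the following Python sketch shows one common rapid-recovery technique: retrying a transiently failing call with exponential backoff and a bounded number of attempts. The operation and its exception type are hypothetical placeholders, not CMS-mandated interfaces.

    import random
    import time

    class TransientServiceError(Exception):
        """Hypothetical error raised when a dependency is temporarily unavailable."""

    def call_with_backoff(operation, max_attempts=5, base_delay=0.5):
        """Retry a transient failure with exponential backoff and jitter."""
        for attempt in range(1, max_attempts + 1):
            try:
                return operation()
            except TransientServiceError:
                if attempt == max_attempts:
                    raise  # escalate after the final attempt
                # Exponential backoff with jitter: 0.5 s, 1 s, 2 s, ... plus noise
                # so that many clients do not retry in lockstep.
                time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1))

Bounding the attempts keeps recovery fast while avoiding a retry storm against a dependency that is already struggling.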

Related CMS ARS Security Controls include: CP-9 - System Backup and CP-9(8) - Cryptographic Protection.

Design for Loose Coupling of Components

Service-oriented, Application Programming Interface (API)-based architectures encourage loose coupling of components, with benefits that include resilience, scalability, and flexibility.

The counterpart to loose coupling is tight cohesion, which requires that a service do only one thing. Another way to frame this goal is the single responsibility principle (SRP): a service should have only one reason to change. If a service must change for more than one reason, it probably does too many things.
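As a minimal sketch (with hypothetical service names, not a prescribed CMS design), the following Python example shows two narrowly scoped services composed through small interfaces. Each service has a single responsibility and therefore a single reason to change.

    from dataclasses import dataclass

    @dataclass
    class Claim:
        claim_id: str
        amount: float

    class ClaimValidationService:
        """Single responsibility: decide whether a claim is well formed."""
        def validate(self, claim: Claim) -> bool:
            return bool(claim.claim_id) and claim.amount > 0

    class ClaimNotificationService:
        """Single responsibility: report a claim's outcome to interested parties."""
        def notify(self, claim: Claim, accepted: bool) -> None:
            status = "accepted" if accepted else "rejected"
            print(f"Claim {claim.claim_id} was {status}")

    # The caller composes the services through their narrow interfaces, so
    # either implementation can change or be redeployed without touching the other.
    def process(claim, validator, notifier):
        notifier.notify(claim, validator.validate(claim))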

Design for Elasticity

Where scalability addresses an architecture’s ability to meet business needs, elasticity addresses the automation needed to respond to demand quickly, automatically, and within predefined limits. Architectures should account for the presence of this external control and for the startup time required before newly allocated resources are fully available. Because elastic growth directly drives spending, it is important to consider cost when establishing elasticity limits.
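A minimal sketch of the predefined-limits idea follows; the thresholds and node counts are illustrative assumptions, not CMS values. The function computes the capacity an autoscaler would request while clamping the result so automation cannot scale without bound.

    def desired_capacity(current_nodes, cpu_utilization,
                         min_nodes=2, max_nodes=10,
                         scale_up_at=0.75, scale_down_at=0.25):
        """Return the node count an autoscaler would request next.

        Decisions are clamped to predefined limits (min_nodes, max_nodes)
        so automation cannot run up unbounded cost; callers must still
        tolerate startup lag before newly allocated nodes serve traffic.
        """
        if cpu_utilization > scale_up_at:
            return min(current_nodes + 1, max_nodes)  # scale out, capped
        if cpu_utilization < scale_down_at:
            return max(current_nodes - 1, min_nodes)  # scale in, floored
        return current_nodes                          # within the comfort band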

Related CMS ARS Security Controls include: SA-2 - Allocation of Resources.

Design for Rapidly Deploying Environments

Architectures should be rapidly deployable, in an automated fashion, to the designated target environments. This ensures a seamless, repeatable process across the entire development and deployment life cycle.
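The following Python sketch illustrates the repeatability point under stated assumptions: the environment definitions are hypothetical and would, in practice, live in version-controlled configuration. Because only the target definition varies, the same scripted steps run against every environment.

    # Hypothetical environment definitions; real values belong in
    # version-controlled configuration, not hard-coded literals.
    ENVIRONMENTS = {
        "dev":  {"hosts": ["dev-app-01"], "replicas": 1},
        "test": {"hosts": ["test-app-01"], "replicas": 2},
        "prod": {"hosts": ["prod-app-01", "prod-app-02"], "replicas": 4},
    }

    def deploy(artifact: str, environment: str) -> None:
        """Run identical, automated steps against any designated target."""
        target = ENVIRONMENTS[environment]
        for host in target["hosts"]:
            # A real pipeline would push the artifact and verify health here.
            print(f"deploying {artifact} to {host} ({target['replicas']} replicas)")

    deploy("myapp-1.4.2.zip", "test")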

Design for Caching

For static data that does not change often (such as images, video, audio, Portable Document Format (PDF) files, JavaScript (JS) files, and Cascading Style Sheet (CSS) files), CMS recommends caching mechanisms that keep the data as close to the end user as possible. For example, these caches may be deployed outside a data center using a Content Distribution Network (CDN), such as Akamai, to place content as close as possible to the user. This closeness helps mitigate access latency and reduces load on intermediate systems, because requests are satisfied farther from core systems. However, the caching mechanism must check for changes at the origin so that it does not serve stale data, and security access control must be maintained.
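As one possible sketch (the fetch and version-check callbacks are caller-supplied placeholders, and access control is assumed to be enforced upstream), the following Python class serves cached static content while revalidating against the origin so that changed data is never served stale:

    import time

    class RevalidatingCache:
        """Keep static content close to the consumer, but verify freshness.

        Entries are served locally until max_age expires; after that, the
        origin's version tag (for example, an ETag) is compared so the
        cache never hands out data the origin has since changed.
        """
        def __init__(self, fetch, current_version, max_age=300):
            self._fetch = fetch                      # fetch(key) -> (data, version)
            self._current_version = current_version  # current_version(key) -> version
            self._max_age = max_age
            self._entries = {}                       # key -> (data, version, stored_at)

        def get(self, key):
            entry = self._entries.get(key)
            if entry:
                data, version, stored_at = entry
                if time.time() - stored_at < self._max_age:
                    return data                      # fresh: serve locally
                if self._current_version(key) == version:
                    # Unchanged at the origin: reuse the data, reset the clock.
                    self._entries[key] = (data, version, time.time())
                    return data
            data, version = self._fetch(key)         # miss or stale: go to origin
            self._entries[key] = (data, version, time.time())
            return data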

Design for Dynamic Data Near Processing

Distributed data center architecture introduces the risk of Internet latency and the added cost of bandwidth. Keeping dynamic data close to the processing elements that use it is a software best practice that reduces network latency and improves cost efficiency. Transferring data into and out of a Cloud architecture requires different design principles than transferring data within the Cloud. In some cases, it may be more efficient to transfer a large volume of data into a Cloud infrastructure to take advantage of parallel processing capabilities. Applications that consume data generated in a data center should be deployed locally to take advantage of lower latencies. Design decisions to deploy applications across data centers carry performance and cost implications that warrant caution.

This design principle emphasizes the cost of inter-data center (or inter-cloud) processing.

Design for Parallelization

Designing hardware and software to carry out calculations in parallel rather than in sequence leads to more efficient and faster processing. Operations on data, from request to storage, should take advantage of opportunities for parallel manipulation. Processing data collections can also take advantage of parallelization, with incoming requests distributed across multiple nodes for greater efficiency. The choice of parallelization may dictate the use of technology radically different from that used for small-scale or serial processing.

In addition, developers should consider methods and techniques that limit or avoid resource locking, a common cause of poorly performing parallel architectures.
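A minimal Python sketch of both points follows: independent records are distributed across worker processes, and because each worker operates only on its own input, no shared mutable state (and therefore no lock contention) is involved. The transform itself is a placeholder computation.

    from concurrent.futures import ProcessPoolExecutor

    def transform(record):
        """Placeholder for CPU-bound work on one independent record."""
        return sum(ord(ch) for ch in record)

    def transform_all(records):
        """Distribute independent records across worker processes.

        Each worker receives its own slice of the input, so results are
        combined only at the end and no locks are needed.
        """
        with ProcessPoolExecutor() as pool:
            return list(pool.map(transform, records))

    if __name__ == "__main__":  # required when spawning worker processes
        print(transform_all(["alpha", "beta", "gamma"]))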

Design for Security

CMS requires implementation of adequate security to protect all elements of a virtualized application. The security design must protect data-in-transit (refer to BR-SA-15) as well as data-at-rest (refer to BR-SA-16) according to specific rules, as summarized in the following paragraphs. Developers must meet or exceed the policy established in the latest published version of the CMS ARS as well as the business rules in the Security Services topic of the TRA Network Services section.

An important concept in designing applications with security in mind is the use of Threat Modeling (see RP-SS-8). Threat Modeling is a process for capturing, organizing, and analyzing a variety of application and threat information. It enables informed decision-making about application security risks. In addition to producing a model, the process also produces a prioritized list of security improvements to the conception, requirements gathering, design, or implementation of an application.

Threat Modeling works to identify, communicate, and understand threats and mitigations within the context of protecting something of value. It is a structured approach to identifying and prioritizing potential threats to a system and to determining the value that potential mitigations would have in reducing or neutralizing those threats.
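By way of illustration only, the sketch below records threat model entries as structured data and orders them by a simple risk score (likelihood multiplied by impact). The STRIDE categories, scoring scheme, and example threats are illustrative assumptions, not a CMS-mandated method.

    from dataclasses import dataclass

    @dataclass
    class Threat:
        """One entry in a threat model, scored for prioritization."""
        description: str
        category: str      # e.g., a STRIDE category such as "Tampering"
        likelihood: int    # 1 (rare) through 5 (almost certain)
        impact: int        # 1 (negligible) through 5 (severe)
        mitigation: str

        @property
        def risk(self) -> int:
            return self.likelihood * self.impact

    threats = [
        Threat("Unsigned update package accepted", "Tampering", 3, 5,
               "Require signed artifacts in the release pipeline"),
        Threat("Verbose errors expose stack traces", "Information Disclosure", 4, 2,
               "Return generic errors; log details server side"),
    ]

    # Highest-risk entries first: the prioritized list of security improvements.
    for t in sorted(threats, key=lambda t: t.risk, reverse=True):
        print(f"[{t.risk:>2}] {t.category}: {t.description} -> {t.mitigation}")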

The Threat Modeling process is simple for any project team to understand and execute, regardless of prior experience. The use of Threat Modeling:

  • Helps Application Development Organization (ADO) teams improve the security and compliance of their applications.
  • Provides documents to support and improve compliance in a variety of situations (e.g., internal or external assessments, ATO, impact analyses).
  • Aids penetration testing by providing information about the threats a system could face.

For detailed information on Threat Modeling and how to perform it, see:

Digital Service Delivery and Human Centered Design

CMS systems directly impact constituents. As a result, user-centric design emphasizes ease of use and empathy for the people interacting with their government. CMS has long been a leader in digital service delivery, converting paper-based processes to fully digital solutions. This transformation helps reduce costs to the taxpayer while simultaneously increasing reach and impact. CMS embraces practices that help deliver on these principles.