
Chapter 7: Design and Development

Jonathan Valvano and Ramesh Yerraballi

 

In this chapter, we will begin by presenting a general approach to modular design. Specifically, we will discuss how to organize software blocks in an effective manner. The ultimate success of an embedded system project depends both on its software and hardware. Computer scientists pride themselves on their ability to develop quality software. Similarly, electrical engineers are well-trained in the processes to design both digital and analog electronics. Manufacturers, in an attempt to get designers to use their products, provide application notes for their hardware devices. The main objective of this class is to combine effective design processes together with practical software techniques in order to develop quality embedded systems. As the size and especially the complexity of the software increase, the software development changes from simple "coding" to "software engineering", and the required skills also vary along this spectrum. These software skills include modular design, layered architecture, abstraction, and verification. Real-time embedded systems are usually on the small end of the size scale, but nevertheless these systems can be quite complex. Therefore, both hardware and software skills are essential for developing embedded systems. Writing good software is an art that must be developed, and cannot be added on at the end of a project. Just like any other discipline (e.g., music, art, science, religion), expertise comes from a combination of study and practice. The watchful eye of a good mentor can be invaluable, so take the risk and show your software to others, inviting both praise and criticism. Good software combined with average hardware will always outperform average software on good hardware. In this chapter we will introduce some techniques for developing quality software.

 

Learning Objectives:

  • Understand the system development process as a life cycle
  • Take requirements and formulate a problem statement
  • Learn that an algorithm is a formal way to describe a solution
  • Define an algorithm with pseudocode or visually as a flowchart
  • Translate the flowchart to code
  • Test in the simulator (test → write code → test → write code … cycle)
  • Run on the real board

                  

                     Video 7.0. Introduction to Embedded System Design

 

7.1. Product Life Cycle

In this section, we will introduce the product development process in general. The basic approach is introduced here, and the details of these concepts will be presented throughout the remaining chapters of the book. As we learn software/hardware development tools and techniques, we can place them into the framework presented in this section. As illustrated in Figure 7.1, the development of a product follows an analysis-design-implementation-testing-deployment cycle. For complex systems with long life-spans, we traverse the life cycle multiple times. For simple systems, a one-time pass may suffice.

 

Figure 7.1. Product life cycle.

                  

                     Video 7.1. Product Life Cycle and Requirements

During the analysis phase, we discover the requirements and constraints for our proposed system. We can hire consultants and interview potential customers in order to gather this critical information. A requirement is a specific parameter that the system must satisfy. We begin by rewriting the system requirements, which are usually written in general form, into a list of detailed specifications. In general, specifications are detailed parameters describing how the system should work. For example, a requirement may state that the system should fit into a pocket, whereas a specification would give the exact size and weight of the device. For example, suppose we wish to build a motor controller. During the analysis phase, we would determine obvious specifications such as range, stability, accuracy, and response time. There may be less obvious requirements to satisfy, such as weight, size, battery life, product life, ease of operation, display readability, and reliability. Often, improving the performance on one parameter can be achieved only by decreasing the performance of another. This art of compromise defines the tradeoffs an engineer must make when designing a product. A constraint is a limitation within which the system must operate. The system may be constrained by such factors as cost, safety, compatibility with other products, use of the same electronic and mechanical parts as other devices, interfaces with other instruments and test equipment, and development schedule. The following measures are often considered during the analysis phase of a project:

Safety: The risk to humans or the environment

Accuracy: The difference between the expected truth and the actual parameter

Precision: The number of distinguishable measurements

Resolution: The smallest change that can be reliably detected

Response time: The time between a triggering event and the resulting action

Bandwidth: The amount of information processed per time

Maintainability: The flexibility with which the device can be modified

Testability: The ease with which proper operation of the device can be verified

Compatibility: The conformance of the device to existing standards

Mean time between failure: The reliability of the device, the life of a product

Size and weight: The physical space required by the system

Power: The amount of energy it takes to operate the system

Nonrecurring engineering cost (NRE cost): The one-time cost to design and test

Unit cost: The cost required to manufacture one additional product

Time-to-prototype: The time required to design, build, and test an example system

Time-to-market: The time required to deliver the product to the customer

Human factors: The degree to which our customers like/appreciate the product

 

Checkpoint: What’s the difference between a requirement and a specification?

The following is one possible outline of a Requirements Document. IEEE publishes a number of templates that can be used to define a project (IEEE STD 830-1998). A requirements document states what the system will do. It does not state how the system will do it. The main purpose of a requirements document is to serve as an agreement between you and your clients describing what the system will do. This agreement can become a legally binding contract. Write the document so that it is easy to read and understand by others. It should be unambiguous, complete, verifiable, and modifiable.

1. Overview

  1.1. Objectives: Why are we doing this project? What is the purpose?

  1.2. Process: How will the project be developed?

  1.3. Roles and Responsibilities: Who will do what?  Who are the clients?

  1.4. Interactions with Existing Systems: How will it fit in?

  1.5. Terminology: Define terms used in the document.

  1.6. Security: How will intellectual property be managed?

2. Function Description

  2.1. Functionality: What will the system do precisely?

  2.2. Scope: List the phases and what will be delivered in each phase.

  2.3. Prototypes: How will intermediate progress be demonstrated?

  2.4. Performance: Define the measures and describe how they will be determined.

  2.5. Usability: Describe the interfaces. Be quantitative if possible.

  2.6. Safety: Explain any safety requirements and how they will be measured.

3. Deliverables

  3.1. Reports: How will the system be described?

  3.2. Audits: How will the clients evaluate progress?

  3.3. Outcomes: What are the deliverables? How do we know when it is done?

 

Observation: To build a system without a requirements document means you are never wrong, but never done.

When we begin the design phase, we build a conceptual model of the hardware/software system. It is in this model that we exploit as much abstraction as appropriate. The project is broken into modules or subcomponents. During this phase, we estimate the cost, schedule, and expected performance of the system. At this point we can decide if the project has a high enough potential for profit. A data flow graph is a block diagram of the system, showing the flow of information. Arrows point from source to destination. The rectangles represent hardware components, and the ovals are software modules. We use data flow graphs in the high-level design, because they describe the overall operation of the system while hiding the details of how it works. Issues such as safety (e.g., Isaac Asimov’s first Law of Robotics “A robot may not harm a human being, or, through inaction, allow a human being to come to harm”) and testing (e.g., we need to verify our system is operational) should be addressed during the high-level design. A data flow graph for a simple position measurement system is shown in Figure 7.2. The sensor converts position into an electrical resistance. The analog circuit converts resistance into the 0 to +3V voltage range required by the ADC. The ADC converts analog voltage into a digital sample. The ADC driver, using the ADC and timer hardware, collects samples and calculates voltages. The software converts voltage to position. Voltage and position data are represented as fixed-point numbers within the computer. The position data is passed to the OLED driver creating ASCII strings, which will be sent to the organic light emitting diode (OLED) module.

Figure 7.2. A data flow graph showing how the position signal passes through the system.
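This data flow can also be sketched in C. The following is a minimal sketch, not the driver developed later in the book: the function names (ADC_In, OLED_OutFixed), the 0 to 3000 mV fixed-point scale, and the linear position calibration are assumptions made only for illustration.

unsigned long ADC_In(void);             // assumed: returns a 12-bit sample, 0 to 4095
void OLED_OutFixed(unsigned long num);  // assumed: displays a fixed-point number

unsigned long Sample;    // raw 12-bit ADC sample
unsigned long Voltage;   // fixed-point voltage, 0.001 V units (0 to 3000)
unsigned long Position;  // fixed-point position, 0.001 cm units (assumed 0 to 2 cm range)

void RunPositionSystem(void){
  while(1){
    Sample   = ADC_In();              // ADC driver collects a sample
    Voltage  = (3000*Sample)/4095;    // convert sample to millivolts (fixed-point)
    Position = (2000*Voltage)/3000;   // assumed linear calibration: 0 to 3 V maps to 0 to 2 cm
    OLED_OutFixed(Position);          // OLED driver converts to ASCII and displays
  }
}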

A preliminary design includes the overall top-down hierarchical structure, the basic I/O signals, shared data structures, and overall software scheme. At this stage there should be a simple and direct correlation between the hardware/software systems and the conceptual model developed in the high-level design. Next, we finish the top-down hierarchical structure and build mock-ups of the mechanical parts (connectors, chassis, cables, etc.) and the user software interface. Sophisticated 3-D CAD systems can create realistic images of our system. Detailed hardware designs must include mechanical drawings. It is a good idea to have a second source, which is an alternative supplier that can sell us parts if the first source can’t deliver on time. Call graphs are a graphical way to define how the software/hardware modules interconnect. Data structures, which will be presented throughout the class, include both the organization of information and mechanisms to access the data. Again, safety and testing should be addressed during this low-level design.

A call graph for a simple position measurement system is shown in Figure 7.3. Again, rectangles represent hardware components, and ovals show software modules. An arrow points from the calling routine to the module it calls. The I/O ports are organized into groups and placed at the bottom of the graph. A high-level call graph, like the one shown in Figure 7.3, shows only the high-level hardware/software modules. A detailed call graph would include each software function and I/O port. Normally, hardware is passive and the software initiates hardware/software communication, but as we will learn in this book, it is possible for the hardware to interrupt the software and cause certain software modules to be run. In this system, the timer hardware will cause the ADC software to collect a sample. The timer interrupt service routine (ISR) gets the next sample from the ADC software, converts it to position, and displays the result by calling the OLED interface software. The double-headed arrow between the ISR and the hardware means the hardware triggers the interrupt and the software accesses the hardware.

Figure 7.3. A call graph for a simple position measurement system.

Observation: If module A calls module B, and B returns data, then a data flow graph will show an arrow from B to A, but a call graph will show an arrow from A to B.

The next phase involves developing an implementation. An advantage of a top-down design is that implementation of subcomponents can occur simultaneously. During the initial iterations of the life cycle, it is quite efficient to implement the hardware/software using simulation. One major advantage of simulation is that it is usually quicker to implement an initial product on a simulator versus constructing a physical device out of actual components. Rapid prototyping is important in the early stages of product development. This allows for more loops around the analysis-design-implementation-testing-deployment cycle, which in turn leads to a more sophisticated product.

Recent software and hardware technological developments have made significant impacts on the software development for embedded microcomputers. The simplest approach is to use a cross-assembler or cross-compiler to convert source code into the machine code for the target system. The machine code can then be loaded into the target machine. Debugging embedded systems with this simple approach is very difficult for two reasons. First, the embedded system lacks the usual keyboard and display that assist us when we debug regular software. Second, the nature of embedded systems involves the complex and real-time interaction between the hardware and software. These real-time interactions make it impossible to test software with the usual single-stepping and print statements.

The next technological advancement that has greatly affected the manner in which embedded systems are developed is simulation. Because of the high cost and long times required to create hardware prototypes, many preliminary feasibility designs are now performed using hardware/software simulations. A simulator is a software application that models the behavior of the hardware/software system. If both the external hardware and software program are simulated together, even though the simulated time is slower than the clock on the wall, the real-time hardware/software interactions can be studied.

During the testing phase, we evaluate the performance of our system. First, we debug the system and validate basic functions. Next, we use careful measurements to optimize performance such as static efficiency (memory requirements), dynamic efficiency (execution speed), accuracy (difference between expected truth and measured), and stability (consistent operation). Debugging techniques will be presented at the end of most chapters.

Maintenance is the process of correcting mistakes, adding new features, optimizing for execution speed or program size, porting to new computers or operating systems, and reconfiguring the system to solve a similar problem. No system is static. Customers may change or add requirements or constraints. To be profitable, we probably will wish to tailor each system to the individual needs of each customer. Maintenance is not really a separate phase, but rather involves additional loops around the life cycle.

Figure 7.1 describes top-down design as a cyclic process, beginning with a problem statement and ending up with a solution. With a bottom-up design we begin with solutions and build up to a problem statement. Many innovations begin with an idea, “what if…?” In a bottom-up design, one begins with designing, building, and testing low-level components. The low-level designs can be developed in parallel. Bottom-up design may be inefficient because some subsystems may be designed, built, and tested, but never used. As the design progresses the components are fit together to make the system more and more complex. Only after the system is completely built and tested does one define the overall system specifications. The bottom-up design process allows creative ideas to drive the products a company develops. It also allows one to quickly test the feasibility of an idea. If one fully understands a problem area and the scope of potential solutions, then a top-down design will arrive at an effective solution most quickly. On the other hand, if one doesn’t really understand the problem or the scope of its solutions, a bottom-up approach allows one to start off by learning about the problem.

7.2. Successive Refinement

Throughout the book in general, we discuss how to solve problems on the computer. In this section, we discuss the process of converting a problem statement into an algorithm. Later in the book, we will show how to map algorithms into assembly language. We begin with a set of general specifications, and then create a list of requirements and constraints. The general specifications describe the problem statement in an overview fashion, requirements define the specific things the system must do, and constraints are the specific things the system must not do. These requirements and constraints will guide us as we develop and test our system.

Observation:  Sometimes the specifications are ambiguous, conflicting, or incomplete.

There are two approaches to the situation of ambiguous, conflicting, or incomplete specifications. The best approach is to resolve the issue with your supervisor or customer. The second approach is to make a decision and document the decision.

Performance Tip: If you feel a system specification is wrong, discuss it with your supervisor. We can save a lot of time and money by solving the correct problem in the first place.

Successive refinement, stepwise refinement, and systematic decomposition are three equivalent terms for a technique to convert a problem statement into a software algorithm. We start with a task and decompose the task into a set of simpler subtasks. Then, the subtasks are decomposed into even simpler sub-subtasks. We make progress as long as each subtask is simpler than the task itself. During the task decomposition we must make design decisions as the details of exactly how the task will be performed are put into place. Eventually, a subtask is so simple that it can be converted to software code. We can decompose a task in four ways, as shown in Figure 7.4. The sequence, conditional, and iteration are the three building blocks of structured programming. Because embedded systems often have real-time requirements, they employ a fourth building block called interrupts. We will implement time-critical tasks using interrupts, which are hardware-triggered software functions. Interrupts will be discussed in more detail in Chapters 9, 10, and 11. When we solve problems on the computer, we need to answer these questions:

  • What does being in a state mean? (List the parameters of the state)
  • What is the starting state of the system? (Define the initial state)
  • What information do we need to collect? (List the input data)
  • What information do we need to generate? (List the output data)
  • How do we move from one state to another? (Specify actions we could perform)
  • What is the desired ending state? (Define the ultimate goal)

Figure 7.4. We can decompose a task using the building blocks of structured programming.

 

We need to recognize these phrases that translate to the four basic building blocks (a short C sketch after the list illustrates each one):

  • “do A then do B”                         → sequential
  • “do A and B in either order”             → sequential
  • “if A, then do B”                        → conditional
  • “for each A, do B”                       → iterative
  • “do A until B”                           → iterative
  • “repeat A over and over forever”         → iterative (condition always true)
  • “on external event do B”                 → interrupt
  • “every t msec do B”                      → interrupt
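The following minimal C sketch shows how each kind of phrase maps onto code. The task functions DoA, DoB, and Condition are hypothetical placeholders, and the interrupt cases are only indicated with a comment because interrupts are not implemented until Chapters 9, 10, and 11.

void DoA(void);          // hypothetical tasks used only for illustration
void DoB(void);
int  Condition(void);    // hypothetical test, returns true or false

void Sequence(void){     // "do A then do B"
  DoA();
  DoB();
}
void Conditional(void){  // "if A, then do B"
  if(Condition()){
    DoB();
  }
}
void Iterative(void){    // "do A until B"
  do{
    DoA();
  } while(!Condition());
}
void Forever(void){      // "repeat A over and over forever"
  while(1){
    DoA();
  }
}
// "on external event do B" and "every t msec do B" become interrupt service
// routines: hardware-triggered functions presented in Chapters 9, 10, and 11.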

Example 7.0. Build a digital door lock using seven switches.

Interactive Tool 7.0

The animation below shows how successive refinement is done in designing a solution to this problem. Click on the expand button to generate new flowcharts.

The system has seven binary inputs from the switches and one binary output to the door lock. The state of this system is defined as “door locked” or “door unlocked”. Initially, we want the door to be locked, which we can make happen by turning a solenoid off (make the binary output low). If the 7-bit binary pattern on the switches matches a pre-defined keycode, then we want to unlock the door (make the binary output high). Because the switches might bounce (flicker on and off) when changed, we will make sure the switches match the pre-defined keycode for at least 1 ms before unlocking the door. We can change states by writing to the output port for the solenoid. Like most embedded systems, there is no ending state. Once the switches no longer match the keycode, the door will lock again. The first step in successive refinement is to divide the tasks into those performed once (Initialization) and those repeated over and over (Execute lock), as shown in the left flowchart of Interactive Tool 7.0. As shown in the middle flowchart, we implement the decision: if the switches match the key, then unlock; if the switches do not match, we lock the door. To verify the user entered the proper keycode, the switches must match, and then match again 1 ms later. There are two considerations when designing a system: security and safety. Notice that the system will lock the door if power is removed, because power applied to the solenoid is what unlocks the door. For safety reasons, there should be a mechanical way to unlock the door from the inside in case of emergency.
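A minimal C sketch of this design is shown below. The I/O functions Switch_In, Lock_Out, and Delay1ms, as well as the keycode value, are hypothetical placeholders; the real port addresses, keycode, and timing would be fixed during implementation.

#define KEYCODE 0x65               // assumed 7-bit key pattern (example value only)

unsigned long Switch_In(void);     // assumed: returns the 7-bit switch pattern
void Lock_Out(unsigned long val);  // assumed: 1 energizes the solenoid (unlock), 0 locks
void Delay1ms(void);               // assumed: waits approximately 1 ms

int main(void){
  Lock_Out(0);                       // Initialization: door starts locked
  while(1){                          // Execute lock: repeated over and over
    if(Switch_In() == KEYCODE){      // switches match the keycode?
      Delay1ms();                    // wait 1 ms to reject switch bounce
      if(Switch_In() == KEYCODE){    // still matching after 1 ms?
        Lock_Out(1);                 // unlock the door
      }
    }else{
      Lock_Out(0);                   // switches do not match: lock the door
    }
  }
}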

 

7.3. Quality Design

Embedded system development is similar to other engineering tasks. We can choose to follow well-defined procedures during the development and evaluation phases, or we can meander in a haphazard way and produce code that is hard to test and harder to change. The ultimate goal of the system is to satisfy the stated objectives such as accuracy, stability, and input/output relationships. Nevertheless it is appropriate to separately evaluate the individual components of the system. Therefore in this section, we will evaluate the quality of our software. There are two categories of performance criteria with which we evaluate the “goodness” of our software. Quantitative criteria include dynamic efficiency (speed of execution), static efficiency (memory requirements), and accuracy of the results. Qualitative criteria center on ease of software maintenance. Another qualitative way to evaluate software is ease of understanding. If your software is easy to understand then it will be:

            Easy to debug (fix mistakes)

            Easy to verify (prove correctness)

            Easy to maintain (add features)

 

Common Error: Programmers who sacrifice clarity in favor of execution speed often develop software that runs fast, but is error-prone and difficult to change.

 

Golden Rule of Software Development

Write software for others as you wish they would write for you.

 

In order to evaluate our software quality, we need performance measures. The simplest approaches to this issue are quantitative measurements. Dynamic efficiency is a measure of how fast the program executes. It is measured in seconds or processor bus cycles. Static efficiency is the number of memory bytes required. Since most embedded computer systems have both RAM and ROM, we specify memory requirements in terms of global variables, stack space, fixed constants, and program size. The global variables plus the stack must fit into the available RAM. Similarly, the fixed constants plus the program must fit into the available ROM. We can also judge our embedded system according to whether or not it satisfies given requirements and constraints, such as accuracy, cost, power, size, reliability, and timetable.

Qualitative performance measurements include those parameters to which we cannot assign a direct numerical value. Often in life the most important questions are the easiest to ask, but the hardest to answer. Such is the case with software quality. Therefore, we ask the following qualitative questions. Can we prove our software works? Is our software easy to understand? Is our software easy to change? Since there is no single approach to writing the best software, we can only hope to present some techniques that you may wish to integrate into your own software style. In fact, this book devotes considerable effort to the important issue of developing quality software. In particular, we will study self-documented code, abstraction, modularity, and layered software. These issues have a profound effect on the bottom-line financial success of our projects. However, because there is often no immediate and direct relationship between software quality and profit, we may be mistakenly tempted to dismiss the importance of quality, even though the relationship is quite real.

To get a benchmark on how good a programmer you are, take the following two challenges. In the first challenge, find a major piece of software that you have written over 12 months ago, and then see if you can still understand it enough to make minor changes in its behavior. The second challenge is to exchange with a peer a major piece of software that you have both recently written (but not written together), then in the same manner, see if you can make minor changes to each other's software.

Observation: You can tell if you are a good programmer if 1) you can understand your own code 12 months later, and 2) others can make changes to your code.

Good engineers employ well-defined design processes when developing complex systems. When we work within a structured framework, it is easier to prove our system works (verification) and to modify our system in the future (maintenance). As our software systems become more complex, it becomes increasingly important to employ well-defined software design processes. Throughout this book, a very detailed set of software development rules will be presented. This class focuses on real-time embedded systems written in C, but most of the design processes should apply to other languages as well. At first, it may seem radical to force such a rigid structure on software. We might wonder if creativity will be sacrificed in the process. True creativity is more about good solutions to important problems and not about being sloppy and inconsistent. Because software maintenance is a critical task, the time spent organizing, documenting, and testing during the initial development stages will reap huge dividends throughout the life of the software project.

Observation: The easiest way to debug is to write software without any bugs.

We define clients as programmers who will use our software. A client develops software that will call our functions. We define coworkers as programmers who will debug and upgrade our software. A coworker, possibly ourselves, develops, tests, and modifies our software.

Writing quality software has a lot to do with attitude. We should be embarrassed to ask our coworkers to make changes to our poorly written software. Since so much software development effort involves maintenance, we should create software modules that are easy to change. In other words, we should expect each piece of our code will be read by another engineer in the future, whose job it will be to make changes to our code. We might be tempted to quit a software project once the system is running, but the short time we might save by not organizing, documenting, and testing will be lost many times over in the future when it is time to update the code.

As project managers, we must reward good behavior and punish bad behavior. A company, in an effort to improve the quality of its software products, implemented the following policy.

The employees in the customer relations department receive a bonus for every software bug that they can identify. These bugs are reported to the software developers, who in turn receive a bonus for every bug they fix.

           

Checkpoint: Why did the above policy fail horribly?

We should demand of ourselves that we deliver bug-free software to our clients. Again, we should be embarrassed when our clients report bugs in our code. We should be mortified when other programmers find bugs in our code. There are a few steps we can take to facilitate this important aspect of software design.

Test it now. When we find a bug, fix it immediately. The longer we put off fixing a mistake, the more complicated the system becomes, making the mistake harder to find. Remember that bugs do not go away on their own, but we can make the system so complex that the bugs will manifest themselves in mysterious and obscure ways. For the same reason, we should completely test each module individually, before combining them into a larger system. We should not add new features before we are convinced the existing system is bug-free. In this way, we start with a working system, add features, and then debug this system until it is working again. This incremental approach makes it easier to track progress. It allows us to undo bad decisions, because we can always revert to a previously working system. Adding new features before the old ones are debugged is very risky. With this sloppy approach, we could easily reach the project deadline with 100% of the features implemented, but have a system that doesn’t run. In addition, once a bug is introduced, the longer we wait to remove it, the harder it will be to correct. This is particularly true when the bugs interact with each other. Conversely, with the incremental approach, when the project schedule slips, we can deliver a working system at the deadline that supports some of the features.

Maintenance Tip: Go from working system to working system.

Plan for testing. How to test each module should be considered at the start of a project. In particular, testing should be included as part of the design of both hardware and software components. Our testing and the client's usage go hand in hand. In particular, how we test the module will help the client understand the context and limitations of how our component is to be used. On the other hand, a clear understanding of how the client wishes to use our hardware/software component is critical for both its design and its testing.

Maintenance Tip: It is better to have some parts of the system that run with 100% reliability than to have the entire system with bugs.

 

Get help. Use whatever features are available for organization and debugging. Pay attention to warnings, because they often point to misunderstandings about data or functions. Misunderstood assumptions can cause bugs when the software is upgraded, or reused in a different context than originally conceived. Remember that computer time is a lot cheaper than programmer time.

Maintenance Tip: It is better to have a system that runs slowly than to have one that doesn’t run at all.

Deal with the complexity. In the early days of microcomputer systems, software size could be measured in hundreds of lines of source code using thousands of bytes of memory. These early systems, due to their small size, were inherently simple. The explosion of hardware technology (both in speed and size) has led to a similar increase in the size of software systems. Some people forecast that by the next decade, automobiles will have 10 million lines of code in their embedded systems. The only hope for success in a large software system will be to break it into simple modules. In most cases, the complexity of the problem itself cannot be avoided. E.g., there is just no simple way to get to the moon. Nevertheless, a complex system can be created out of simple components. A real creative effort is required to orchestrate simple building blocks into larger modules, which themselves are grouped to create even larger systems. Use your creativity to break a complex problem into simple components, rather than developing complex solutions to simple problems.

Observation: There are two ways of constructing a software design: one way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. C.A.R. Hoare, "The Emperor's Old Clothes," CACM, Feb. 1981.

 


7.4. Functions, Procedures, Methods, and Subroutines

A program module that performs a well-defined task can be packaged up and defined as a single entity. Functions in that module can be invoked whenever a task needs to be performed. Object-oriented high-level languages like C++ and Java define program modules as methods. Functions and procedures are defined in some high-level languages like Pascal, FORTRAN, and Ada. In these languages, functions return a parameter and procedures do not. Most high-level languages, however, define program modules as functions, whether they return a parameter or not. A subroutine is the assembly language version of a function. Consequently, subroutines may or may not have input or output parameters. Formally, there are two components to a subroutine: definition and invocation. The subroutine definition specifies the task to be performed. In other words, it defines what will happen when it is executed. The syntax for an assembly subroutine begins with a label, which will be the name of the subroutine, and ends with a return instruction. The definition of a subroutine includes a formal specification of its input parameters and output parameters. In well-written software, the task performed by a subroutine will be well-defined and logically complete. The subroutine invocation is inserted into the software system at the places where the task should be performed. We define software that invokes the subroutine as “the calling program” because it calls the subroutine. There are three parts to a subroutine invocation: pass input parameters, subroutine call, and accept output parameters. If there are input parameters, the calling program must establish the values for the input parameters before it calls the subroutine. A BL instruction is used to call the subroutine. After the subroutine finishes, and if there are output parameters, the calling program accepts the return value(s). In this chapter, we will pass parameters using the registers. If the register contains a value, the parameter is classified as call by value. If the register contains an address, which points to the value, then the parameter is classified as call by reference.

Checkpoint: What is the difference between call by value and call by reference?

For example, consider a subroutine that samples the 12-bit ADC, as drawn in Figure 7.1. An analog input signal is connected to ADC0. The details of how the ADC works will be presented later in the class, but for now we focus on defining and invoking subroutines. The execution sequence begins with the calling program setting up the input parameters. In this case, the calling program sets Register R0 equal to the channel number, MOV R0,#0. The instruction BL ADC_In will save the return address in the LR register and jump to the ADC_In subroutine. The subroutine performs a well-defined task. In this case, it takes the channel number in Register R0 and performs an analog to digital conversion, placing the digital representation of the analog input into Register R0. The BX LR instruction will move the return address into the PC, returning the execution thread to the instruction after the BL in the calling program. In this case, the output parameter in Register R0 contains the result of the ADC conversion. It is the responsibility of the calling program to accept the return parameter. In this case, it simply stores the result into variable n. In this example, both the input and output parameters are call by value.

Figure 7.1. The calling program invokes the ADC_In subroutine passing parameters in registers.
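In C the same invocation is a single statement, and the call-by-value versus call-by-reference distinction appears in the parameter types. The prototypes below are assumptions for illustration only; the actual ADC driver presented later in the class may use different names and formats.

// Call by value: the channel number and the result travel as values in registers.
unsigned long ADC_In(unsigned long channelNum);                    // assumed prototype

// Call by reference: the caller passes the address where the result should be stored.
void ADC_In_Ref(unsigned long channelNum, unsigned long *result);  // assumed prototype

unsigned long n;            // digital representation of the analog input

void CallingProgram(void){
  n = ADC_In(0);            // pass 0 by value, accept the return value into n
  ADC_In_Ref(0, &n);        // pass the address of n; the function writes *result
}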

 

The overall goal of modular programming is to enhance clarity. The smaller the task, the easier it will be to understand. Coupling is defined as the influence one module’s behavior has on another module. In order to make modules more independent we strive to minimize coupling. Obvious and appropriate examples of coupling are the input/output parameters explicitly passed from one module to another. A quantitative measure of coupling is the number of bytes per second (bandwidth) that are transferred from one module to another. On the other hand, information stored in public global variables can be quite difficult to track. In a similar way, shared accesses to I/O ports can also introduce unnecessary complexity. Public global variables cause coupling between modules that complicates the debugging process, because the modules may no longer be tested separately. On the other hand, we must use global variables to pass information into and out of an interrupt service routine and from one call to an interrupt service routine to the next call. When passing data into or out of an interrupt service routine, we group the functions that access the global into the same module, thereby making the global variable private. Another problem specific to embedded systems is the need for fast execution, coupled with the limited support for local variables. On many microcontrollers it is inefficient to implement local variables on the stack. Consequently, many programmers opt for the less elegant yet faster approach of global variables. Again, if we restrict access to these globals to functions in the same module, the global becomes private. It is poor design to pass data between modules through public global variables; it is better to use a well-defined abstract technique like a FIFO queue.
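A minimal sketch of making a global private follows, using a hypothetical ADC module as an example. Declaring the variable static restricts its scope to this one file (module), so other modules must go through the public access function; a FIFO queue built the same way would be an even better-defined interface.

// ADC.c -- hypothetical module illustrating a private global variable.
static unsigned long LastSample;            // private: visible only to functions in this file

void ADC_SaveSample(unsigned long sample){  // called from within this module (e.g., its ISR)
  LastSample = sample;
}
unsigned long ADC_GetSample(void){          // public accessor used by other modules
  return LastSample;
}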

We should assign a logically complete task to each module. The module is logically complete when it can be separated from the rest of the system and placed into another application. The interface design is extremely important. The interface to a module is the set of public functions that can be called and the formats for the input/output parameters of these functions. The interfaces determine the policies of our modules: “What does the module do?” In other words, the interfaces define the set of actions that can be initiated. The interfaces also define the coupling between modules. In general we wish to minimize the bandwidth of data passing between the modules yet maximize the number of modules. Of the following three objectives when dividing a software project into subtasks, it is really only the first one that matters:

            • Make the software project easier to understand

            • Increase the number of modules

            • Decrease the interdependency (minimize bandwidth between modules).

 

Checkpoint: List some examples of coupling.

We will illustrate the process of dividing a software task into modules with an abstract but realistic example. The overall goal of the example shown in Figure 7.2 is to sample data using an ADC, perform calculations on the data, and output results. The organic light emitting diode (OLED) could be used to display data to the external world. Notice the typical format of an embedded system in that it has some tasks performed once at the beginning, and it has a long sequence of tasks performed over and over. The structure of this example applies to many embedded systems such as a diagnostic medical instrument, an intruder alarm system, a heating/AC controller, a voice recognition module, an automotive emissions controller, or a military surveillance system. The left side of Figure 7.2 shows the complex software system defined as a linear sequence of ten steps, where each step represents many lines of assembly code. The linear approach to this program follows closely the linear sequence in which the processor executes instructions. This linear code, however close to the actual processor, is difficult to understand, hard to debug, and impossible to reuse for other projects. Therefore, we will attempt a modular approach considering the issues of functional abstraction, complexity abstraction, and portability in this example. The modular approach to this problem divides the software into three modules containing seven subroutines. In this example, assume the sequence Step4-Step5-Step6 causes data to be sorted. Notice that this sorting task is executed twice.

Figure 7.2. A complex software system is broken into three modules containing seven subroutines.

Functional abstraction encourages us to create a Sort subroutine allowing us to write the software once, but execute it from different locations. Complexity abstraction encourages us to organize the ten-step software into a main program with multiple modules, where each module has multiple subroutines. For example, assume the assembly instructions in Step1 cause the ADC to be initialized. Even though this code is executed only once, complexity abstraction encourages us to create an ADC_Init subroutine so the system is easier to understand and easier to debug. In a similar way assume Step2 initializes the OLED port, Step3 samples the ADC, the sequence Step7-Step8 performs an average, and Step10 outputs to the OLED. Therefore, each well-defined task is defined as a separate subroutine. The subroutines are then grouped into modules. For example, the ADC module is a collection of subroutines that operate the ADC. The complex behavior of the ADC is now abstracted into two easy to understand tasks: turn it on, and use it. In a similar way, the OLED module includes all functions that access the OLED. Again, at the abstract level of the main program, understanding how to use the OLED is a matter of knowing that we first turn it on and then we transmit data. The math module is a collection of subroutines to perform necessary calculations on the data. In this example, we assume sort and average will be private subroutines, meaning they can be called only by software within the math module and not by software outside the module. Making private subroutines is an example of “information hiding”, separating what the module does from how the module works. When we port a system, it means we take a working system and redesign it with some minor but critical change. The OLED device is used in this system to output results. We might be asked to port this system onto a device that uses an LCD in place of the OLED for its output. In this case, all we need to do is design, implement and test an LCD module with two subroutines LCD_Init and LCD_Out that function in a manner similar to the existing OLED routines. The modular approach performs the exact same ten steps in the exact same order. However, the modular approach is easier to debug, because first we debug each subroutine, then we debug each module, and finally we debug the entire system. The modular approach clearly supports code reuse. For example, if another system needs an ADC, we can simply use the ADC module software without having to debug it again.

Observation: When writing modular code, notice its two-dimensional aspect. Down the y-axis still represents time as the program is executed, but along the x-axis we now visualize a functional block diagram of the system showing its data flow: input, calculate, output.
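A sketch of what the math module might look like in C illustrates this information hiding. The function names, the use of an insertion sort, and the trimmed-mean calculation are assumptions for illustration; only the public function is visible to the main program, while Sort and Average stay private.

// Math.c -- hypothetical math module with private helper functions.
static void Sort(long *buf, int n){          // private: callable only inside Math.c
  for(int i = 1; i < n; i++){                // simple insertion sort
    long x = buf[i];
    int j = i - 1;
    while(j >= 0 && buf[j] > x){
      buf[j+1] = buf[j];
      j--;
    }
    buf[j+1] = x;
  }
}
static long Average(const long *buf, int n){ // private: average of n values
  long sum = 0;
  for(int i = 0; i < n; i++){
    sum += buf[i];
  }
  return sum/n;
}
long Math_Calc(long *buf, int n){            // public: the module's interface
  Sort(buf, n);                              // callers never see how it works
  return Average(&buf[1], n-2);              // trimmed mean, assumes n > 2
}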

 

7.5. Making Decisions

The previous section presented fundamental concepts and general approaches to solving problems on the computer. In the subsequent sections, detailed implementations will be presented.

7.5.1. Conditional if-then Statements

Decision making is an important aspect of software programming. Two values are compared and certain blocks of program are executed or skipped depending on the results of the comparison. In assembly language it is important to know the precision (e.g., 8-bit, 16-bit, 32-bit) and the format of the two values (e.g., unsigned, signed). It takes three steps to perform a comparison. You begin by reading the first value into a register. If the second value is not a constant, it must be read into a register, too. The second step is to compare the first value with the second value. You can use either a subtract instruction that sets the flags (SUBS) or a compare instruction (CMP or CMN). The CMP, CMN, and SUBS instructions set the condition code bits. The last step is a conditional branch.

Observation: Think of the three steps: 1) bring the first value into a register, 2) compare it to the second value, 3) conditional branch, Bxx (where xx is eq, ne, lo, ls, hi, hs, gt, ge, lt, or le). The branch will occur if (first is xx second).

In Programs 7.1 and 7.2, we assume G is a 32-bit unsigned variable. Program 7.1 contains two separate if-then structures involving testing for equal or not equal. It will call GEqual7 if G equals 7, and GNotEqual7 if G does not equal 7. When testing for equal or not equal, it doesn’t matter whether the numbers are signed or unsigned. However, it does matter whether they are 8-bit, 16-bit, or 32-bit, because the size determines which load instruction to use. To convert these examples to 16 bits, use the LDRH R0,[R2] instruction instead of the LDR R0,[R2] instruction. To convert these examples to 8 bits, use the LDRB R0,[R2] instruction instead of the LDR R0,[R2] instruction.

Assembly code:

    LDR R2, =G     ; R2 = &G
    LDR R0, [R2]   ; R0 = G
    CMP R0, #7     ; is G == 7 ?
    BNE next1      ; if not, skip
    BL  GEqual7    ; G == 7
next1

C code:

unsigned long G;
if(G == 7){
  GEqual7();
}

Assembly code:

    LDR R2, =G     ; R2 = &G
    LDR R0, [R2]   ; R0 = G
    CMP R0, #7     ; is G != 7 ?
    BEQ next2      ; if not, skip
    BL  GNotEqual7 ; G != 7
next2

C code:

if(G != 7){
  GNotEqual7();
}

Program 7.1. Conditional structures that test for equality (this works with signed and unsigned numbers).

When testing for greater than or less than, it does matter whether the numbers are signed or unsigned. Program 7.2 contains four separate unsigned if-then structures. In each case, the first step is to bring the first value into R0; the second step is to compare the first value with a second value; and the third step is to execute an unsigned branch Bxx. The branch will occur if the first unsigned value is xx the second unsigned value.

Assembly code:

    LDR R2, =G      ; R2 = &G
    LDR R0, [R2]    ; R0 = G
    CMP R0, #7      ; is G > 7?
    BLS next1       ; if not, skip
    BL  GGreater7   ; G > 7
next1

C code:

unsigned long G;
if(G > 7){
  GGreater7();
}

Assembly code:

    LDR R2, =G      ; R2 = &G
    LDR R0, [R2]    ; R0 = G
    CMP R0, #7      ; is G >= 7?
    BLO next2       ; if not, skip
    BL  GGreaterEq7 ; G >= 7
next2

C code:

if(G >= 7){
  GGreaterEq7();
}

Assembly code:

    LDR R2, =G      ; R2 = &G
    LDR R0, [R2]    ; R0 = G
    CMP R0, #7      ; is G < 7?
    BHS next3       ; if not, skip
    BL  GLess7      ; G < 7
next3

C code:

if(G < 7){
  GLess7();
}

Assembly code:

    LDR R2, =G      ; R2 = &G
    LDR R0, [R2]    ; R0 = G
    CMP R0, #7      ; is G <= 7?
    BHI next4       ; if not, skip
    BL  GLessEq7    ; G <= 7
next4

C code:

if(G <= 7){
  GLessEq7();
}

Program 7.2. Unsigned conditional structures.

It will call GGreater7 if G is greater than 7, GGreaterEq7 if G is greater than or equal to 7, GLess7 if G is less than 7, and GLessEq7 if G is less than or equal to 7. When comparing unsigned values, the instructions BHI, BLO, BHS, and BLS should follow the subtraction or comparison instruction. A conditional if-then is implemented by bringing the first number into a register, subtracting the second number, then using the branch instruction with complementary logic to skip over the body of the if-then. To convert these examples to 16 bits, use the LDRH R0,[R2] instruction instead of the LDR R0,[R2] instruction. To convert these examples to 8 bits, use the LDRB R0,[R2] instruction instead of the LDR R0,[R2] instruction.
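The signed/unsigned distinction can be demonstrated directly in C. In the sketch below, which is a small host-side demonstration rather than code from the book, the same 32-bit pattern 0xFFFFFFFF is 4,294,967,295 when interpreted as unsigned but -1 when interpreted as signed, so the compiler must generate an unsigned branch (such as BHI/BLS) for the first test and a signed branch (such as BLT/BGE) for the second.

#include <stdio.h>
#include <stdint.h>

int main(void){
  uint32_t u = 0xFFFFFFFF;  // unsigned interpretation: 4294967295
  int32_t  s = -1;          // same 32-bit pattern 0xFFFFFFFF, signed value -1
  if(u > 7){                // unsigned comparison
    printf("As unsigned, 0xFFFFFFFF is 4294967295, which is greater than 7\n");
  }
  if(s < 7){                // signed comparison
    printf("As signed, 0xFFFFFFFF is -1, which is less than 7\n");
  }
  return 0;
}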

Interactive Tool 7.1

If-Then Statement - The statements inside an if statement will execute once if the condition is true. If the condition is false, the program will skip those instructions. Choose two integers as variables a and b, press run, follow the flow of the program, and examine the output screen. You can repeat the process with different values.

C Code

#include <stdio.h>

volatile long a;
volatile long b;

int main(void){
  printf("Starting the Construct ...\n");
  if(a < b){
    printf("a is less than b\n");
  }
  printf("Ending the Construct ...\n");
  return 0;
}

Example 7.1. Assuming G1 is an 8-bit unsigned number, write software that sets G1=50 if G1 is greater than 50. In other words, we will force G1 into the range 0 to 50.

Solution: First, we draw a flowchart describing the desired algorithm, see Figure 7.3. Next, we restate the conditional as “skip over if G1 is less than or equal to 50”. To implement the assembly code we bring G1 into Register R0 using LDRB to load an unsigned byte, subtract 50, then branch to next if G1 is less than or equal to 50, as presented in Program 7.3. We will use an unsigned conditional branch because the data format is unsigned.

Figure 7.3. Flowchart of an if-then structure.

 

Assembly code:

    LDR R2, =G1    ; R2 = &G1
    LDRB R0, [R2]  ; R0 = G1
    CMP R0, #50    ; is G1 > 50?
    BLS next       ; if not, skip to end
    MOV R1, #50    ; R1 = 50
    STRB R1, [R2]  ; G1 = 50
next

C code:

unsigned char G1;
if(G1 > 50){
  G1 = 50;
}

 

A flowchart is a type of diagram that represents an algorithm, workflow or process, showing the steps as boxes of various kinds, and their order by connecting them with arrows. This diagrammatic representation illustrates a solution model to a given problem. Flowcharts are used in analyzing, designing, documenting or managing a process or program in various fields.[1]

Overview

Flowcharts are used in designing and documenting simple processes or programs. Like other types of diagrams, they help visualize what is going on and thereby help understand a process, and perhaps also find flaws, bottlenecks, and other less-obvious features within it. There are many different types of flowcharts, and each type has its own repertoire of boxes and notational conventions. The two most common types of boxes in a flowchart are:

  • a processing step, usually called activity, and denoted as a rectangular box
  • a decision, usually denoted as a diamond.

A flowchart is described as "cross-functional" when the page is divided into different swimlanes describing the control of different organizational units. A symbol appearing in a particular "lane" is within the control of that organizational unit. This technique allows the author to locate the responsibility for performing an action or making a decision correctly, showing the responsibility of each organizational unit for different parts of a single process.

Flowcharts depict certain aspects of processes and are usually complemented by other types of diagram. For instance, Kaoru Ishikawa defined the flowchart as one of the seven basic tools of quality control, next to the histogram, Pareto chart, check sheet, control chart, cause-and-effect diagram, and the scatter diagram. Similarly, in UML, a standard concept-modeling notation used in software development, the activity diagram, which is a type of flowchart, is just one of many different diagram types.

Nassi-Shneiderman diagrams and Drakon-charts are an alternative notation for process flow.

Common alternative names include: flow chart, process flowchart, functional flowchart, process map, process chart, functional process chart, business process model, process model, process flow diagram, work flow diagram, business flow diagram. The terms "flowchart" and "flow chart" are used interchangeably.

The underlying graph structure of a flowchart is a flow graph, which abstracts away node types, their contents and other ancillary information.

History

The first structured method for documenting process flow, the "flow process chart", was introduced by Frank and Lillian Gilbreth to members of the American Society of Mechanical Engineers (ASME) in 1921 in the presentation "Process Charts: First Steps in Finding the One Best Way to do Work".[2] The Gilbreths' tools quickly found their way into industrial engineering curricula. In the early 1930s, an industrial engineer, Allan H. Mogensen began training business people in the use of some of the tools of industrial engineering at his Work Simplification Conferences in Lake Placid, New York.

A 1944 graduate of Mogensen's class, Art Spinanger, took the tools back to Procter and Gamble where he developed their Deliberate Methods Change Program. Another 1944 graduate, Ben S. Graham, Director of Formcraft Engineering at Standard Register Industrial, adapted the flow process chart to information processing with his development of the multi-flow process chart to display multiple documents and their relationships.[3] In 1947, ASME adopted a symbol set derived from Gilbreth's original work as the "ASME Standard: Operation and Flow Process Charts."[4]

Douglas Hartree in 1949 explained that Herman Goldstine and John von Neumann had developed a flowchart (originally, diagram) to plan computer programs.[5] His contemporary account is endorsed by IBM engineers[6] and by Goldstine's personal recollections.[7] The original programming flowcharts of Goldstine and von Neumann can be seen in their unpublished report, "Planning and coding of problems for an electronic computing instrument, Part II, Volume 1" (1947), which is reproduced in von Neumann's collected works.[8]

Flowcharts became a popular means for describing computer algorithms. The popularity of flowcharts decreased in the 1970s when interactive computer terminals and third-generation programming languages became common tools for computer programming. Algorithms can be expressed much more concisely as source code in such languages. Often pseudo-code is used, which uses the common idioms of such languages without strictly adhering to the details of a particular one.

Nowadays flowcharts are still used for describing computer algorithms.[9] Modern techniques such as UML activity diagrams and Drakon-charts can be considered to be extensions of the flowchart.

Types

Sterneckert (2003) suggested that flowcharts can be modeled from the perspective of different user groups (such as managers, system analysts and clerks) and that there are four general types:[10]

  • Document flowcharts, showing controls over a document-flow through a system
  • Data flowcharts, showing controls over a data-flow in a system
  • System flowcharts, showing controls at a physical or resource level
  • Program flowchart, showing the controls in a program within a system

Notice that every type of flowchart focuses on some kind of control, rather than on the particular flow itself.[10]

However, there are several of these classifications. For example, Andrew Veronis (1978) named three basic types of flowcharts: the system flowchart, the general flowchart, and the detailed flowchart.[11] That same year Marilyn Bohl (1978) stated "in practice, two kinds of flowcharts are used in solution planning: system flowcharts and program flowcharts...".[12] More recently Mark A. Fryman (2001) stated that there are more differences: "Decision flowcharts, logic flowcharts, systems flowcharts, product flowcharts, and process flowcharts are just a few of the different types of flowcharts that are used in business and government".[13]

In addition, many diagram techniques exist that are similar to flowcharts but carry a different name, such as UML activity diagrams.

Building blocks

Common symbols

The American National Standards Institute (ANSI) set standards for flowcharts and their symbols in the 1960s.[14] The International Organization for Standardization (ISO) adopted the ANSI symbols in 1970.[15] The current standard was revised in 1985.[16] Generally, flowcharts flow from top to bottom and left to right.[17]

Other symbols

The ANSI/ISO standards include symbols beyond the basic shapes. Some are:[17][18]

  • Data File or Database represented by a cylinder (disk drive).
  • Document represented as a rectangle with a wavy base.
  • Manual input represented by quadrilateral, with the top irregularly sloping up from left to right like the side view of a keyboard.
  • Manual operation represented by a trapezoid with the longest parallel side at the top, to represent an operation or adjustment to process that can only be made manually.
  • Parallel Mode represented by two horizontal lines at the beginning or ending of simultaneous operations[17]
  • Preparation or Initialization represented by an elongated hexagon, originally used for steps like setting a switch or initializing a routine.

For parallel and concurrent processing the Parallel Mode horizontal lines[19] or a horizontal bar[20] indicate the start or end of a section of processes that can be done independently:

  • At a fork, the process creates one or more additional processes, indicated by a bar with one incoming path and two or more outgoing paths.
  • At a join, two or more processes continue as a single process, indicated by a bar with several incoming paths and one outgoing path. All processes must complete before the single process continues.[20]

Software

Diagramming

Any drawing program can be used to create flowchart diagrams, but these will have no underlying data model to share data with databases or other programs such as project management systems or spreadsheets. Some tools such as yEd, Inkscape and Microsoft Visio offer special support for flowchart drawing. Many software packages exist that can create flowcharts automatically, either directly from a programming language source code, or from a flowchart description language.

There are several applications and visual programming languages[21] that use flowcharts to represent and execute programs. Generally these are used as teaching tools for beginner students. Examples include Flowgorithm, Raptor, LARP, Visual Logic, and VisiRule.


References

  1. SEVOCAB: Software Systems Engineering Vocabulary. Term: Flow chart. Retrieved 31 July 2008.
  2. Frank Bunker Gilbreth, Lillian Moller Gilbreth (1921). Process Charts. American Society of Mechanical Engineers.
  3. Graham, Jr., Ben S. (10 June 1996). "People come first". Keynote Address at Workflow Canada.
  4. American Society of Mechanical Engineers (1947). ASME Standard: Operation and Flow Process Charts. New York.
  5. Hartree, Douglas (1949). Calculating Instruments and Machines. The University of Illinois Press. p. 112.
  6. Bashe, Charles (1986). IBM's Early Computers. The MIT Press. p. 327.
  7. Goldstine, Herman (1972). The Computer from Pascal to Von Neumann. Princeton University Press. pp. 266–267. ISBN 0-691-08104-2.
  8. Taub, Abraham (1963). John von Neumann Collected Works. Vol. 5. Macmillan. pp. 80–151.
  9. Bohl, Rynn (2007). Tools for Structured and Object-Oriented Design. Prentice Hall.
  10. Alan B. Sterneckert (2003). Critical Incident Management. p. 126.
  11. Andrew Veronis (1978). Microprocessors: Design and Applications. p. 111.
  12. Marilyn Bohl (1978). A Guide for Programmers. p. 65.
  13. Mark A. Fryman (2001). Quality and Process Improvement. p. 169.
  14. Gary B. Shelly; Misty E. Vermaat (2011). Discovering Computers, Complete: Your Interactive Guide to the Digital World. Cengage Learning. pp. 691–693. ISBN 1-111-53032-7.
  15. Harley R. Myler (1998). "2.3 Flowcharts". Fundamentals of Engineering Programming with C and Fortran. Cambridge University Press. pp. 32–36. ISBN 978-0-521-62950-8.
  16. "ISO 5807:1985". International Organization for Standardization. February 1985. Retrieved 23 July 2017.
  17. Flowcharting Techniques GC20-8152-1. IBM. March 1970. p. 10.
  18. "What do the different flowchart shapes mean?". RFF Electronics. Retrieved 23 July 2017.
  19. Jonathan W. Valvano (2011). Embedded Microcomputer Systems: Real Time Interfacing. Cengage Learning. pp. 131–132. ISBN 1-111-42625-2.
  20. Robbie T. Nakatsu (2009). Reasoning with Diagrams: Decision-Making and Problem-Solving with Diagrams. John Wiley & Sons. pp. 68–69. ISBN 978-0-470-40072-2.
  21. Myers, Brad A. (1986). "Visual programming, programming by example, and program visualization: a taxonomy". ACM SIGCHI Bulletin, Vol. 17, No. 4. ACM.
