Scheduling specifies when labor, equipment, and facilities are needed to produce a product or provide a service. It is the last stage of planning before production takes place. The scheduling function differs considerably based on the type of operation:
What makes scheduling so difficult in a job shop is the variety of jobs (or patients) that are processed, each with distinctive routing and processing requirements. In addition, although the volume of each customer order may be small, there are probably a great number of different orders in the shop at any one time. This necessitates planning for the production of each job as it arrives, scheduling its use of limited resources, and monitoring its progress through the system.
This chapter concentrates on scheduling issues for job shop production. We also examine one of the most difficult scheduling problems for services--employee scheduling.
There are many possible objectives in constructing a schedule, including
Job shop scheduling is also known as shop floor control (SFC), production control, and production activity control (PAC). Regardless of their primary scheduling objective, manufacturers typically have a production control department whose responsibilities consist of
Loading is the process of assigning work to limited resources. Many times an operation can be performed by various persons, machines, or work centers but with varying efficiencies. If there is enough capacity, each worker should be assigned to the task that he or she performs best, and each job to the machine that can process it most efficiently. In effect, that is what happens when CRP generates a load profile for each machine center. The routing file used by CRP lists the machine that can perform the job most efficiently first. If no overloads appear in the load profile, then production control can proceed to the next task of sequencing the work at each center. However, when resource constraints produce overloads in the load profile, production control must examine the list of jobs initially assigned and decide which jobs to reassign elsewhere.
The problem of determining how best to allocate jobs to machines or workers to tasks can be solved with the assignment method of linear programming.
The assignment method is a specialized linear programming solution procedure. Given a table of jobs and machines, it develops an opportunity cost matrix for assigning particular jobs to particular machines. With this technique, only one job may be assigned to each machine. The procedure is as follows:
Southern Cans packages processed food into cans for a variety of customers. The factory has four multipurpose cookers and canning lines that can pressure-cook, vacuum-pack, and apply labels to just about any type of food or size of can. The processing equipment was purchased some years apart, and some of the cookers are faster and more efficient than others. Southern Cans has four orders that need to be run today for a particular customer: canned beans, canned peaches, canned tomatoes, and canned corn. The customer is operating under a just-in-time production system and needs the mixed order of canned food tomorrow. Southern Cans has estimated the number of hours required to pressure-cook, process, and can each type of food by type of cooker as follows:
Cover all zeros:
Since the number of lines does not equal the number of rows, continue.
Modify the matrix:
Cover all zeros:
Since the number of lines equals the number of rows, we have reached the final solution.
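For small problems, the assignment produced by these steps can be checked by brute force over all job-to-machine permutations. The cost matrix below is hypothetical (the example's processing-time table is not reproduced here); a minimal sketch in Python:

```python
from itertools import permutations

def best_assignment(cost):
    """Exhaustively search all one-to-one job-to-machine assignments and
    return the minimum-cost one.  Feasible only for small matrices
    (n! permutations), but handy for verifying a Hungarian-method solution."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):  # perm[job] = machine
        total = sum(cost[job][perm[job]] for job in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

# Hypothetical cooking-time matrix (hours): rows = foods, columns = cookers
cost = [[10, 5, 6, 10],
        [ 6, 2, 4,  6],
        [ 7, 6, 5,  6],
        [ 9, 5, 4, 10]]

assignment, total = best_assignment(cost)
print(assignment, total)  # each food's cooker, and the total hours
```

For a 4 x 4 table the search examines only 24 permutations, so this check is instantaneous; for larger problems the Hungarian-style steps above (or a library routine) are the practical choice.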
Assignment models can be solved with POM for Windows. Solutions are given in terms of minimizing cost or maximizing profit, although the solution could represent minimized time, maximized quality levels, or other variables. The solution can be provided for minimizing the sum of assignment values or minimizing the worst value. The latter case, called the bottleneck problem, is useful in situations like Example 14.1 where machines may be operating simultaneously. In that case the completion time of a group of jobs is the maximum completion time of the individual jobs rather than the sum of completion times. The assignment method produces good, but not necessarily optimum, results when minimizing a maximum value.
When more than one job is assigned to a machine or activity, the operator needs to know the order in which to process the jobs. The process of prioritizing jobs is called sequencing. If no particular order is specified, the operator would probably process the job that arrived first. This default sequence is called first-come, first-served (FCFS). Or, if jobs are stacked upon arrival to a machine, it might be easier to process the job first that arrived last and is now on top of the stack. This is called last-come, first-served (LCFS) sequencing.
Another common approach is to process the job first that is due the soonest or the job that has the highest customer priority. These are known as earliest due date (DDATE) and highest customer priority (CUSTPR) sequencing. Operators may also look through a stack of jobs to find one with a similar setup to the job that is currently being processed (SETUP). That would minimize the downtime of the machine and make the operator's job easier.
Variations on the DDATE rule include minimum slack (SLACK) and smallest critical ratio (CR). SLACK considers the work remaining to be performed on a job as well as the time remaining (until the due date) to perform that work. Jobs are processed first that have the least difference (or slack) between the two, as follows:
SLACK = (due date - today's date) - (remaining processing time)
The critical ratio uses the same information as SLACK but arranges it in ratio form so that scheduling performance can be easily assessed. Mathematically, the CR is calculated as follows:

CR = (due date - today's date) / (remaining processing time) = time remaining / work remaining
If the work remaining is greater than the time remaining, the critical ratio will be less than 1. If the time remaining is greater than the work remaining, the critical ratio will be greater than 1. If the time remaining equals work remaining, the critical ratio exactly equals 1. The critical ratio allows us to make the following statements about our schedule:
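Under these definitions, SLACK and CR are one-line computations. A minimal sketch with a hypothetical job (the dates and times here are illustrative, not from the chapter's examples):

```python
def slack(due_date, today, remaining_work):
    """SLACK = time remaining until the due date minus work remaining."""
    return (due_date - today) - remaining_work

def critical_ratio(due_date, today, remaining_work):
    """CR = time remaining / work remaining.  CR < 1 means the job is
    behind schedule, CR > 1 ahead of schedule, CR = 1 exactly on time."""
    return (due_date - today) / remaining_work

# Hypothetical job: due on day 20, today is day 10, 8 days of work left
print(slack(20, 10, 8))           # 2 days of slack
print(critical_ratio(20, 10, 8))  # 1.25 -> ahead of schedule
```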
Other sequencing rules examine processing time at a particular operation and order the work either by shortest processing time (SPT) or longest processing time (LPT). LPT assumes long jobs are important jobs and is analogous to the strategy of doing larger tasks first to get them out of the way. SPT focuses instead on shorter jobs and is able to complete many more jobs earlier than LPT. With either rule, some jobs may be inordinately late because they are always put at the back of a queue.
All these "rules" for arranging jobs in a certain order for processing seem reasonable. We might wonder which methods are best or if it really matters which jobs are processed first anyway. Perhaps a few examples will help answer those questions.
The simplest sequencing problem consists of a queue of jobs at one machine or process. No new jobs arrive to the machine during the analysis, processing times and due dates are fixed, and setup time is considered negligible. For this scenario, the completion time (also called flow time) of each job will differ depending on its place in the sequence, but the overall completion time for the set of jobs (called the makespan) will not change. Tardiness measures the difference between a job's due date and its completion time for those jobs completed after their due date. Even in this simple case, there is no sequencing rule that optimizes both processing efficiency and due date performance. Let us consider an example.
Today is the morning of October 1. Because of the approaching holiday season, Joe Palotty is scheduled to work 7 days a week for the next 2 months. October's work for Joe consists of five jobs, A, B, C, D, and E. Job A takes 5 days to complete and is due October 10, job B takes 10 days to complete and is due October 15, job C takes 2 days to process and is due October 5, job D takes 8 days to process and is due October 12, and job E, which takes 6 days to process, is due October 8.
There are 120 possible sequences for the five jobs. Evaluating every one of them by hand would be impractical, so let's try some simple sequencing rules. Sequence the jobs by: (a) first-come, first-served (FCFS), (b) earliest due date (DDATE), (c) minimum slack (SLACK), (d) smallest critical ratio (CR), and (e) shortest processing time (SPT). Determine the completion time and tardiness of each job under each sequencing rule. Should Joe process his work as is--first-come, first-served? If not, what sequencing rule would you recommend to Joe?
Prepare a table for each sequencing rule. Start the first job at time 0 (since today is the beginning of October 1). Completion time is the sum of the start time and the processing time. The start time of the next job is the completion time of the previous job.
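These tables can also be generated programmatically. The sketch below uses the five jobs from the example and compares FCFS, SPT, and DDATE (the other rules follow the same pattern):

```python
# Joe's five jobs: (name, processing time in days, due date in October)
jobs = [("A", 5, 10), ("B", 10, 15), ("C", 2, 5), ("D", 8, 12), ("E", 6, 8)]

def schedule(ordered):
    """Return (name, completion time, tardiness) for each job in sequence.
    Completion time = start time + processing time; the start time of the
    next job is the completion time of the previous job."""
    t, result = 0, []
    for name, proc, due in ordered:
        t += proc
        result.append((name, t, max(0, t - due)))
    return result

fcfs  = schedule(jobs)                               # as the jobs arrived
spt   = schedule(sorted(jobs, key=lambda j: j[1]))   # shortest processing time
ddate = schedule(sorted(jobs, key=lambda j: j[2]))   # earliest due date

for rule, seq in [("FCFS", fcfs), ("SPT", spt), ("DDATE", ddate)]:
    total_tardy = sum(tardy for _, _, tardy in seq)
    print(rule, [n for n, _, _ in seq], "total tardiness =", total_tardy)
```

Consistent with the analytical results discussed later in this section, SPT produces the lowest mean flow time for these jobs, while DDATE produces the lowest total tardiness.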
All the sequencing rules complete the month's work by October 31, as planned. However, no sequencing rule is able to complete all jobs on time. The performance of FCFS is either met or exceeded by DDATE and SPT. Thus, Joe should take the time to sequence this month's work.
Whether Joe sequences his work by DDATE or SPT depends on the objectives of the company for whom he works. The particular jobs that are tardy may also make a difference.
Are the preceding results a function of this particular example, or are they indicative of the types of results we will get whenever these rules are applied? Analytically, we can prove that for a set number of jobs to be processed on one machine, the SPT sequencing rule will minimize mean job completion time (also known as flowtime) and minimize mean number of jobs in the system. On the other hand, the DDATE sequencing rule will minimize mean tardiness and maximum tardiness. No definitive statements can be made concerning the performance of the other sequencing rules.
Since few factories consist of just one process, we might wonder if techniques exist that will produce an optimal sequence for any number of jobs processed through more than one machine or process. Johnson's rule finds the fastest way to process a series of jobs through a two-machine system in which every job follows the same sequence through two machines. Based on a variation of the SPT rule, it requires that the sequence be "mapped out" to determine the final completion time, or makespan, for the set of jobs. The procedure is as follows:
Fine Restorations has received a rush order to refinish five carousel animals: an alligator, a bear, a cat, a deer, and an elephant. The restoration involves two major processes: sanding and painting. Mr. Johnson takes care of the sanding; his son does the painting. The time required for each refinishing job differs by the state of disrepair and degree of detail of each animal. Given the following processing times (in hours), determine the order in which the jobs should be processed so that the rush order can be completed as soon as possible.
The smallest processing time, 3 hours, occurs at process 2 for job C, so we place job C as near to the end of the sequence as possible. C is now eliminated from the job list.
The next smallest time is 5 hours. It occurs at process 1 for job E, so we place job E as near to the beginning of the sequence as possible. Job E is eliminated from the job list.
The next smallest time is 6 hours. It occurs at process 1 for job A and at process 2 for job B. Thus, we place job A as near to the beginning of the sequence as possible and job B as near to the end of the sequence as possible. Jobs A and B are eliminated from the job list.
The only job remaining is job D. It is placed in the only available slot, in the middle of the sequence.
This sequence will complete these jobs faster than any other sequence. The following bar charts (called Gantt charts) are used to determine the makespan or final completion time for the set of five jobs. Notice that the sequence of jobs (E, A, D, B, C) is the same for both processes and that a job cannot begin at process 2 until it has been completed at process 1. Also, a job cannot begin at process 2 if another job is currently in process. Time periods during which a job is being processed are labeled with the job's letter. The shaded areas represent idle time.
The completion time for the set of five jobs is 41 hours. Note that although Johnson's rule minimizes makespan and idle time, it does not consider job due dates in constructing a sequence, so there is no attempt to minimize job tardiness.
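Johnson's rule and the makespan calculation are straightforward to sketch in code. The (sanding, painting) hours below are hypothetical values chosen to be consistent with the steps described above, since the original table is not reproduced here:

```python
def johnsons_rule(times):
    """Johnson's rule for n jobs on two machines in series.
    times: {job: (machine1_time, machine2_time)}.  Jobs whose smaller time
    is on machine 1 go to the front of the sequence in increasing order of
    that time; jobs whose smaller time is on machine 2 go to the back in
    decreasing order of that time."""
    front = sorted((j for j, (m1, m2) in times.items() if m1 <= m2),
                   key=lambda j: times[j][0])
    back = sorted((j for j, (m1, m2) in times.items() if m1 > m2),
                  key=lambda j: times[j][1], reverse=True)
    return front + back

def makespan(sequence, times):
    """Final completion time: machine 2 cannot start a job before machine 1
    finishes it, or before machine 2 finishes the previous job."""
    t1 = t2 = 0
    for job in sequence:
        m1, m2 = times[job]
        t1 += m1
        t2 = max(t1, t2) + m2
    return t2

# Hypothetical (sanding, painting) hours consistent with the steps above
times = {"A": (6, 9), "B": (9, 6), "C": (7, 3), "D": (8, 11), "E": (5, 7)}
seq = johnsons_rule(times)
print(seq, makespan(seq, times))
```

With these illustrative times the function reproduces the sequence E, A, D, B, C and a 41-hour makespan, matching the result described in the example.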
As sequencing problems grow in size and complexity, they become difficult to solve by hand. POM for Windows performs FCFS, SPT, LPT, SLACK, and CR sequencing for one-machine problems and Johnson's rule sequencing for two-machine problems.
In a real-world job shop, jobs follow different routes through a facility that consists of many different machine centers or departments. A small job shop may have three or four departments; a large job shop may have fifty or more. From several to several hundred jobs may be circulating the shop at any given time. New jobs are released into the shop daily and placed in competition with existing jobs for priority in processing. Queues form and dissipate as jobs move through the system. A dispatch list that shows the sequence in which jobs are to be processed at a particular machine may be valid at the beginning of a day or week but become outdated as new jobs arrive to the system. Some jobs may have to wait to be assembled with others before continuing to be processed. Delays in completing operations can cause due dates to be revised and schedules changed.
In this enlarged setting, the types of sequencing rules used can be expanded. We can still use simple sequencing rules such as SPT, FCFS, and DDATE, but we can also conceive of more complex, or global, rules. We may use FCFS to describe the arrival of jobs to a particular machine but first-in-system, first-served (FISFS) to differentiate the job's release into the system. Giving a job top priority at one machine only to have it endure a lengthy wait at the next machine seems fruitless, so we might consider looking ahead to the next operation and sequencing the jobs in the current queue by smallest work-in-next-queue (WINQ).
We can create new rules such as fewest number of operations remaining (NOPN) or slack per remaining operation (S/OPN), which require updating as jobs progress through the system. Remaining work (RWK) is a variation of SPT that processes jobs by the smallest total processing time for all remaining operations, not just the current operation. Any rule that has a remaining work component, such as SLACK or CR, needs to be updated as more operations of a job are completed. Thus, we need a mechanism for keeping track of and recording job progress. Recall that MRP systems can be used to change due dates, release orders, and, in general, coordinate production. Many of the rules described in this section are options in the shop floor module of standard MRP packages. Critical ratio is especially popular for use in conjunction with MRP.
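As an illustration of such a dynamic rule, slack per remaining operation (S/OPN) divides a job's slack by the number of operations still to be performed, and must be recomputed each time an operation completes. A minimal sketch with hypothetical job data:

```python
def s_opn(due_date, today, remaining_op_times):
    """Slack per remaining operation: SLACK divided by the number of
    operations still to be performed.  Must be recomputed as each
    operation of the job is completed."""
    remaining_work = sum(remaining_op_times)
    slack = (due_date - today) - remaining_work
    return slack / len(remaining_op_times)

# Hypothetical job: due day 30, today is day 10, three operations remain
# requiring 4, 6, and 5 days of processing
print(s_opn(30, 10, [4, 6, 5]))  # (20 - 15) slack spread over 3 operations
```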
The complexity and dynamic nature of the scheduling environment precludes the use of analytical solution techniques. The most popular form of analysis for these systems is simulation. Academia has especially enjoyed creating and testing sequencing rules in simulations of hypothetical job shops. One early simulation study alone examined ninety-two different sequencing rules. Although no optimum solutions have been identified in these simulation studies, they have produced some general guidelines for when certain sequencing rules may be appropriate. Here are a few of their suggestions:
In a job shop environment where jobs follow different paths through the shop, visit many different machine centers, and compete for similar resources, it is not always easy to keep track of the status of a job. When jobs are first released to the shop, it is relatively easy to observe the queue that they join and predict when their initial operations might be completed. As the job progresses, however, or the shop becomes more congested, it becomes increasingly difficult to follow the job through the system. Competition for resources (resulting in long queues), machine breakdowns, quality problems, and setup requirements are just a few of the things that can delay a job's progress.
Shop paperwork, sometimes called a work package, travels with a job to specify what work needs to be done at a particular work center and where the item should be routed next. Workers are usually required to sign off on a job, indicating the work they have performed either manually on the work package or electronically through a PC located on the shop floor. Bar code technology has made this task easier by eliminating much of the tedium and errors of entering the information by computer keyboard. In its simplest form, the bar code is attached to the work package, which the worker reads with a wand at the beginning and end of his or her work on the job. In other cases, the bar code is attached to the pallet or crate that carries the items from work center to work center. In this instance, the bar code is read automatically as it enters and leaves the work area. The time a worker spends on each job, the results of quality checks or inspections, and the utilization of resources can also be recorded in a similar fashion.
For the information gathered at each work center to be valuable, it must be up-to-date, accurate, and accessible to operations personnel. The monitoring function performed by production control takes this information and transforms it into various reports for workers and managers to use. Progress reports can be generated to show the status of individual jobs, the availability or utilization of certain resources, and the performance of individual workers or work centers. Exception reports may be generated to highlight deficiencies in certain areas, such as scrap, rework, shortages, anticipated delays, and unfilled orders. Hot lists show which jobs receive the highest priority and must be done immediately. A well-run facility will produce fewer exception reports and more progress reports. In the next two sections we describe two such progress reports, the Gantt chart and the input/output control chart.
Gantt charts, used to plan or map out work activities, can also be used to monitor a job's progress against the plan. Gantt charts can display both planned and completed activities against a time scale. In this figure, the dashed line indicating today's date crosses over the schedules for job 12A, job 23C, and job 32B. From the chart we can quickly see that job 12A is exactly on schedule because the bar monitoring its completion exactly meets the line for the current date. Job 23C is ahead of schedule and job 32B is behind schedule.
Gantt charts have been used since the early 1900s and are still popular today. They may be created and maintained by computer or by hand. In some facilities, Gantt charts consist of large scheduling boards (the size of several bulletin boards) with magnetic strips, pegs, or string of different colors that mark job schedules and job progress for the benefit of an entire plant. Gantt charts are a common feature of project management software, such as Microsoft Project.
Input/output (I/O) control monitors the input to and output from each work center. Prior to such analysis, it was common to examine only the output from a work center and to compare the actual output with the output planned in the shop schedule. Using that approach in a job shop environment in which the performance of different work centers is interrelated may result in erroneous conclusions about the source of a problem. Reduced output at one point in the production process may be caused by problems at the current work center, but it may also be caused by problems at previous work centers that feed the current work center. Thus, to identify more clearly the source of a problem, the input to a work center must be compared with the planned input, and the output must be compared with the planned output. Deviations between planned and actual values are calculated, and their cumulative effects are observed. The resulting backlog or queue size is monitored to ensure that it stays within a manageable range.
The input rate to a work center can be controlled only for the initial operations of a job. These first work centers are often called gateway work centers, because the majority of jobs must pass through them before subsequent operations are performed. Input to later operations, performed at downstream work centers, is difficult to control because it is a function of how well the rest of the shop is operating--that is, where queues are forming and how smoothly jobs are progressing through the system. The deviation of planned to actual input for downstream work centers can be minimized by controlling the output rates of feeding work centers. The use of input/output reports can best be illustrated with an example.
The following information has been compiled in an input/output report for work center 5. Complete the report and interpret the results.
The input/output report has planned a level production of 75 units per period for work center 5. This is to be accomplished by working off the backlog of work and steadily increasing the input of work.
The report is completed by calculating the deviation of (actual-planned) for both inputs and outputs and then summing the values in the respective planned, actual, and deviation rows. The initial backlog (at the beginning of period 1) is 30 units. Subsequent backlogs are calculated by subtracting each period's actual output from the sum of its actual input and previous backlog.
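The report's arithmetic can be sketched as follows. The planned output of 75 units per period and the initial backlog of 30 units come from the example; the remaining figures are hypothetical, since the report itself is not reproduced here:

```python
def io_report(planned_in, actual_in, planned_out, actual_out, initial_backlog):
    """Compute deviations (actual - planned) for inputs and outputs, and the
    running backlog: backlog = previous backlog + actual input - actual output."""
    dev_in = [a - p for a, p in zip(actual_in, planned_in)]
    dev_out = [a - p for a, p in zip(actual_out, planned_out)]
    backlog, backlogs = initial_backlog, []
    for a_in, a_out in zip(actual_in, actual_out):
        backlog = backlog + a_in - a_out
        backlogs.append(backlog)
    return dev_in, dev_out, backlogs

# Hypothetical four-period data for a work center planning 75 units of
# output per period, starting with a backlog of 30 units
planned_in = [60, 65, 70, 75]
actual_in = [55, 60, 60, 65]
planned_out = [75, 75, 75, 75]
actual_out = [75, 70, 60, 65]

dev_in, dev_out, backlogs = io_report(planned_in, actual_in,
                                      planned_out, actual_out, 30)
print(dev_in, dev_out, backlogs)
```

In this illustrative data the backlog is exhausted by period 2, after which output is limited by the shortfall in input from feeding work centers, which is exactly the pattern the report is designed to reveal.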
The completed input/output report shows that work center 5 did not process all the jobs that were available during the four periods; therefore, the desired output rate was not achieved. This can be attributed to a lower-than-expected input of work from feeding work centers. The I/O reports from those work centers need to be examined to locate the source of the problem.
Input/output control provides the information necessary to regulate the flow of work to and from a network of work centers. Increasing the capacity of a work center that is processing all the work available to it will not increase output. The source of the problem needs to be identified. Excessive queues, or backlogs, are one indication that bottlenecks exist. To alleviate bottleneck work centers, the problem causing the backlog can be worked on, the capacity of the work center can be adjusted, or input to the work center can be reduced. Increasing the input to a bottleneck work center will not increase the center's output. It will merely clog the system further and create longer queues of work-in-process.
The process for scheduling that we have described thus far in this chapter (loading work into work centers, leveling the load, sequencing the work, and monitoring its progress) is called infinite scheduling. The term infinite is used because the initial loading process assumes infinite capacity. Leveling and sequencing decisions are made after overloads or underloads have been identified. This iterative process is time-consuming and not very efficient.
An alternative approach to scheduling called finite scheduling assumes a fixed maximum capacity and will not load the resource beyond its capacity. Loading and sequencing decisions are made at the same time, so that the first jobs loaded onto a work center are of highest priority. Any jobs remaining after the capacity of the work center or resource has been reached are of lower priority and are scheduled for later time periods. This approach is easier than the infinite scheduling approach, but it will be successful only if the criteria for choosing the work to be performed, as well as capacity limitations, can be expressed accurately and concisely.
Finite scheduling systems use a variety of methods to develop their schedules, including mathematical programming, network analysis, simulation, and expert systems or other forms of artificial intelligence. Because the scheduling system is making the decisions and not the human scheduler, companies may find it difficult to purchase a system off the shelf that can embody their specific manufacturing environment or can be readily updated as changes in the environment occur. Finite schedulers are becoming more popular as software systems become more adaptable and easier to use and as manufacturing environments are simplified and are better understood. There are several finite schedulers available. One of the oldest is IBM's CAPOSS (Capacity Planning and Operations Sequencing System). ISIS, developed at Carnegie-Mellon, was one of the first schedulers to use artificial intelligence. Another prominent finite scheduling system is synchronous manufacturing.
In the 1970s, an Israeli physicist named Eliyahu Goldratt responded to a friend's request for help in scheduling his chicken coop business. Lacking a background in manufacturing or production theory, Dr. Goldratt took a commonsense, intuitive approach to the scheduling problem. He developed a software system that used mathematical programming and simulation to create a schedule that realistically considered the constraints of the manufacturing system. The software produced good schedules quickly and was marketed in the early 1980s in the United States. After more than 100 firms had successfully used the scheduling system (called OPT), the creator sold the rights to the software and began marketing the theory behind the software instead. He called his approach to scheduling the theory of constraints. General Motors and other manufacturers call its application synchronous manufacturing.
Decision making in manufacturing is often difficult because of the size and complexity of the problems faced. Dr. Goldratt's first insight into the scheduling problem led him to simplify the number of variables considered. He learned early that manufacturing resources typically are not used evenly. Instead of trying to balance the capacity of the manufacturing system, he decided that most systems are inherently unbalanced and that he would try to balance the flow of work through the system instead. He identified resources as bottleneck or nonbottleneck and observed that the flow through the system is controlled by the bottleneck resources. These resources should always have material to work on, should spend as little time as possible on nonproductive activities (e.g., setups, waiting for work), should be fully staffed, and should be the focus of improvement or automation efforts. Goldratt pointed out that an hour's worth of production lost at a bottleneck reduces the output of the system by the same amount of time, whereas an hour lost at a nonbottleneck may have no effect on system output.
From this realization, Goldratt was able to simplify the scheduling problem significantly. He concentrated initially on scheduling production at bottleneck resources and then scheduled the nonbottleneck resources to support the bottleneck activities. Thus, production is synchronized, or "in sync," with the needs of the bottleneck and the system as a whole.
Goldratt's second insight into manufacturing concerned the concept of lot sizes or batch sizes. Goldratt saw no reason for fixed batch sizes. He differentiated between the quantity in which items are produced, called the process batch, and the quantity in which the items are transported, called the transfer batch. Ideally, items should be transferred in lot sizes of one. The process batch size for bottlenecks should be large, to eliminate the need for setups. The process batch size for nonbottlenecks can be small because time spent in setups for nonbottlenecks does not affect the rest of the system. The following example illustrates these concepts.
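The effect of the transfer batch size can be seen with a small calculation. Assume (hypothetically) 100 units that each need 1 minute at two sequential operations:

```python
def completion_time(units, op1_time, op2_time, transfer_batch):
    """Time to finish all units on two sequential operations when items
    move between them in batches of size transfer_batch.  Operation 2
    cannot start a batch before operation 1 finishes it, or before
    operation 2 finishes the previous batch."""
    t1 = t2 = 0
    for _ in range(units // transfer_batch):
        t1 += transfer_batch * op1_time
        t2 = max(t1, t2) + transfer_batch * op2_time
    return t2

# Hypothetical data: 100 units, 1 minute per unit at each operation
print(completion_time(100, 1, 1, 100))  # transfer the whole process batch at once
print(completion_time(100, 1, 1, 1))    # one-piece transfer
```

Transferring the whole batch at once takes 200 minutes, while one-piece transfer overlaps the two operations and finishes in 101 minutes, which is why Goldratt argued that transfer batches should ideally be of size one.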
The following diagram contains the product structure, routing, and processing time information for product A. The process flows from the bottom of the diagram upward. Assume one unit of items B, C, and D are needed to make each A. The manufacture of each item requires three operations at machine centers 1, 2, or 3. Each machine center contains only one machine. A machine setup time of 60 minutes occurs whenever a machine is switched from one operation to another (within the same item or between items).
Design a schedule of production for each machine center that will produce 100 A's as quickly as possible. Show the schedule on a Gantt chart of each machine center. Use the following synchronous manufacturing concepts:
The bottleneck machine is identified by summing the processing times of all operations to be performed at a machine.
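This identification step can be sketched directly. The routings and times below are hypothetical stand-ins for the diagram, which is not reproduced here:

```python
# Hypothetical routings: (machine center, minutes per unit) for each
# operation of items B, C, and D
routings = {
    "B": [(1, 10), (2, 15), (3, 5)],
    "C": [(2, 10), (1, 5), (3, 10)],
    "D": [(3, 5), (2, 10), (1, 10)],
}
demand = 100  # units of product A; each A needs one B, one C, and one D

# Sum the processing load placed on each machine center
load = {}
for item, ops in routings.items():
    for machine, minutes in ops:
        load[machine] = load.get(machine, 0) + minutes * demand

bottleneck = max(load, key=load.get)
print(load, "bottleneck =", bottleneck)
```

The machine with the largest total load is the bottleneck; under synchronous manufacturing it is scheduled first, and the other machine centers are scheduled to support it.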
Labor is one of the most flexible resources. Workers can be hired and fired more easily than equipment can be purchased or sold. Labor-limited systems can expand capacity through overtime, expanded workweeks, extra shifts, or part-time workers. This flexibility is valuable, but it tends to make scheduling difficult. Service firms especially spend an inordinate amount of time developing employee schedules. A supervisor might spend an entire week making up the next month's employee schedule. The task becomes even more daunting for facilities that operate on a twenty-four-hour basis with multiple shifts.
The assignment method of linear programming discussed earlier in this chapter can be used to assign workers with different performance ratings to available jobs. Large-scale linear programming is currently used by McDonald's to schedule its large part-time workforce. American Airlines uses a combination of integer linear programming and expert systems for scheduling ticket agents to coincide with peak and slack demand periods and for the complicated scheduling of flight crews. Although mathematical programming certainly has found application in employee scheduling, most scheduling problems are solved by heuristics (i.e., rules of thumb) that develop a repeating pattern of work assignments. Often heuristics are embedded in a decision support system to facilitate their use and increase their flexibility. One such heuristic, used for scheduling full-time workers with two days off per week, is given next.
Employee Scheduling Heuristic:
Diet-Tech employs five workers to operate its weight-reduction facility. Demand for service each week (in terms of minimum number of workers required) is given in the following table. Create an employee schedule that will meet the demand requirements and guarantee each worker 2 days off per week.
The completed employee schedule matrix is shown next.
Following the heuristic, the first (5 - 3) = 2 workers, Taylor and Smith, are assigned Monday off. The next (5 - 3) = 2 workers, Simpson and Allen, are assigned Tuesday off. The next (5 - 4) = 1 worker, Dickerson, is assigned Wednesday off. Returning to the top of the roster, the next (5 - 3) = 2 workers, Taylor and Smith, are assigned Thursday off. The next (5 - 4) = 1 worker, Simpson, is assigned Friday off. Everyone works on Saturday, and the next (5 - 3) = 2 workers, Allen and Dickerson, get Sunday off.
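The walk-through above can be reproduced with a short function that cycles a pointer through the roster, assigning the next (number of workers - demand) workers each day off:

```python
# Diet-Tech data: five workers (in roster order) and daily demand
workers = ["Taylor", "Smith", "Simpson", "Allen", "Dickerson"]
days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
demand = [3, 3, 4, 3, 4, 5, 3]

def assign_days_off(workers, days, demand):
    """Each day, give the next (workers - demand) workers in the roster
    the day off, cycling back to the top of the roster as needed."""
    days_off = {w: [] for w in workers}
    pointer = 0
    for day, need in zip(days, demand):
        for _ in range(len(workers) - need):
            days_off[workers[pointer % len(workers)]].append(day)
            pointer += 1
    return days_off

schedule = assign_days_off(workers, days, demand)
print(schedule)
```

Each worker receives exactly two days off, matching the assignments in the walk-through, and demand is met every day because exactly (demand) workers remain on duty.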
The resulting schedule meets demand and has every employee working 5 days a week with 2 days off. Unfortunately, none of the days off are consecutive. By switching the initial schedules for Tuesday and Thursday (both with a demand of 3) and the schedules for Wednesday and Friday (both with a demand of 4), the following schedule results:
In this revised schedule, the first three workers have consecutive days off. The last two workers have one weekend day off and one day off during the week.
The heuristic just illustrated can be adapted to ensure the two days off per week are consecutive days. Other heuristics schedule workers two weeks at a time, with every other weekend off. Decision support systems can enhance both the scheduling process and the quality of the resulting schedule. A typical DSS for scheduling might:
Scheduling in a job shop environment is difficult because jobs arrive at varying time intervals, require different resources and sequences of operations, and are due at different times. This lowest level of scheduling is referred to as shop floor control or production control. It involves assigning jobs to machines or workers (called loading), specifying the order in which operations are to be performed (called sequencing), and monitoring the work as it progresses. Techniques such as the assignment method are used for loading, various rules whose performance varies according to the scheduling objective are used for sequencing, and Gantt charts and input/output control charts are used for monitoring.
Realistic schedules must reflect capacity limitations. Infinite scheduling initially assumes infinite capacity and then manually "levels the load" of resources that have exceeded capacity. Finite scheduling loads jobs in priority order and delays those jobs for which current capacity is exceeded. Synchronous manufacturing is a finite scheduling approach that schedules bottleneck resources first and then schedules other resources to support the bottleneck schedule. It also allows items to be transferred between resources in lot sizes that differ from the lot size in which the item is produced.
Employee scheduling is often difficult because of the variety of options available and the special requirements for individual workers. Scheduling heuristics are typically used to develop patterns of worker assignment. Decision support systems for scheduling are becoming more commonplace.