Simulation - Queuing Systems Part I - EdsCave


Simulation - Queuing Systems Part I

Simulation

18 Dec 2017

While differential equations are the basis for modeling many types of physical and natural systems, many man-made systems require very different modeling paradigms. Systems involving waiting lines are one very common example.

Systems involving waiting lines, or queuing systems, can be seen everywhere in modern life.  Let's consider one common example – the drive-through at your local fast-food restaurant. For the purposes of this article we shall refer to this establishment as the Burger-mat. The figure below shows a simplified bird's-eye view of this system.


Figure - The Burger-mat Fast-food Drive Through


In this system, cars (hungry customers) enter at the lower left and head to the order window, where the driver places their order.  After the order is placed, the car proceeds to the pickup window, where the food is delivered. At that point the customer leaves the system. This is the simplified view, which doesn't account for things like missing or wrong items, which would require the customer to park in the lot and go into the restaurant to try to correct the problem.  Still, even with all of these simplifications, there is quite a bit of complexity in the system, especially when we consider what happens when there is more than one customer in the system at once. Some issues that complicate the model are:

What happens if a lot of customers show up in quick succession at the order window? They have to form a line (queue) and have their orders taken one at a time, which increases the amount of time needed to get the food.
Once customers have their order taken, they have to line up at the pickup window. One complication with this queue, however, is that there is only a finite amount of lane between the order and pickup windows, and therefore only room for so many cars to be waiting.  This is different from the case of the order window, which hypothetically could have cars backed up into the street and possibly nearby highways – giving the order window line an undefined and potentially very long wait.
If enough cars back up behind the pickup window, this could prevent cars from advancing from the order window and block the cars behind them – so even though the next customer in line is ready to place their order, and the order window clerk is ready to take it, this can't happen because the space in front of the window is blocked by the person who just placed their order.
And finally, there is the ability of the kitchen to process orders. If there is just one customer, the in-and-out service time is going to be dominated by how long it takes to take the order and fry up the burger.  With multiple customers in line and multiple orders being processed by the kitchen, the total service time experienced by a given customer gets less predictable.

With even the few complications listed above, it becomes readily apparent that our Burger-mat is not the simple system it might appear to be at first glance. Even answering a simple question like 'how long will it take to get my burger?' becomes quite difficult, especially when things get busy. Answering this question (and even more difficult ones) is where computer simulation becomes very helpful – and basically essential for analyzing even very simple systems that contain waiting lines.

Building a Queuing System Model

The first step in trying to develop a simulation model is to break down the system into primitive elements with well-understood behaviors. In the case of the Burger-mat, which can be viewed as a queuing system, the key primitive elements are queues and servers. As shown in the figure below, the queue primitive is what is used to represent a line of waiting customers, while the server is used to stand in for the entity which processes them  (such as the order window).


Figure - Line and Order window represented as abstract Queue and Server Combination


Queue Models
In its most basic form, the queue accepts abstract input tokens, holds them for some time, and then passes them along downstream to the rest of the system. These abstract tokens may be used in a simulation model to represent customers, orders, or a wide range of other entities which are to be processed by the system.  Despite the queue's simple function as a 'waiting area', there are quite a few possible variations in how it manages incoming and previously queued entities. These variations in handling queued items are sometimes referred to as the 'queuing discipline'. Some of the more commonly seen variations are:

Capacity
One of the key attributes of a queue is how many items it can hold – its capacity.  An idealized queue may be defined as having either a finite or an infinite capacity.  In the case of the Burger-mat, the lane between the order and pickup windows only has room for a few cars, giving the pickup queue a small finite capacity, while the line at the order window can back up into the street and is best treated as having an effectively unlimited capacity.
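
As a rough illustration (not something from the original article), a minimal Python sketch of a queue with an optional capacity might look like the following, where a capacity of None stands in for the idealized infinite case; the class name SimQueue and its methods are hypothetical:

from collections import deque

class SimQueue:
    """Minimal queue sketch: capacity=None models an idealized 'infinite' queue."""
    def __init__(self, capacity=None):
        self.capacity = capacity              # None = unlimited
        self.items = deque()

    def is_full(self):
        return self.capacity is not None and len(self.items) >= self.capacity

    def put(self, item):
        if self.is_full():
            raise OverflowError("queue is at capacity")
        self.items.append(item)

# Hypothetical Burger-mat numbers: the pickup lane holds only a few cars,
# while the order-window line is treated as effectively unlimited.
pickup_lane = SimQueue(capacity=3)
order_line = SimQueue()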

Identity
Queues may treat incoming items as being anonymous, as having class distinction, or as being uniquely identifiable.  Anonymous items possess no information or properties which distinguish them from any other items. In the case of items having class distinction, items in one class may be distinguished from those of another, but items of the same class are treated identically.  Finally, uniquely identifiable items may be thought of as each having their own individual serial number stamped on them, so that any item in the queue may be differentiated from any other item.  The choice of what degree of item identification to support is highly dependent on the system being modeled.  If one is modeling a process in which items really are identical – for example, an assembly line making identical chocolate candies – then the anonymous approach might make sense.  An example where supporting uniquely identifiable items would make sense would be in trying to model the patient intake and processing for a hospital emergency room.  In this case, the tests and treatments to be applied to any given patient will vary dramatically based on the particular patient – a very different scenario from the candy assembly line.
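
As a small, purely illustrative sketch of these three levels of identity (the item names and classes below are made up, not taken from the article):

from dataclasses import dataclass, field
from itertools import count

# Anonymous items: no distinguishing information at all.
ANONYMOUS_TOKEN = object()

@dataclass
class ClassedItem:
    """Items distinguished only by class, e.g. 'fries' vs. 'burger'."""
    item_class: str

_serials = count(1)

@dataclass
class UniqueItem:
    """Uniquely identifiable items, e.g. individual ER patients."""
    description: str
    serial: int = field(default_factory=lambda: next(_serials))

print(ClassedItem("fries"))
print(UniqueItem("patient with sprained ankle"))   # serial=1
print(UniqueItem("patient with gunshot wound"))    # serial=2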

Priority
In the case where queued items are anonymous, the notion of priority doesn't really mean anything; since all items are treated equally, which particular items enter and leave the queue doesn't matter.  In contrast, consider the stereotypical case of the line in front of a trendy dance club. While people may line up in order of their arrival time, the bouncer decides who gets in based on his own criteria (and maybe how big a tip is presented) – and the exit order from the queue may be far different from the arrival order.  For queuing systems in general, and not just of the night-club type, once items become identifiable, a wide range of queuing protocols becomes possible.

First-In, First-Out (FIFO): This is what most of us probably think of as the prototypical waiting line. The first people to arrive go to the head of the line, and everyone else gets into place based on arrival time.


Figure - First-In, First-Out (FIFO) Queue


Last-In, First-Out (LIFO): This is the 'stack' protocol – which can be most easily visualized as a stack of cafeteria trays.  The first to arrive wind up on the bottom, and the most recent end up on top. The removal order is exactly reversed: the ones on top exit the queue first and the ones on the bottom last (if at all).


Figure - Last-In, First-Out (LIFO) Queue


Priority ordering: In a hypothetical ideal emergency room, the patients in the worst condition might be moved to the head of the line – someone with a gunshot wound will probably have a better chance of survival if they don't have to wait behind people with sprained ankles or the flu.  Priority queues are also apparent at airport boarding areas – your boarding order does not depend on when you show up, but rather on the 'boarding zone' number printed on your boarding pass. In a queue with a priority queuing discipline, items move to the exit faster or slower based on their identifying characteristics.


Figure - Priority Queue (Lowest Numbers to Head of Line)
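
As a rough sketch (not from the original article), these three disciplines map naturally onto Python's standard containers – a deque for FIFO, a plain list used as a stack for LIFO, and a heap for priority ordering. The items and priority numbers below are made up for illustration:

from collections import deque
import heapq

# FIFO: items leave in the same order they arrived.
fifo = deque()
for name in ["first arrival", "second arrival", "third arrival"]:
    fifo.append(name)
print([fifo.popleft() for _ in range(len(fifo))])

# LIFO: the cafeteria-tray stack -- the most recent arrival leaves first.
lifo = []
for name in ["first arrival", "second arrival", "third arrival"]:
    lifo.append(name)
print([lifo.pop() for _ in range(len(lifo))])

# Priority: the lowest boarding-zone number goes to the head of the line,
# regardless of arrival order.
prio = []
for zone, name in [(3, "zone 3"), (1, "zone 1"), (2, "zone 2")]:
    heapq.heappush(prio, (zone, name))
print([heapq.heappop(prio)[1] for _ in range(len(prio))])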


Defection
While there is a wide range of possible additional behaviors a queue can support, one relatively common one is defection.  If an item (customer) gets tired of waiting, the queue may allow them to exit and go somewhere else. In the case of the Burger-mat, customers who spill out into the street waiting for the order window may be able to break out of traffic and go to another fast-food place if the wait gets too long. On the other hand, the fixed lane between the order and pickup windows may not allow frustrated customers to leave once they enter. In a local fast-food drive-through near me, however, that latter lane is only defined by paint, and not a tangible barrier, so if someone does get tired of waiting for their burger, it is theoretically possible to leave for a competitor elsewhere.  Queues that allow for defection support modeling this type of consumer behavior in a straightforward way.


Figure - Queue with Defection
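
One simple way to sketch defection (purely illustrative; the fixed 'patience' threshold is an assumption on my part, not something from the article) is to record each item's arrival time and let any item whose wait exceeds its patience leave the queue:

def remove_defectors(queue, now, patience):
    """Remove queued (arrival_time, customer) entries that have waited too long.

    Returns the list of customers who gave up and left for a competitor.
    """
    defectors = [cust for arrived, cust in queue if now - arrived > patience]
    queue[:] = [(arrived, cust) for arrived, cust in queue
                if now - arrived <= patience]
    return defectors

# Cars that arrived at t=0, 2, and 9; at t=10, anyone who has waited
# more than 5 minutes breaks out of line.
line = [(0, "car A"), (2, "car B"), (9, "car C")]
print(remove_defectors(line, now=10, patience=5))   # ['car A', 'car B']
print(line)                                         # [(9, 'car C')]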


Queue States
Because a queue is a dynamic model, in the sense that it has persistent memory over time, at any point in time it can be completely described in terms of its state.  The state of a queue is simply its contents and their ordering, subject to any priority rules in effect.  In addition to the detailed description of its contents, it also makes sense in some simulation models to define certain derived meta-states for a queue. Some common queue meta-states include:

Empty – the condition in which the queue contains zero items. When a queue is empty, it can't provide exiting items to any downstream system elements.

Full – the condition in which the queue has reached its capacity. When a queue is full, it can't accept entering items from upstream system elements.  Note that in the case where a queue has infinite capacity, it can never become full.

Partial – the condition where the queue is neither empty nor full. A queue which is partially full can both accept new items from upstream sources and provide exit items to downstream system elements.



Figure - Queue Meta-states (Empty, Partial, Full) and typical transitions
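
A minimal sketch of how these meta-states might be derived (the function and its arguments are hypothetical, and capacity=None stands in for an idealized infinite queue):

def meta_state(n_items, capacity=None):
    """Derive a queue's meta-state from its item count and capacity."""
    if n_items == 0:
        return "empty"      # cannot provide items to downstream elements
    if capacity is not None and n_items >= capacity:
        return "full"       # cannot accept items from upstream elements
    return "partial"        # can both accept and provide items

print(meta_state(0, capacity=3))    # empty
print(meta_state(2, capacity=3))    # partial
print(meta_state(3, capacity=3))    # full
print(meta_state(1000000))          # partial -- an infinite queue never fills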



Server Models


Servers are the other key element in queuing system models. A server performs the following key operations:

1) Accepts or 'takes' an item from an upstream queue. It can only do this if an item is available – e.g., the upstream queue is not 'empty'.
2) Spends some time 'processing' this item.
3) Sends the 'finished' item downstream to the next queue or other processing element. The server can only perform this operation if the downstream element can accept the item – e.g., the downstream queue is not 'full'.
4) Repeats from step 1.


In the most basic case, the above-described server has three possible states – starved, working, and blocked – which relate to the steps in the above sequence. A server is said to be starved when it is ready to begin working on an incoming item, but no item is available; for instance, the queue feeding the server is empty.  After an incoming item has been received, the server enters the working state, and remains in that state for the duration of the processing time.  When the processing time has elapsed, the server will then attempt to move the processed item to its output queue. In the case that the output queue is full, and the server is incapable of delivering its processed item, it moves into the blocked state, where it will remain until space becomes available in the output queue.



Figure - Server States and typical transitions
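
A minimal, hypothetical server loop along these lines might look like the sketch below, where the input and output queues are plain lists, time advances in fixed ticks, and the processing time is constant; none of the names come from the article:

def step_server(server, in_queue, out_queue, out_capacity):
    """Advance a simple starved/working/blocked server by one time step."""
    if server["state"] == "starved":
        if in_queue:                              # an item is available
            server["item"] = in_queue.pop(0)
            server["remaining"] = server["process_time"]
            server["state"] = "working"
    elif server["state"] == "working":
        server["remaining"] -= 1
        if server["remaining"] <= 0:
            server["state"] = "blocked"           # finished; try to hand the item off
    if server["state"] == "blocked":
        if len(out_queue) < out_capacity:         # downstream has room
            out_queue.append(server.pop("item"))
            server["state"] = "starved"

server = {"state": "starved", "process_time": 2}
orders_in, orders_out = ["order 1", "order 2"], []
for t in range(8):
    step_server(server, orders_in, orders_out, out_capacity=1)
    print(t, server["state"], orders_out)

With a downstream capacity of one and nothing draining the output, this run shows the server cycling from starved to working, delivering the first order, and then becoming blocked on the second.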


In addition to the baseline behavior described above, there are a lot of possible variations on the basic server theme.

Processing time
The amount of time required by a server to process an item can be defined in many ways. Perhaps the simplest is to assume a constant processing time. In this case the server takes the same amount of time to process any given item regardless of the item type.  While this may not be a bad assumption for an assembly line, it can be a pretty bad one in many other systems. For example, at the Burger-mat, there is going to be quite a bit of difference between the amount of time required for the kitchen to fry up a Burger-mat-Deluxe to order and the time needed to scoop some fries out of the warming tray. In this case, the server (kitchen) processing time is item-dependent.  Another complication is that even though the sign out front may say 'Burger-mat', at least in 2018 there is a pretty good chance that actual people are still manning the kitchen.  While the kitchen staff may have their job down pat, they will still not have quite the consistency of robots – and even for identical food items, there may be some variation in processing times. This variation is often modeled as a combination of a fixed and a random processing time.
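
A fixed-plus-random processing time is straightforward to sketch; the base times and the size of the random component below are invented for illustration, not measured values:

import random

# Hypothetical fixed (base) processing times per item type, in minutes.
BASE_TIME = {"burger_deluxe": 4.0, "fries": 0.5}

def processing_time(item_type, jitter=0.75):
    """Item-dependent fixed time plus a uniform random variation."""
    return BASE_TIME[item_type] + random.uniform(0.0, jitter)

random.seed(1)
print(processing_time("burger_deluxe"))   # somewhere between 4.0 and 4.75
print(processing_time("fries"))           # somewhere between 0.5 and 1.25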

Setup Time
There are many situations, particularly in manufacturing environments, where the setup time between items is important.  Setup time can be thought of as the amount of time required for a server to shift from performing one type of processing operation to a second processing operation. Setup time is not, however, needed between repetitions of the same operation.  Consider the case of a drill press at a machine shop.  A drill press is used to drill holes in objects. To use one, you have to install a drill bit of the desired size and clamp a holding fixture to its work table to position the workpieces you want holes drilled in.  Consider the case where you have buckets containing two types of items, 'A' and 'B', which need different sized holes and require different holding fixtures.  If you set up the drill press for your 'A' type parts, drill them all, and then set up the press for the 'B' type parts and drill them all, the work is going to go a lot faster than alternately drilling 'A' and 'B' parts and repeating the setups over and over.  In many real-world cases, the setup time required can far exceed the individual item processing time, so this can be a very important feature to model in a server if one wants an accurate simulation model.


Figure - Setup time when switching a server from one part type to another can dominate total processing time.
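
The drill-press example can be sketched numerically; the process and setup times below are made-up numbers chosen only to show how setup time can come to dominate:

PROCESS_TIME = 1.0    # minutes to drill one part (hypothetical)
SETUP_TIME = 15.0     # minutes to change the bit and holding fixture (hypothetical)

def total_time(part_sequence):
    """Sum per-part processing time, adding a setup whenever the part type changes."""
    total, current_setup = 0.0, None
    for part in part_sequence:
        if part != current_setup:
            total += SETUP_TIME
            current_setup = part
        total += PROCESS_TIME
    return total

batched = ["A"] * 5 + ["B"] * 5      # drill all the A parts, then all the B parts
alternated = ["A", "B"] * 5          # alternate A and B parts
print(total_time(batched))           # 2 setups:  40.0 minutes
print(total_time(alternated))        # 10 setups: 160.0 minutes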


Item Transformation
Since it is possible to give items some degree of identity in a simulation model, one possible function of a server is identity transformation. At the Burger-mat, one such transformation might be for a server to accept an uncooked burger as an input and output a cooked burger, after a suitable interval on the grill.  Note that for simple queuing system models in which the identity of the queue implies the contents, it may be possible to just handle anonymous items throughout the system.


Figure - A Server can transform one incoming item type into a different outgoing item type
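
A transformation step can be sketched as a simple mapping applied by the server; the item names here are illustrative only:

# Hypothetical transformation table: what each incoming item type becomes.
TRANSFORM = {"raw_patty": "cooked_patty", "frozen_fries": "cooked_fries"}

def grill_server(item_type):
    """Transform an incoming item type into its outgoing item type."""
    return TRANSFORM.get(item_type, item_type)   # unknown items pass through unchanged

print(grill_server("raw_patty"))      # cooked_patty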


Merging and Routing

A server can also perform merging and routing operations.  This becomes important if the server has a choice of selecting items from a number of input queues (merging) or of selecting which destination queue to send an item to (routing).  There is a wide variety of possible protocols; some of the simpler ones are listed below, and a short round-robin sketch appears after the figure that follows the list:

Random – the server selects at random which queue to take an item from or which queue to send an item to

Round-robin – the server selects the next queue in a sequence. For example, if there are input queues A, B, and C, the server will try to take successive pieces from each of these queues in order, then repeat. In one variation, if one or more queues are empty, those queues will be skipped so the server is not starved.

Priority – Certain queues are used with a higher priority than others, with lower priority queues only used when the higher priority ones are empty or full.

Item-based – a server may send items of specific types or with specific characteristics to designated queues.



Figure - Servers may merge several item streams from several upstream queues
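
As a sketch of one of these protocols, a round-robin merge that skips empty input queues might look like this (the function name and queue layout are hypothetical):

def round_robin_take(queues, start=0):
    """Take one item from the next non-empty queue in rotation.

    Returns (item, next_start); item is None if every queue is empty,
    in which case the server would be starved.
    """
    n = len(queues)
    for offset in range(n):
        i = (start + offset) % n
        if queues[i]:
            return queues[i].pop(0), (i + 1) % n
    return None, start

inputs = [["A1", "A2"], [], ["C1"]]   # the middle queue is empty and gets skipped
start = 0
for _ in range(4):
    item, start = round_robin_take(inputs, start)
    print(item)                       # A1, C1, A2, None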




Assembly and Disassembly Operations

It is also possible for a server to assemble or disassemble items in the course of its processing.  In a typical assembly operation, a server may take an item of type 'A' from one input queue and an item of type 'B' from another input queue, and produce an item 'C' which is sent to its output queue. A simple and concrete example of an assembly operation can be seen if we consider the kitchen at the Burger-mat – making a hamburger requires that a cooked burger patty be 'assembled' with a bun (and possibly other items such as cheese, lettuce, and mustard) before being passed along as a completed burger.  At a slightly more abstract level, the operation of the pickup window can also be seen as a server performing an assembly operation. In this case, completed orders (bags of food) are 'assembled' with the customer before the customer is sent on their way out of the system.

Servers may also disassemble single items into multiple ones. Although this may not be a very common operation to perform on physical objects, it can be extremely useful when handling abstract items. For example, the order window can be considered as a server that 'disassembles' incoming customers into two separate items: (1) a customer whose order has been taken and (2) their order. The customer and the order are then routed respectively into the pickup queue and the kitchen queue.

One notable characteristic of an assembly operation is that the server needs all component items to begin its processing.  This can make for more complex starving/blocking behavior. For example, if one of the queues feeding an assembly server is empty, and another queue feeding a different part is full, the empty queue not only starves the server, but, because both pieces are needed, it actively prevents the other queue from passing a piece along, and can block an upstream server. Similar types of behavior can also be seen in disassembly operations, where a full output queue can effectively cause servers downstream from the disassembly operation to become starved.


Figure - A Server may assemble items from multiple sources to yield a new item type.
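
The 'all components required' rule for an assembly server can be sketched as follows (purely illustrative; the queue and item names are invented):

def try_assemble(patty_queue, bun_queue, output_queue, output_capacity):
    """Assemble one burger only if every component queue can supply a part
    and the output queue has room; otherwise the server must wait."""
    if not patty_queue or not bun_queue:
        return "starved"                     # a missing component starves the server
    if len(output_queue) >= output_capacity:
        return "blocked"                     # nowhere to put the assembled item
    burger = ("burger", patty_queue.pop(0), bun_queue.pop(0))
    output_queue.append(burger)
    return "assembled"

patties, buns, finished = ["patty 1"], [], []
print(try_assemble(patties, buns, finished, output_capacity=2))   # starved (no bun yet)
buns.append("bun 1")
print(try_assemble(patties, buns, finished, output_capacity=2))   # assembled
print(finished)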



Next Installment - Modeling the Burger-mat








 