(iv) End-User Computing. Prior to 1982, the high cost of mainframe computers and
their attractive performance meant that computing services in most firms were
provided centrally. In that manner, scarce IT resources (equipment and skilled labor)
could be more easily shared and fixed costs spread more widely. However, the de-
mand for new and modified applications outstripped the supply of system-building
resources, resulting in backlogs for development and modification stretching out two
to four years in many firms. Clearly this situation was unacceptable.
Information centers (ICs), introduced by IBM Canada in the early 1980s, provided
some relief by making access to data easier. The notion behind an IC was that if ter-
minal equipment, powerful database query languages, and consulting assistance were
made available to users, they could produce their own reports, reducing somewhat
the demand for new systems and, hence, the backlog in systems development. But
ICs still used large central computers and expensive telecommunications.
In parallel with ICs, the first personal computers (PCs) were introduced into busi-
nesses around 1982. Used first for stand-alone (that is, unconnected to other com-
puters) text processing and data analysis work, PCs soon became interconnected
through local area networks (LANs) permitting resources (information and equip-
ment) to be shared and expanding the range of business-related activities that could
be performed on them. As more PC applications were developed and the cost of
equipment decreased while performance increased, wide-scale diffusion occurred.
Professionals performing functional activities, such as lawyers and accountants, be-
came skilled in the application of technology—purchasing packaged software, con-
figuring it to assist them in their work, and sometimes even writing their own pro-
grams in a macro language. This shift in the locus of control and
knowledge about computing from centralized, professional support groups to end
users has changed fundamentally the technical power structure in many organiza-
tions. Rather than technology being the province of a small elite, it is, today, far more
widely dispersed.
This, in turn, has promoted a distributed architecture for technology infrastructure
and is particularly well suited to international firms. Often called client/server com-
puting, this architecture divides an application into two portions: one that resides on
a workstation, called the client; the other running on another machine on the network
called a server. The advantage of this arrangement is that the client portion of the sys-
tem, which runs locally on a worker’s machine, is much smaller than the whole ap-
plication and it can be tuned to provide quick response for the worker. The server por-
tion, which is often the most complicated, need only be installed on one or a small
number of servers on the network. The client and server portions of the application
communicate with each other by sending messages over the network using a standard
protocol. Thus a server may be located in a regional processing center while clients
are running on workstations in many different countries, providing customized local
service (e.g., having display screens in local languages).
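To make the division concrete, the sketch below shows the two portions as a pair of
Python functions exchanging a single request and reply over TCP. It is a minimal,
hypothetical illustration: the address 127.0.0.1, port 9000, and the "BALANCE" message
format are assumptions made for the example, not features of any particular product.
The server function stands in for the complicated shared logic installed at a regional
processing center; the client function is the small piece running on a local worksta-
tion, which would format the reply for its own user.

    import socket
    import threading
    import time

    HOST, PORT = "127.0.0.1", 9000   # placeholder; a real deployment would name a regional server

    def serve_once():
        # Server portion: the shared, more complicated logic lives here, installed
        # on one machine (or a few) rather than on every workstation.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((HOST, PORT))
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                request = conn.recv(1024).decode("utf-8")   # e.g. "BALANCE:ACCT-42"
                account = request.split(":", 1)[1]
                conn.sendall(f"balance for {account} is 100.00".encode("utf-8"))

    def client_request(message):
        # Client portion: small, runs locally, and would present the reply in the
        # worker's local language before display.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect((HOST, PORT))
            cli.sendall(message.encode("utf-8"))
            return cli.recv(1024).decode("utf-8")

    if __name__ == "__main__":
        threading.Thread(target=serve_once, daemon=True).start()
        time.sleep(0.2)   # give the server a moment to start listening
        print(client_request("BALANCE:ACCT-42"))

Because only a short request and reply cross the network, the client stays responsive
even when the server sits in a distant regional center, which is what makes this
arrangement attractive for international firms.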
A key issue for management is the effective use and coordination of this distrib-
uted technology infrastructure. First off, client/server computing is a complicated ar-
chitecture. Then, there is a tension between the freedom needed to create innovative
applications that have truly beneficial effects, the coordination required for interop-
erability (which permits resource sharing), and the support (consulting and mainte-
nance) necessary to leverage end-user technology activities. All of this becomes more
difficult internationally.