TECHNOLOGY ASSESSMENT: A CONTEXTUAL APPROACH TO PLANNING

Thesis for the Degree of M.U.P.
MICHIGAN STATE UNIVERSITY
DON L. CRAIG
1973

ABSTRACT

TECHNOLOGY ASSESSMENT: A CONTEXTUAL APPROACH TO PLANNING

By

Don L. Craig

A social contextual approach to technology assessment is presented in order to derive some relationships between the diverse but interconnected fields of technology, technology assessment, planning and decision making. It is premised that the utilization of technology and science has had many profound and unanticipated effects not only on cities and urban areas, but also on all of mankind's creations, including man's social and cultural inventions. It is then proposed that since technology and its effects are human inventions, man the producer has the facility to direct and control the use of technology through a process of analysis and evaluation called "technology assessment."

The definitions and history of the concept of technology assessment, as well as the roles of potential users, are explored. A discussion of the administrative problems and opportunities of the probable assessors emphasizes that assessment functions involve both citizens and government in a range of forums from national to local in scope.

The weaknesses of various methodologies for technology assessment are investigated and presented as reasons for a new integrative approach. Various conceptions of planning can serve as integrative techniques for the several methodologies presented. This entails a resolution of differing theoretical concepts concerning "normative" and "deterministic" orientations in assessment processes; such a resolution is discussed.

A synthesis of technology assessment, planning and public decision making is urged in order to facilitate the use of technology assessment as a tool of rational planning. The purpose, goals, and levels of endeavor of such a synthesis are explained. It is concluded that technology assessment can be of use on many planning levels, but operationally could perform most adequately at regional or higher levels. In particular, technology assessment could serve planning well as an informational system engendering both natural environmental information and social indicators, and as an advocacy forum with citizen inputs to technology planning.

TECHNOLOGY ASSESSMENT: A CONTEXTUAL APPROACH TO PLANNING

By

Don L. Craig

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

MASTER OF URBAN PLANNING

School of Urban Planning and Landscape Architecture

1973

ACKNOWLEDGMENTS

I should like to express my gratitude to the faculty of the Department of Urban Planning for their many efforts to see this project through to its completion. I would especially like to thank Professor Sanford Farness and Professor Keith Honey for the many hours spent on the review and commentary of this thesis.

Gratitude is extended to the Michigan State Housing Development Authority for employment afforded me while at Michigan State University.

I would also like to acknowledge my wife, Linda, without whose patience, understanding and skill this thesis could never have been completed.

TABLE OF CONTENTS

INTRODUCTION
    Footnotes

Chapter

I. CONCEPTUALIZATION OF THE TECHNOLOGY ASSESSMENT FUNCTION
    Definitions and History
        Definitions
        History
    How Should Technology Assessments Be Used?
        Congress
        State Agencies or Bodies
        Other Policy Making Bodies
    Who Should Attempt Technology Assessments?
        Industry Initiated Technology Assessments
        Government Responsibilities of Technology Assessment
        Administrative Problems
        Academic Institutions and Citizen Groups
    Time Factors for Technology Assessment
    Footnotes

II. METHODOLOGIES OF TECHNOLOGY ASSESSMENT
    Methods Presently Utilized as Technology Assessment
        Cost-Benefit Analysis
        Environmental Impact Statements
        Technology Forecasting
    Developing Methodologies--Data Orientations and Requirements
        Social Indicators
    Developing Methodologies--Normative Processes
    Footnotes

III. TECHNOLOGY ASSESSMENT, PLANNING AND PUBLIC DECISION MAKING
    The Planning Endeavors, Traditional Modes of Behavior and Technology
    Comparison: Technology Assessment and Planning Processes
        Purposes
        Goals
        Process Components--A Comparative Overview
        Significance of Multidisciplinarity
        Feasibility of Implementation
    Technology Assessment Processes--Tools for Rational Planning
        Information Systems
        Adversary and Advocacy Processes
    Levels of Endeavor--Geographical and Jurisdictional Considerations
    Footnotes

IV. SUMMARY AND CONCLUSIONS

REFERENCES

Appendix
    A. A Technology Assessment Process
    B. A Generalized Planning Process
    C. Seven Major Steps in Making a Technology Assessment
    D. Adequate Technology Assessment Criteria

LIST OF TABLES

Table
    1. Initiation of Technology Assessments by Year
    2. Impact of the Environmental Quality Act of 1969 on Each Organization Community
    3. Methodologies Used

LIST OF FIGURES

Figure
    1. Components of an Adequate Technology Assessment
    2. Structured Rationalization of Creative Action
    3. The Nature-Man-Society-Technology System Broken Up into Six Bipolar Subsystems

INTRODUCTION

In classical antiquity, Xenophon expressed a prevailing social attitude when he said in Book IV of the Oeconomicus, "What are called the mechanical arts carry a social stigma and are dishonored in our cities. For these arts damage the bodies of those who work at them or who act as overseers by compelling them to a sedentary life and to an indoor life, and in some cases to spend the whole day by the fire. This physical degeneration results also in deterioration of the soul."
This descriptive device serves to illustrate the beginning focal point and problem area of this thesis--the continuing human use of technology and its innumerable effects. To reemphasize that technology has had a tremendous influence on human culture, values, standards, artifacts and social structures is not to belittle the importance or character of the situation. That man has long known the dual nature of science and technology is not a new idea. For centuries man has identified such technological "goods" and "bads"; however, up until relatively recently in history man has chosen to enlarge his perspective of the "good" while narrowing his view of the "bad," and confining neither to a perspective of rationally directed change. Since the early nineteenth century, however, man has begun to realize that technology operates within a larger social context and has innumerable effects upon the elements of that context. Yet it has only been in the last fifty years that concern for such "systemic" and "synergistic" effects has gained ascendency over the purely technological good. Such efforts and viewpoints have resulted in a call for "assessment" of technology--the central theme of this thesis.

In particular, this study is concerned with the interrelationships of technology, technology assessment, planning and public decision making. The purpose of this thesis is to present a description of the many concepts and functions of technology assessment; to present a critical analysis of diverse methodologies used for technology assessment and their relationship to various types of planning; and to provide a statement and cumulative view for planning and technology assessment. The prime goal of this research has been to provide some insights as to how the process of technology assessment might be used in rational planning endeavors at various jurisdictional and geographical levels.

The character of the research and results presented in these pages can be summarized as one of critical analysis and synthesis following from a period of literature search and review. The scope of the research and resulting conclusions can best be presented in a review of the purposes of each chapter. It is necessary to begin, however, with several major tentative broad definitions:

Technology is the body of knowledge, precepts, concepts and lore that has been gained through the study of nature and through the experience of applications, especially of those utilizing the scientific attitude and method.1

Technology assessment refers to the identification of the effects (direct and derivative--immediate, intermediate and long term) and the evaluation of the social desirability or undesirability of such effects as related to particular technological applications.2

The usefulness and validity of these two central definitions are to be investigated while proceeding from the already well identified fact that technology has had a multitude of serious impacts on human society and culture. That this is an inherently true statement is premised in Chapter I and historically investigated by citing the changing definitions of technology assessment and the broadening social context of technological effects. Another major premise introduced in Chapter I is that the end products of all technology assessments are in the form of directed or non-directed information. This sets the background for two other central investigatory efforts of Chapter I:

1.
For whom is the technology assessment information intended, and how is it to be used?

2. Who should attempt technology assessment to yield such informational outputs?

Also investigated in Chapter I is the very important aspect of time factors for technology assessment. Another important point that is discussed in this chapter is the concept of "adequate" assessments. This contextual view, denoting a choice of factors, is proposed in light of past experience and failure with "comprehensive" methods.

Chapter II is used to investigate the changing perspective of technology assessment methodology. The methods presently being used for partial (incomplete) assessments will be reviewed and critically analyzed. Methods covered are cost-benefit analysis, the environmental impact statement (EIS) and technology forecasting. The emphasis here is to relate the methodological weaknesses to the inadequacies of past efforts at partial assessments. Also investigated in Chapter II are new developing methodologies that will be necessary if "adequate" technology assessments are to be realized. Two types of methodologies are investigated, representing two major systems within which technology assessment must operate. The first involves the rational simulation of the natural environment and the second involves the development of a reliable set of social indicators.

In the latter half of Chapter II an investigation of normative processes for technology assessment is developed in order to pose two different theoretical orientations to technology assessment--a "cultural ecological" approach and a "normative" approach. The relationship of each of these orientations to an abstraction of the term "planning" is presented as an element to reconcile theoretical differences. As a concrete example of a normative approach to technology assessment, a citizen involvement process is investigated as to structure, finance and motivation.

In this thesis, Chapter III, entitled "Technology Assessment, Planning and Public Decision Making," will serve to synthesize and integrate some of the diverse and complex ideas of planning and technology assessment. First, some cogent points are made concerning the role of technology and technology studies in traditional modes of planning endeavor and attitudes. The investigation will center on recent changes in the planners' conception of technology and its place in planning. In this vein, a comparison of planning processes and technology assessment processes is undertaken. The emphasis here is placed on the interrelatedness of purposes and goals. Also proposed is a comparative overview of the components of each process (abstractions of each process are utilized rather than specific methodologies or levels of endeavor). A proposed matrix is employed to structure the comparison as to cost, time frames, the availability of competent staff, the intelligibility of the process and the compatibility of basic premises. The comparison is extended to include the feasibility of implementation in realistic situations of today.

The second major division in Chapter III formulates a possible synthesis of planning and technology assessment. However, given the broad range of such a synthesis, two specific elements of the synthesis are investigated: technology assessment as an information system for planning and technology assessment as an advocate's tool paralleling existing adversary processes in planning.
Another major point to be investigated in this chapter is the viability of an optimum planning level at which to institute the technology assessment function. The contention investigated is that technology assessment can best be operated at regional scales or higher levels of planning. In the summary and conclusions, some of the major points evolved in the thesis are summarized, and some tentative conclusions are reached. It must be kept in mind that the major value of this thesis is that it hope- fully will stimulate research and testing of the validity of the claims presented herein. Footnotes 1National Academy of Sciences, Committee on Urban Technology, "Long Range Planning for Urban Research and Development," National Academy of Sciences, Washington, D.C., 1969, p. 2. 2Louis Mayo, "Technology Assessment, Scientific Method and Adversarial System," Program of Policy Studies in Science and Technology (Washington, D.C.: George Washington University, 1970), p. 4. CHAPTER I CONCEPTUALIZATION OF THE TECHNOLOGY ASSESSMENT FUNCTION Definitions and History Definitions Given that technology assessment as a process or method is little understood, little used and elusively changing, it might be essential to clarify the goals of this thesis by offering several definitional concepts of technology assessment. Assuming the premise stated in the introduction that technology assessment has utilized the techniques and premises of many other fields, the definitional concepts presented should touch on major academic fields or areas represented by those attempting technology assessment. This short presentation does not purport to be all inclusive, but merely a vehicle to set the stage for developing relevant meanings of the concept. Martin V. Jones, scientist, in his study of tech- nology assessment methodology has relied on definitions supplied by Gabor Strasser, formerly of the Office of Sci- ence and Technology. Mr. Strasser has defined technology 10 assessment as a "systematic planning and forecasting process that delineates Options and costs, encompassing economic as well as environmental and social considerations, that are both external and internal to the program or project in question, with special focus on technology related 'bad' as well as 'good' effects."1 Mrs. Vary Taylor Coates, a policy analyst, while citing "general agreement on what is meant by the term," phrases the definition to imply more of the social action- response mechanism than does Mr. Strasser. "Technology assessment implies identification of the social impacts or secondary consequences--both detrimental and beneficial--of a new technology or an existing technology; it also includes prediction of technological developments early enough to allow weighing of the relative social desirability of alter- nate lines of development. Technology assessment looks to both the prevention of secondary consequences harmful to the physical environment or to the quality of life, and the alleviation of existing environmental and social problems through exploitation of technological applications."2 Mrs. Coates' version of the definition is more socially oriented than that of Mr. Strasser, however neither is as systemi- cally explicit as that advanced by Mr. Clarence H. Danhof. Mr. Danhof advances the theory that as an operating tech- nique technology assessment has been perceptible throughout 11 man's history in a myriad of different ways. However, each perception of the technique has the following attributes. 1. 
Initiative in identifying a solution to a felt problem or an opportunity to gain a desirable objective, both of which require explanation of an area involving some unknowns. The application of expert, specialized knowledge to the problem at issue, so that possible gains and hazards can be defined as clearly as possible. The possibilities that a new technology may yield desired or undesired results or, frequently, both. The undesired consequences may affect the immediate user, a larger group, or all of mankind. Such undesired consequences may appear immediately, in which case cause and effect relationships are rela- tively easily identified; may emerge slowly, perhaps within the memory of a generation or two; or may require so prolonged a period of time as to be perceptible only in long retrospect. In the latter two cases, cause and effect relationships can usually be ascertained only by very advanced analytical techniques, if at all.3 Strasser, Danhof, and Coates would all agree howevor that a broad definition of technology is needed. For in- stance, Mesthene defines technology as "the organization of 12 knowledge for the achievement of practical purposes."” In addition, he says that technology means not only machines in the traditional sense of the word, but also "intellectual tools such as computer languages and contemporary analytic and mathematical techniques."5 Examples of such soft tech- nological innovation could be national health insurance programs, expanded public television programming, nation- wide pollution standards, etc. Perhaps the best perception of the social context of technology assessment and its role in that context is that presented by Louis H. Mayo. The theoretical premises advanced by Mr. Mayo would encompass those definitions already enumerated above. Mr. Mayo recognizes the dynamic environment in which technology assessment might operate: "The task of achieving a balance between technological progress and control of its undesirable side effects is part of the larger social problem of evaluating the policies, institutions, programs, and practices related to significant social needs. For various reasons, technology has been grasped as a convenient focus in the overall social process. Some applications are spectacular. Many applications involve major national issues in that they require a vast commitment of resources, or provide essential security, or perhaps threaten certain fundamental social values. Tech- nology is pervasive in its great variety of applications 13 throughout society. And technology provides a recognized measure of the intellectual advance of society. "If we assume that as a society we are now concerned with the establishment of deliberate, though moderated, control over the rate and direction of social growth, and that technological innovation is a significant ingredient in this process of change, then the technology assessment function is one means by which we can sort out the options and make policy choices which fit the prevailing notion of 'balance.‘ Since the basic purpose of technology assessment is to identify the full range of effects flowing from given application and then evaluating such effects in terms of the total spectrum of social values affected, the assessment function provides an indispensable input for policy deci- sions on balanced social development."6 History Mr. Mayo's statement describes the field within which technology assessment must operate--a broad canvas of social change. Given Mr. 
Danhof's definitional concept, the word 'change' as used by Mayo connotes a historical framework for technology assessment, denoting past, present and future. The succinct recognition of Mr. Danhof is not revolutionary, having been expounded by many historians of technological progress: Lynn White, Medieval Technology and Social Change (1966);7 Lewis Mumford, Technics and Civilization (1963);8 Leo Marx, The Machine in the Garden (1968);9 Kranzberg and Purcell, Technology in Western Civilization (1967);10 and Peter Drucker, Technology, Management, and Society (1970).11 These historians have not, however, identified technology assessment as a historical concept, relying rather on developing a humanistic analysis of the consequences of man, technology, culture and society interactions.

Melvin Kranzberg, however, identifies the recently named concept of technology assessment in several periods in history. Kranzberg notes with wry humor: "Technology assessment as a limited art is nothing new. Simple assessment is close to the purpose of any innovation, even if only a mere guess that it will work to some good. It goes back to prehistory. We can imagine some forebear of homo sapiens picking up a stone to kill small game or to beat a neighbor--or his wife--over the head. He had glimpsed the purpose in advance. He immediately confirmed the efficacy of the weapon, no doubt with grunts of delight."12

Kranzberg proceeds to say that throughout history assessments considered only first order consequences, noting that "only when random invention began giving way to systematic innovation could technology assessment look much beyond first order effects."13 Still the assessments consisted of expanding the realm of where a particular methodology could be applied. Not until the early 19th century, with the advent of the industrial revolution, did the concept of technology assessment broaden its base to include a larger realm of social consequences of material technology. Interestingly enough, the extension of social effects to technological causes was brought about by great stirrings concerning the diametrically opposed laissez-faire doctrines and the writings of Marx and Engels. Kranzberg notes two major events that support this concept. "Although the factory legislation of the early 19th century was largely ineffectual and did little to stop the gross exploitation of workers, it marked an extension of the concept of technology assessment to include the workers, their health, and their economic welfare. This legislation also brought a new factor into technology assessment--the government. Prevailing laissez-faire doctrines aside, the government intervened to mitigate some of the worst social consequences of unfettered industrialization."14

Kranzberg proposes that Marxian theory had much to do with the socialization of technology assessment. "The man chiefly responsible for broadening the social context of technology assessment was Karl Marx. He made plain one great truth: Technology has social and cultural ramifications far beyond the first order effects to which attention had hitherto been directed. What is more, Marx avoided the confusion between technology itself and the social system which it had so profoundly affected. . . .
His 16 effort concentrated not on mitigating the effects of technology but on rearranging, by revolution, a socio- economic system which would enable the benefits of tech- nology to be spread among the masses rather than confined to the profit of a few."15 Both Mayo and Kranzberg would support the above cases as evidence of the emergence of government into larger fields of technology once dominated by free market systems. This is not to overlook the fact that government has always been involved in technology; for example, governments from the earliest historical times have been balancing the costs and benefits of military technologies. The contention here is, however, that the nineteenth century saw the influence of government extended to the regulation of civilian and industrial technologies. As specific cases one can cite several examples that illustrate the broad range of matters that in some substantial manner involved the assessment of technological effects and the government: 1. Laws regulating steam boiler construction, operation and inspection enacted in 1852 in response to numer- ous steamboat explosions between 1816 and 1848.16 2. Government interest and backing of John Wesley Powell's attempt to achieve a rational scientific basis for a conservation program in the western United States. His was a broad scale approach to 17 the combined impact of several technological systems (railroad, irrigation, etc.) and many special interests.17 3. The establishment of the Division of Economic Ornithology and Mammalogy in the Department of Agriculture in 1886, as a response to perceived adverse effects of civilization on wildlife and wildlife distribution.18 The government response to technological side effects was largely thwarted until well into the nineteenth century. In America, the industrialism supported by coal, steam and a burst of inventivenss, and motivated by the excitement of "progress" and personal gain, reflected a social attitude raised by a Constitutional right through the doctrine of "freedom of contract."19 Although the Interstate Commerce Commission was established in 1887 and the first Pure Food and Drug Act was enacted in 1906, many of America's more prominent technology based regulatory agencies and statutory measures to control technological applications were not established until well into the twentieth century.20 It must be seen that in America, as well as in other western countries, no doubt, the history of technology assessment has progressed from a narrowly defined tech— nological application to various natural processes, to one 18 recognizing the broad social and cultural implications of human invention. Today the government (federal, local and state) is seen to be working in the public interest (as narrowly or broadly defined to accommodate a particular purpose) in mediating between man and technology. The large number of participants, government bureaus, ad hoc committees, citizen groups, private industry, and academic institutions, simply impress upon one the intricacies of system interactions of men and technology, and the diffi- culties of amelioration of adverse effects. Given the pervasive influence of the federal govern- ment in the development and use of technology through various programs and agencies, a trend extrapolation of past historical events would tend to support more inter- vention and control on the part of the government. 
This is, in essence, a response to perceptions by both industry and society of the consequences of unbridled technological applications to social problems. The systems apparent in human invention of culture and society are becoming more complex, not less, and less amenable to either the technological fix (an attempt at a technological solution to a complex social problem) or single purpose and single discipline approaches (see Don Michael, The Unprepared Society (1968);21 Barry Commoner, The Closing Circle (1971);22 Amitai Etzioni, The Active Society (1969);23 and Stanford Anderson (ed.), Planning for Diversity and Choice (1968)).24

This response to the negative aspects of science and technology must be clarified as to the differing effects of "pure" science and humanly mechanized "vulgar technology." On one side of the issue, Herbert Marcuse attacks the philosophy of science, noting that "science, by virtue of its own method and concepts, has projected and promoted a universe in which the domination of nature has remained linked to the domination of man--a link which tends to be fatal to this universe as a whole."25 Admiral Hyman Rickover, observing that (pure, not applied) "science is the antithesis of 'humanism,'" makes a plausible distinction between science and technological effect. "Science, being pure thought, harms no one; therefore it need not be humanistic. But technology is action, and often potentially dangerous. Unless it is made to adapt itself to human interests, needs, values, and principles, more harm will be done than good."26

These comments support the concomitant view that science and technology throughout history have become more and more powerful and capable of irreparable harm, and that government has increasingly recognized that, although technology is an avowed institutionalized process operating through the free market system, a larger modicum of control and influence is needed (as shown by the historical precedents cited). As Franklin Huddle has so aptly recognized, "we should not impair the dynamic vigor and creativity of science. But we should take steps to ensure that the logic of science is fully applied."27 He further advises that "to do this requires that the cause and effect relationships be sought out and exploited in the determination and achievement of social goals through the systematic application of technology in the broadest sense, beyond the inhibitions of personal interest or private profit."28

In summary, it can be seen that the concepts of technology assessment are numerous and conceptually diverse. They have changed and developed since the earliest of historical times, expanding with the types of technologies assessed; yet, throughout history each definition can be characterized by its conception of the social context of science and technology. Some definitions have no mention of a social context, while others premise it as a basic starting point. What appears to have evolved with the continued influence of government in technology is an assessment concept that is primarily socially oriented; that is, a conception of science and technology as social tools and processes, which as such should be both theoretically and ethically under the control of man.

This paper assumes both this social context of technology and a concomitant view that science and technology are inherently value oriented processes, not the purely objective disciplines they are so often assumed to be.
This of course, is a theoretical and philosophical stance that is open to challenge. However, it serves to orient this paper; it should, however, be investigated fully and openly elsewhere, in order to evaluate the merits of this paper. Succinctly, the definition of technology assessment used in this paper is that technology assessment is a social process and method whereby the consequences of science and technology are derived, forecasted, evaluated, and planned. How Should Technology Assessments Be Used? According to the various conceptualizations of technology assessment and their intents, all are common in several respects--all such attempts have as their end pro- ducts information, directed or undirected and/or recommended courses of action or non-action. The following pages will examine the various types of bodies, agencies or individuals that would need and desire the informational outputs of technology assessments. The analysis will be directed to why the information is needed and how it would be utilized. Congress The Congress, the Senate and the House of Represente tives, is the highest legislative body to request an invest- igation of technology assessment in an effort to gain more 22 timely and useful information concerning science and technology. The mandate to acquire such information comes from the traditional role of the Congress to serve the public interest. Over the past one hundred and fifty years that public interest has been enlarged to include a monitoring of, and an intervention between man-technology interactions (see historical concept, Chapter I). The precedents for Congress making such decisions concerning the use of science and technology are numerous and widely known, the many consumer-related commerce and industry regulatory laws and military appropriations and budgeting serve as recent examples. Given this historical interest and responsibility of Congress in technology, what are the types of technology assessment information needed by Congress? Richard Carpenter has observed that the "Congress certainly does not lack for information. The openness of the legislative process provides a great variety and number of channels for facts and opinions. The public hearing is common to almost all legislative considerations."29 In reference to technological information needed by Congress, Mr. Carpenter stresses the use of directed politicized information in the sense of legislative action recommenda- tions. "The critical need of the Congress is to acquire the capability for assuming that competent and timely assessments 23 are done and for transferring assessment results into a form applicable to legislative decisions. Our technological problems are part of our political problems, with social, personal, and economic costs and benefits. The Congress is the political assessment body in our society and must have the output of technology assessments in order to do its job."30 The Congressional assessment entity, however embodied, will be responsible for initiating technology assessments, search out ongoing studies, structure hearings and citizen inputs, and review and assess assessment func- tions in private and public groups. Given this function, a moot point would be that of impartial information or judg- mental proposals. In other words, should the technology assessment body serve as an informational organ, or should it prepare action Option statements with a "best alterna- tive"? 
Present opinions held by the National Academy of Sciences would opt for the prior arrangement, giving the technology assessment entity no power or responsibility to act: ". . . Any new mechanism we propose must be carefully insulated from direct policy making processes and responsibilities . . . any new assessment entity should be empowered to study and recommend, but not to act. It must be able to evaluate but neither to sponsor nor to prevent."31 This view, held by the National Academy of Sciences, is essentially the one adopted in the recently legislated "Technology Assessment Act of 1972,"32 which authorizes an Office of Technology Assessment to provide assessment information and "identify alternative technological methods of implementing specific programs."33 The operating Office of Technology Assessment (OTA) is precluded, however, from acting to prevent any technology from being used, operating any test facilities or promoting any particular technology. It has been set up solely as an informational agency working on the premise that the action and decision making capacity is retained in the Congressional forum.

There are no systematic mechanisms in existence, either as a part of the Act or informally, for the acceptance of assessment information into the Congressional legislative organs. The operational principles for the OTA are defined in the act, but not rules of operation. However, the flow of activity might proceed as follows:

1. Requests for assessments would be submitted as provided in the law to OTA for implementation (requests would come from chairmen of Congressional committees or the Technology Assessment Board).

2. Assessment priorities would be assigned by the OTA in accordance with predetermined criteria and the assessment would be defined and formulated by the staff.

3. A contractor (or contract agency) would be selected by the OTA.

4. The assessment would be carried out by the contractor, monitored by the OTA staff, and a report would be written in close liaison with the OTA staff.

5. The results of the contractor's efforts would be evaluated by the OTA, and a summary report and analysis of the results would be prepared.

6. The summary report and analysis by OTA would be transmitted to the requesting committee, with or without recommendations, as appropriate.

This type of operational process is really dependent upon the analytical abilities and management skills of the OTA staff and director; it also presumes that workable relationships are established between assessors, staff and congressmen. Note that in reality the operational process is very similar to the methods now utilized by ad hoc assessment groups: contractors to the National Science Foundation, the National Academy of Sciences, etc.

In summary, it can be seen that the Congressional need for technology assessment information will become increasingly critical as technical features of proposed programs become more complex, pervasive, and intractable. In particular, the Congress is readily able to serve as the base of such an informational function for several reasons:

1. The widest possible base of information and opinion must be accessible to projects. The Congress could command this knowledge.

2. The political decisions affecting the future of technology rest with the Congress.

3. The Congress is sensitive and rapidly responsive to the people and is immediately accountable to the electorate (theoretically).

4.
The feeling that applied science is under control (through Congressionally monitored assessments) will restore public confidence necessary to a risk taking progressive society.3“ State Agencies or Bodies The jurisdictional definitions of state agencies and legislatures are much smaller than the large comprehensive scope of the Congress. However, historically, technologies have been developed by individuals and industry with the purpose of distributing the innovation nationally. In addi- tion, the rapidity with which new innovative technologies are distributed throughout America is well documented by authors such as Alvin Toffler, Future Shock (1970)35 and John G. Burke, The New Technologygand Human Values (1972). Given this national scope of most technologies some state assessment efforts might be charged with repetitive use of technology assessment. However, it can also be argued that 27 many technologies, even if national in scope, are often more thoroughly and heavily applied in certain regions or states, given a diversity of geographical, cultural, eco- nomic and social needs. This stresses the need for tech- nology assessments on a state level to make legislators more aware of the particular consequences of technologies applied to unique, less aggregated levels. Assuming that technology assessment systems will produce the same types of information and/or options, albeit on a less aggregated level, there are several types of state bodies that can utilize such information. Naturally, a state legislative body analogous to the Congress could use a similar assessment body and information. It can be noted that "three legislatures . . . Kentucky, California and New York . . . have sought, with financial assistance from the National Science Foundation, to strengthen their ability to develop sources of technological intelligence independent of the executive branch of their governments.36 If this body, or a similar one, were to function well, it might provide such diverse agencies as the bureau of the budget and the office of economic devel- opment with comprehensive evaluations of technology oriented strategies. For instance, a budget recommendation to finance or not to finance irrigation and pesticide programs or state employees' health insurance programs might be more 28 credible and accurate if based upon an adequately performed technology assessment. A state economic development agency could better judge the feasibility of promoting and attract- ing certain types of industries if such decisions were based on technology assessments. A recent survey conducted by Peat, Marwick and Mitchell revealed that the program areas usually involved in assessments on a state level were health, safety and environmental problems--most of which related to land use planning. Other major problem areas under study were transportation and pollution control.37 Thus, it would appear that by the types of assessments initiated on a state level, a state planning function might serve to coordinate state assessments or house the technical staffs competent to perform such assessments. Given the far ranging effects of technologies applied in various state locations, technology assessment at the state government level would be more consistent with its ultimate purpose--the rigorousness of the comprehensive approach. This might be facilitated in several ways. The survey by Peat, Marwick and Mitchell showed a close parallel between the subjects of technology assessments done by the Federal government and state governments. 
Given that the Federal government invests large sums of money in the plan- ning and study of such items as regional planning and the 29 environment, then perhaps states might improve their capability for preparing assessments directly and indirectly through special federal programs which provide funds for planning and research, and through cooperative efforts with regional and district offices of federal agencies where special expertise can be tapped through c00perative arrange- ments, such as sharing data, personnel and facilities. As with many prior federal programs an initial investment of leadership, assistance and money in promoting technology assessment at the state level will probably result in independent assessment projects by the states. Another method for technology assessment at the state level might be a process of administrative review of regional and local technology assessment efforts. This particular mode of operation is premised upon a viable state planning process of review of local plans. A strengthened version of technology assessment, including approval or disapproval, might be of merit in such a process. It has been noted that states that do have a structure for technological advice can be classified as having one, or several, of six basic types. The basic types are: l. The Consultative Model-—advisory units to the governor or legislature. 2. The Managerial Model--technical capability in 30 the central budgeting and resource allocation units close to policy making leadership. 3. The Research Model--orientation towards targeted research and development as a strategy to prime innovation and economic growth. 4. The Mission Agency Model--mission agencies as primary vehicles for applying technology to state problem solving. 5. The Service Model-—service oriented arrangements to catalyze technology transfer or to furnish technology assessments. 6. The Network Model--network systems approaches to technology utilization.38 Although it is useful to speculate that the models might serve as different forums for technology assessments, it should be noted that the usefulness of each depends on competent personnel, legislative or institutional mandate, and adequate funding. Most of the approaches listed above have been used to promote technological options, not to assess the relative merits of particular technologies. Still other methods of technology assessment at the state level might be stimulated by the passage of state acts similar to the National Environmental Policy Act of 1969 and the Technology Assessment Act of 1972. The aim would be that all technologies applied using state monies or all 31 technological applications of significant impact, should have technology assessments performed. Several states have passed comprehensive environmental protection laws that require that environmental impact statements be filed for significant projects. California and Washington are states whose legislation requires such statements. In several instances, technologies have been assessed under these laws. A brief review of the above material would indicate that, in reality, there exists no systematic mechanism to perform technology assessments, or to handle the information provided by others. As of this writing, it appears that no state has a technology assessment law or a body conscien- tiously pursuing such a function. 
In short, "looking at the long record of state and local governments in reacting to new technology it is clear that these governments--like the federal government-—look mainly to their mission oriented agencies to be aware of, to evaluate, and to propose the use of new technology relevant to the agencies' statutory respon- sibilities, and less frequently, to propose the generation of new technology through research and development."39 No attempt is made at evaluation, only application is consid- ered. One such agency that has been charged with the technological advice function is that of state planning. However, in the past ten years the role, function and 32 credibility of state planning has reached a low ebb; many states have no systematic regular state planning effort at all, and in many states the function has been doled out to other mission oriented agencies.”° Thus, technology assess— ment as an added responsibility of the state planning process, as it now stands, would be an exercise in futility. The only possible merit of concentrating the assessment function in this type of office would be that theoretically it belongs here, and perhaps if national land use legisla- tion is passed, then state planning agencies will undoubt- ‘eflly have a stronger role and a greater voice in decision making. It would be difficult to propose for state legis- latures a technology assessment function similar to that enacted for the Congress. In general, it can be said that state legislatures are much less organized and capable of handling large amounts of increasingly technical information. The state legislature and its committees--encumbered by heavy agendas, high turnover, and short sessions-~must pass upon policies of a novel and sweeping character, exercise oversight of technical operations of state agencies, and write and vote on measures that have a high technical con- tent."1 As matters now stand, the legislatures must, for the most part, improvise arrangements for technical inputs. Therefore, the type of structure called for in a legislative 33 technology assessment program would not be particularly effective, given that it would need varied inputs from other well established legislative research services. In a positive vein, it must be noted that "legis- lative research units are increasing in size and number, and in many cases are placing increased emphasis upon specialized professional staff. Professionalization is clearly evident in the activities of the California Assembly Office of Research, the Connecticut Office of Policy Research, and the New York Legislature's Standing Committee Central Staff.“2 Other Policy Making Bodies Policy making bodies on many governmental levels, including local planning levels, can utilize technology assessment. Here the emphasis would be placed on the utilization of the information produced, rather than as an all encompassing decision making method. Local governmental bodies would be hesitant to adopt the latter approach due to factors of cost, expertise, public/private sector problems and bureaucratic inertia. The use of technology assessment as an information tool, even if the information comes from an assessment body at a higher level, is the most plausible reason for utilization at the local level. That local governments are at all involved in tech- nology assessments is derived from the mandate supplied by 34 the National Environmental Policy Act of 1969 (NEPA) and various state and local statutes. 
The local involvement is based primarily on the fact that many local projects are backed by federal money. In addition, there exists local citizen support for programs to assess impacts on the environment. Specifically,NEPA.provides that "all agencies of the Federal Government shall . . . (e) include in every recommendation or report on proposed projects for legisla- tion and other federal actions significantly affecting the quality of the human environment, a detailed statement by the responsible official on . . . (i) the environmental impact of the proposed action . . . [and] the responsible Federal official shall consult with and obtain the comments of any Federal agency which has jurisdiction by law or special expertise with respect to any environmental impact involved.“3 In reality, the mandate is carried to the local administration as an ideological influence or method, rather than an imperative policy. Realizing this and other mandates, a local or regional decision making body can ideally use technology assessment to encourage an Effective Public Decision Process (Policy Formulation and Program Implementation) by recogniz- ing that alternative solutions and alternative social states are determined by: 35 0 Participants (public and private sectors) with varying Perspectives (objectives, functions, and resources), 0 Operating within changing Social Contexts of Controlling Conditions and Trends, . Apply their Resources in Relevant Assessment Forums and Decisional Arenas in accord with Apprgpriate Strategies 0 So as to achieve Assessment Outcomes which will o Distribute Social Costs and Benefits in accord with the participants' preferences.““ The ultimate purpose for the proposal to use assess- ment methodologies on the local level would be to reorder the problem oriented outlook of the agency to one that is inclusive process oriented. Vary Taylor Coates proposes that one of the most significant effects of applying the contextual approach (the social contents of controlling conditions and trends noted above) to technology assessment will be a gradual shift from "one-factor-fix" thinking (legal, economic, or technological) to "problem context" and initiation-implementation-operations process thinking. The analytical implication of this shift will be, for example, "that with respect to proposals for new techno- logical applications, the relevant assesmment policy makers will consider means in terms of the total technological 36 configuration (the combination of facilitating and supporting resources through time--legal, political, economic, social, etc.) rather than in terms of the technology per se."“5 The crux of the question concerning technology assessment and public decision making actually revolves around whether localities would adopt technology assessment at all if statutory mandate and the need of monies did not require localities to engage in such processes. This is not to invalidate the process itself, but to indict the bureau- cratic inertia of our present decision making forums and the lack of leadership of the federal government. Again, Vary Taylor Coates stresses, ". . . even if we accept the 'muddling through' model as the accurate explanation of the operations of the existing, on-going public decision process, the analytical techniques of technology assessment surely offer the means of introducing a measurable increment of capability for controlling the direction and rate of social change.”6 Who Should Attempt Technology Assessments? 
The broad scope of the technology assessment function has attracted quite a number of diverse assessment entities. Each assessor, or group of assessors, has had a particular reason for undertaking technology assessments and a specific way of going about them. This has been true not only in the recent adoptive vogue of the process, but also throughout its long, if elusive, history. A recent survey by Peat, Marwick and Mitchell reveals not only the broad range of assessors over the past fifteen years, but the relative newness of its widespread usage (Table 1).

Table 1. Initiation of Technology Assessments by Year

Year    Federal Government    State and Local Government    Industry    Institutions    Universities
1955            --                       --                     1            --              --
1960            --                       --                     1            --              --
1962            --                       --                     1            --               1
1965             1                       --                     4            --              --
1966             3                       --                     2            --               1
1967             3                        1                     1             1               2
1968             8                        1                    --            --              --
1969             8                        4                     5             3               2
1970            15                        9                    11             7               6

Source: Peat, Marwick, Mitchell & Co., A Survey of Technology Assessment Today, Washington, D.C., June 1972, p. 14.

I have chosen to aggregate assessors under a different classification than the Peat, Marwick and Mitchell study. I shall examine the subject of who should undertake technology assessment utilizing a fourfold classification--industry (to include business and private consulting firms); government agencies at the federal, state and local levels; academic institutions; and citizen initiated technology assessments by groups whose interest is either localized or broader in scope, and whose participation is either problem oriented or technology oriented. Hopefully, using this aggregated form might shorten an overview of the participants in technology assessments and yet cover the vast majority of those engaged in the process.

Industry Initiated Technology Assessments

Recently there have been many loud and scathing denunciations of technology assessment as applied to industry and technical innovation.47 It seems clear that both industry and government recognize that many of the social ills wrought by technology can be attributed to the failure of private industry to assess the impacts of applied technology. The point of the matter is whether the technology assessment function should be a sole responsibility of the government or whether profit making industries should participate also. So far, industry has expressed two viewpoints on technology assessment: the contract research outfits welcome it as a new source of direct business; companies whose prior experience indicates that assessment of anything leads inexorably to more stringent regulation fear it. Nina Laserson purports that "it seems clear that technology assessment ought to be performed by profit-making organizations to the extent that it can (a) expose exploitable technological options, and (b) enable a corporation to anticipate restraints imposed by legislatures, regulatory agencies, and public pressure groups."48
As an example, industries have tested drug products in full realization that they must be able to pass FDA's minimum testing standards. However, the history of the technology assessment function in industry has shown that such efforts have been narrow in scope and profit-maximizing in character. "Market analysts have long been competent in assessing economic impacts; corporation lawyers are skilled at assessing legal implications; the aerospace industry has led the way in instituting the systems concept of 'product effectiveness' 40 which includes the assessment of all the qualities of a product that interest the customer. But businessmen have been slow to address the questions of public and political acceptability.”9 One could purport that industry will undertake technology assessment because it would not only aid in determining marketability, but also achieve such altruistic purposes as increased product safety and the feasibility of long run cultural, economic and social costs. In reality a typical response of industry to technology assessment would be its espousing the idea for the sake of deterring govern- ment interference. A recent attitude has been, ". . . stricter regulation is inevitable. But if we allow tech- nology to go unassessed much longer, the kind of statutes we will wind up with will be much more severe, much more Draconian, and much less open to creativity than the kinds of regulation that will emerge if industry cooperates in efforts to sensitize the government through technology assessment."5° The movement toward stricter statutes is already apparent in recent trends in legal branches such as contract, tort, and property law, emphasizing that industry will have to assume more and more responsibility for the adverse consequences of their activities. To date industry attempts at technology assessments have been rather self-serving and narrowly focused. To a 41 large extent industry has not been required to adhere to minimum assessment standards, publish their data, make information available to the government or other parties, or broaden the scope or funding of their studies. Don H. Overly observed, "industry while acknowledging the need for technology assessment, really emphasizes technology fore- casting-~that is, trying to determine what technologies will, under certain conditions, be available in the future. This information while useful in predicting competitors' positions, government R & D policies, and possible market or technological opportunities to exploit still permits benefits (beneficiaries) and costs (benefactors) to remain unacknowledged."51 In some instances industry initiated technology assessments would have a particular advantage over other assessor groups. This situation concerns the types of information that industries are often privy to, in the sense that they know more about certain patented and copyrighted techniques and processes. However, over the long run most university groups and some governmental agencies at the federal level are as well equipped to perform adequate technology assessments. 42 Government Responsibilities of TechnoIogy Assessment Since most big technological programs involve the federal government, and since the government does have an obligation to respond to the public, it would seem the logical first home for a technology assessment capability (see Chapter I, history of technology assessment). 
It can also be assumed that governments at other levels have the same or greater responsibility to the public, but have smaller constituencies and narrower analytical and jurisdictional purviews. These distinct circumstances have in large part shaped the response of the federal and other governmental levels to the technology assessment need.

The NEPA law of 1969 and the environmental impact studies it requires are effective in forcing agencies to collect information necessary for technology assessments, in providing experience in multidisciplinary consideration of secondary consequences of actions and projects, and in providing a mechanism for public review of executive decision making. NEPA thus serves as a strong stimulus to the development of the technology assessment process in the executive agencies. In addition, Peat, Marwick and Mitchell in a recent study of technology assessment have indicated that the Environmental Quality Act of 1969 has had a measurable, although not significant, effect on a broad spectrum of technology assessors and their activities and policies (Table 2).

[Table 2. The Impact of the Environmental Quality Act of 1969 on Each Organization Community. Source: adapted from Peat, Marwick, Mitchell & Co., A Survey of Technology Assessment Today, Washington, D.C., June 1972.]

The recent Technology Assessment Act of 1972 has provided an informational forum for the members of the Congress as they consider a wide variety of technology-related bills. The reports of both the National Academy of Sciences and the National Academy of Engineering recommended that technology assessment activities be performed at several governmental focal points within the executive and legislative branches.

These three devices in concert have served to stimulate state, regional and local governments to consider and undertake technology assessments, with the procedures adopted at the federal level serving as process models. In addition, several states have passed acts similar to NEPA, some being more stringent and better defined than the national law, others being less defined and more amorphous in content. Ostensibly the environmental impact statements that have been called technology assessments must have been filed with the Council on Environmental Quality, with the Environmental Protection Agency serving as the prime review body for the government.

Vary Taylor Coates, who has done an extensive study of the technology assessment function in the federal government, presents the following precise overview of who is responsible for such studies. "Eighty-six offices in federal executive agencies were identified as chiefly responsible for projects and programs of a technological nature. These offices were located in seven cabinet-level departments, nine independent agencies, eight commissions, and four components of the Executive Office of the President (defense and security agencies were excluded).
In these 86 offices, extensive interviews showed that 24 percent were concerned only with primary performance characteristics of technological systems and their direct dollar costs. Sixty-three percent perform or sponsor some technology assessments; the bulk of these are partial or narrow assessments which take into account some of the secondary consequences of technological application, most often the secondary economic impacts or environmental impacts. The remaining 13 percent of the offices consistently perform or sponsor technology assessments and regard technology assessment as their major responsibility."52

V. T. Coates further reports that "in the offices where it is performed or sponsored, technology assessment is viewed as support for agency planning and programming or as ancillary to substantive, basic and applied research programs."53

When examining the technologies assessed and the methodologies used, several contradictory tendencies are detected. V. T. Coates notes that the subjects of technology assessments are not well defined, but appear to be chosen out of necessity or convenience. She identifies three major areas:

- technology related to basic human needs: food and fiber technology, housing technology, biomedical technology, water resources technology;
- technology critical to an industrial society: power technology, mineral resources technology, transportation and communication technology; and
- technologies over which the federal government exercises a unique degree of control, largely because of astronomically high costs of research and development and their derivation from earlier military applications: space and nuclear power technology.54

Coates also reported that "engineers, economists, and physical scientists make up the bulk of the staff of offices which perform and sponsor technology assessments" and that "most technology assessments rely heavily on the collation and judgmental analysis of existing information, along with field studies in the case of planned projects."55 In addition, the Peat, Marwick and Mitchell study shows that 38 percent of the total methodologies used were either forecasting or expert opinion and intuitive analysis, the latter being heavily relied upon.56 These examples appear to show a bias towards certain methodologies and a lack of multidisciplinarity.

At other levels of government the responsibility for performing technology assessment varies with existing state and local laws. States have been the prime assessors at these other levels. Agencies, departments, and program offices in 31 states reported 83 subjects being assessed for technological or related impact, according to the Peat, Marwick and Mitchell survey. It was also found that state governments' technology assessments originated from state or regional issues related to land use planning or economic development.57

It has also been reported that the assessment function was delegated to operating mission-oriented agencies, as often as not under the aegis of the governor. The importance of the state and local assessments can readily be seen if one observes that on the average (median) assessments at this level took 24 more man-months to complete and required $77,000 more than similar federal assessments.58 The range of assessment subjects was also as broad and well formulated as those at the federal level.

A particular example is a recent technology assessment project coordinated through the Office of State Planning in Michigan.
The study was conducted by a multidisciplinary university group in conjunction with several state planners. The assessments covered the following topics--solid waste management, cable communications systems, the Wankel engine, energy and land use, noise, assessment methodologies, early child development in education, and civil liberties and data processing systems.59

Administrative Problems

All government agencies engaged in technology assessment are faced with certain problems concerning the administration and conduct of the assessments. These problems are in part concerned with the methodology and costs of the project, but in large part have to do with the coordination of those actually performing the task and the meshing of the goals proposed with the methodologies utilized. In other words, these questions involve the desired scope of the project versus the reality of methodological and procedural constraints.

The question of first importance concerns the scope of the technology assessment and the adequacy of the assessment. Martin Jones succinctly recognizes that an ". . . assessment study should strive to make as broad an analysis of impacts as possible--the bad as well as the good, the indirect as well as the direct, the delayed as well as the immediate, economic, social, environmental, political, legal, etc., effects on bystanders as well as on target groups or participants, etc. There are, of course, many reasons why assessment studies will often be something less than total assessments. Constraints of time, money, and available talent are among these reasons. Other reasons for restricted assessments are the parochial interests, the restricted mission responsibilities, and the narrow vision of organizations that sponsor some research studies."60

Thus a desirable objective would be to favor a "total social impact" statement over one that would be partial and narrowly directed. A more viable concept, given the types of restrictions offered by Mr. Jones, would be to accomplish an "adequate" technology assessment. Figure 1 offers, in a diagrammatic manner, a description of the components of an adequate technology assessment. If assessors address each of these steps in turn, viewed as minimum criteria, then progress toward adequate technology assessments will be made. It must be realized that the requisite skills needed to perform each of these steps are often absent in many assessment entities, and that assessors are also constrained by the scarcity of adequate information. Yet these steps can still serve in the evaluation of any technology or technological system. Using these basic steps as a functioning network, assessors might then narrow the scope of the study according to parameters such as risk, purpose, impact levels, documentation, differentiation, time period covered, ranges of groups impacted upon, etc.
[Figure 1. Components of an Adequate Technology Assessment.]

Government entities responsible for technology assessment must also make decisions as to who will perform the actual assessment once a decision is made to proceed with the process. The agencies have several distinct choices of assessor, each with unique advantages and disadvantages.

The most obvious group to perform technology assessments for any government agency would be the members of the in-house staff. That large numbers of agencies choose to perform technology assessments and other studies utilizing these personnel denotes some real advantages to the agency. Such studies have been found:

- to offer greater credibility for the agency management;
- to be more likely to produce institutional change in the agency;
- to protect individual assessors from constituency pressure through bureaucratic anonymity;
- to keep the data base available to the agency;
- to develop and maintain in-house expertise; and
- to allow assessment activity to be flexibly scheduled in terms of time, resources, and workload61 [i.e., keeping all of the assessment functions within the agency produces savings in time and in the costs of coordinating activities].

Technology assessments produced by in-house groups also have some inherent difficulties or disadvantages:

- lack of multidisciplinary staff in most offices,
- relative lack of external credibility,
- possible institutional bias,
- ease of suppression of assessments by an administration displeased by the findings or implications.62

Charles V. Kidd, in a general statement, has criticized technology assessment in federal agencies on the grounds of biased constituency representation: ". . . any assessment of the effects of technological development done by any agency is likely to be both biased and limited. The bias derives largely from the constituency of the various agencies. The Department of Agriculture cannot be expected to give as much weight to the general environmental hazards generated by use of pesticides as it does to the immediate increase in costs of producing agricultural products that would result from banning their use. Agencies represent interests, and this is a fact more to be recognized than deplored."63 This criticism can also be leveled at state and local governments relying on mission-oriented or regulatory agencies to perform the assessment tasks.

One method by which to combat charges of bias and narrow outlook is to assign the task of assessment, or parts of it, to contractors. The advantages of technology assessment performed by contractors are:

- less institutional bias and greater objectivity,
- greater external credibility,
- more disciplines can be used than are present in most agency offices,
- the regular work of the agency staff can proceed without interference.
Concomitant disadvantages of contractor groups are:

- severe difficulties of coordination and management when agency and contractor are geographically separated,
- contractors tend to tell agencies what the agency wants to hear (as the contractor perceives it),
- contractor reports can also be ignored or suppressed by agency management.64

It might be emphasized that these drawbacks are inherent in contractor-client arrangements, i.e., the same criticisms can be leveled against contract planning firms, contract accounting firms, etc.; thus these criticisms are not endemic to technology assessment functions. Another fault of assessment by contractors lies in the representation of affected parties; given that the assessment task is divided between agency and contractor for the sake of multidisciplinarity, technology affected parties are not well represented in this fragmented responsibility chain. The contractors usually would not give much heed to the constituency of the agency and even less to non-associated parties.

Other difficulties of contractor assessments concern the development of methodological expertise and the initiation of adequate data bases. If these duties are relegated to the contractors and the agency chooses not to use the same contractor again, the data base would probably be expunged and the expertise developed would remain with the contractor rather than the agency. This is so because most contractors prefer to keep the intricacies of their analytical methods secret in order to remain competitive on the contractors' market.

Many technological problems and opportunities do not arise within the jurisdictional limitations of single agencies; science and technology developments often do not coincide with the functional governmental frameworks established for altogether different purposes. In consideration of these trends, it has been proposed that technology assessments be performed as a cooperative effort among differing agencies; the corresponding advantages of such an arrangement would be:

- a potentially high level of visibility and influence, depending on the level of personnel assigned,
- the opportunity for continuing monitoring and assessment,
- the opportunity to coordinate and rationalize the policies of several agencies.

Offsetting disadvantages of interagency assessments are:

- they are difficult to initiate because of the lack of a sponsoring authority,
- they are avoided because of conflicting agency missions, responsibilities and interests,
- agency viewpoints and interests are seldom overridden, especially if the tasks of analysis are divided among participating agencies.65

Charles V. Kidd further decries the use of interagency cooperation to obtain any specific output--technology assessments or other problem solutions. "The capacity of peer organizations in the United States government [one might add agencies at all governmental levels] to resolve conflicts or to solve problems by cooperative efforts which they initiate and carry out without external influence is strictly limited and in inverse proportion to the significance of the problem."
He further states: "not only will agencies tend to disagree on many issues involving jurisdictional issues, philosophical views, political matters such as relationships with constituents and Congressional committees, but they will at times tacitly ignore such problems or fail to attack them vigorously."66

Vary Taylor Coates informs us that "blue-ribbon panels" of experts from outside the government, especially from industry and universities, are sometimes convened to conduct assessments, especially those focused on societal problems related to technology. The advantages of using expert panels are:

- they allow mobilization of expertise from many sources at low cost,
- they tend to have high visibility, prestige, and influence,
- they offer the possibility of co-opting powerful segments of society for support of policies or decisions emerging from the assessments,
- they allow representation of affected interests.

There are some critics who have polemicized against the use of expert panels in the judgment of science and technology, presuming an insurmountable bias of the technologists. The comments of Harold P. Green are illustrative of this approach: "I am distrustful of experts--scientists and engineers have a bias in favor of accomplishing what they think can be accomplished. This assumption that the problem of effective social control will take care of itself at an appropriate time is politically incorrect. In a government whose Executive and Legislative branches are committed to achieving the benefits of science and technology, excessive reliance is placed on the judgment of experts because of the unfounded myth that ordinary mortals are incapable of understanding the issues."67 Other disadvantages of expert panels are specifically:

- a tendency toward conservatism in approach to problems,
- analysis that may lack continuity, diligence and consistency.68

Vary Taylor Coates, in her study of the technology assessment function in the federal government, stresses two points that impinge upon the ability of governmental bodies to adequately perform this function. Each relates to the necessity of substantial administrative support from the Congress and the Executive. She urges:

1. "Future research must be upgraded and emphasized to allow improved forecasting of technological innovation and application, improved anticipation of possible impacts, and improved understanding of the alternative social contexts in which these trends may be experienced [to anticipate problems before they become urgent and encourage alternative technological plans in advance of immediate needs].

2. The demand for technology assessment from the agencies should be substantive rather than procedural."69

These suggestions were offered in light of the evidence that oftentimes the goals of multidisciplinarity and comprehensiveness are sacrificed to compromises of political and governmental procedural acceptability. See the Council of State Governments, Power to the States (1972)70 and Todd LaPorte, "The Context of Technology Assessment: A Changing Perspective for Public Organization" (1971).71

When considering the real possibility of the assessment process being coerced by political compromise, the need for either a very independent agency or a group of assessment bodies arises. Such independence would be accomplished only by direct support of the agency or agencies by both the Congress and the Executive, with adequate funding forthcoming from the Congress.
Academic Institutions and Citizen Groups

Given the previous discussion of technology assessment processes, it is perhaps appropriate to examine how academic institutions, including research bodies and the like, and citizen groups could perform such functions.

In most instances, academic institutions and research bodies perform technology assessments under the direction and aegis of governmental bodies or private industry, with the universities usually relying on government contracts, and the research bodies relying on industry. Given how the assessment function is performed in each of these bodies, either could be considered to be, at any point in time, either an expert panel or a contracted group. Each would then have the distinct advantages and disadvantages associated with that type of group (see pages 53-56). If they are very similar to other groups performing technology assessments, then what advantages do they possess over other well qualified groups and why should they attempt such functions?

Academic institutions (universities) are unique in our society in that they are the largest organizations where pure and applied science research is one of the mainstays of their existence. Basic knowledge about the physical universe and our social and cultural systems is developed in the university, which serves as a repository for this knowledge both in written form and in the form of experienced researchers.

This is very important for the technology assessment function, for in order to predict alternative futures and determine effects on the human environment, assessors must be able to determine present states and norms. Such determinations can be accomplished through basic research on the environment--physical, natural and social--to determine rates of change, base values and measurement parameters. This research in turn could be classified in a systems framework similar to such classifications as "technologies," "technological systems" and "supporting systems."

The research required could be initiated by the requirements of the particular technology assessment; however, considering the costs involved, the most feasible and potentially useful method would be for the research to be carried on through a technology assessment monitoring system. This is important because technology assessments should not be delayed until adequate information bases can be assembled or exact explicit methodologies derived. It is necessary that the assessment function be attempted now, with continuing research serving as an innovator and supplier of an ever-increasing data base.

More importantly, it is necessary that the university undertake technology assessments, either by contract to agencies or by encouraging its faculty to participate in such forums, in response to the perhaps universally accepted axiom that universities are always in the forefront of significant social change and evaluation. Hugh Folk believes "the university can make essential contributions to the creation of responsible technological debate, just as it has had to debate on social and economic policy in the past.
Responding to the demands for 'relevance' emanating even from places so unlikely as schools of medicine and engineering, the university can organize itself to educate both the assessors and the counter assessors in the values, goals and aims of a human society, in the tools of social analysis, in the technological and scientific possibilities which both motivate and constrain human action."72

Until recently it has been highly unlikely that academic institutions or research bodies could initiate technology assessments without monetary support from government or industry. Yet now there is the possibility that student and citizen groups similar to Public Interest Research Groups could raise certain quantities of money and initiate assessments on their own. University staff could serve on these research bodies gratis or be paid, the important aspect being that the initiating body is no longer the government or industry, but one that represents a wider constituency--a goal of the process itself.

However, such citizen/university combinations face one dominant problem--funding. The Peat, Marwick, Mitchell & Co. survey reported that the average cost of the university technology assessments was $150,000, usually much more than citizen groups can raise. If the merits of the citizen/university alliance are to be realized, imaginative new methods of funding will have to be developed.

Academic institutions have a credibility and status that government and contractual groups do not possess. This is due in large part to the insulation afforded the faculty and researchers by the institutional framework. These persons owe loyalty to their academic pursuits and to the structure and raison d'etre of the university itself, not to a contractor on whom their livelihood depends. Researchers in private research institutions or "think tanks" do not have this type of immunity. They must contract with government agencies or private industry in order to find the dollars to run the establishment, and must in reality rely on the continuing favor and acceptance of their products (research reports) by government and industry. Given their dependence on contracted studies to pay the bills, they would be much less likely to undertake a low-paying or unpaid assessment in conjunction with a citizens group.

In reality, however, academic institutions have to deal with the problem of institutional bias, as do expert panels of any sort. In other words, academicians are often under pressure to hold opinions similar to those of their fellow academicians. This may take the form of loyalty to the accepted views of an academic specialty, i.e., anthropology, sociology, physics, etc., or of bias towards the views held by university administrators concerning the role of the university, the role of professors, students and community, etc.

Academic institutions do have two unique advantages that make them compelling choices for technology assessment duties.

1. In performing a technology assessment and developing an assessment methodology, they are able to build upon and adapt existing methodologies that can later be used by a variety of groups and individuals. In developing this methodology and using graduate students, they train skilled personnel in its conceptualization and use. Such trained graduate students would then conceivably be able to transfer this specialized knowledge to other fields of endeavor as relevant processors of decision making and experimental research.
This will presumably transfer a greater knowledge of technology to the public.

2. Secondly, the academic institutions can serve as readily accessible sources of stored knowledge and information concerning technology and technology assessment. This is in contrast to private research institutions, which might be hesitant to supply such information because of their competition in the research institution market. Usually the information stored at universities is available to a larger number of people and affected groups.

Others have felt that these advantages of the university ought to be developed and changed to comply with the complexities of society-technology interactions, to develop responses to the dynamic "problematique" of such interactions. Erich Jantsch believes that "the university ought to become society's strategic center for investigating the boundaries and elements of the recognized as well as the emerging 'joint system' of society and technology, and for working out alternative propositions for planning aimed at the healthy and dynamically stable design of such systems."73 This suggestion would alter the framework of the university from one of orientation and training to one that is action oriented and non-compartmentalized. It would be a merging of the present research, education and service functions of the university. Jantsch's proposal would bring the following basic changes:

1. Principal orientation toward socio-technological systems design and engineering at a high level, leading to emphasis on general organizing principles and methods rather than specialized knowledge, both in education and research.

2. Emphasis on purposeful work by the students rather than on training.

3. Organization by outcome-oriented categories rather than by inputs of science and technology, and emphasis on long range outcomes.74

Whether this university structure or another is used, the academic institutions will continue to be a valuable source of technology assessments, both in their performance and in the production of qualified assessors.

Time Factors for Technology Assessment

Time factors are very important in the pursuit of a technology assessment function. The problems relate to the dynamic nature of technological application itself; science, social and cultural milieus, and affected parties are all changing rapidly, often at different rates and in different modes of complexity. Technology assessment must be able to adjust to such situations diachronically and synchronically.

Basically, technology assessment recognizes two relevant time frames through which the types of technologies or technological systems are to be assessed. Such entities to be assessed derive either from a perceived problem or from a prospective problem situation. The former corresponds to a response to problems engendered by the past use of a technology, and so is retrospective in character. The latter is necessarily projective and futures oriented, often utilizing (projecting) the "best alternative future or future impact" based on intuition, empirical research or other methods.

Actually, both time reference characteristics recognize and utilize data from the past and present, and project into the future. They differ in the amount of data drawn from the past. For example, a problem projective situation would rely mainly on information from the immediate present, given that the thing to be assessed has not existed in the past and records of interactions with society would not exist.
It is also anticipatory in nature, not relying on a crisis situation to initiate the assessment function, as does the first time frame reference. The first type is, as was said, precipitated by a problem situation, usually one that has reached crisis proportions; a somewhat discouraging "ad hocery" method is then often utilized to reach decisions.

Given that some technological impacts will arise only through continued application of a technology, and that assessments made throughout the application will necessarily be based on the information available at that time, a time dimension must be incorporated in a workable approach to technology assessment. Louis H. Mayo addresses this problem as follows: "Assuming that 'one shot,' total problem assessments are needed (which they are), it is not at all evident that such efforts are feasible with reference to certain applications at particular times. The assessment system simply may not have all of the necessary subsystems to produce the essential data, or the data may be available but there may still exist no mechanism within the assessment system for assembling and analyzing the full data input."75 If this is the case, an iterative process through time is perhaps best. Mayo thus proposes that "the alternative concept of a total problem assessment through time should be considered with outputs of the various subsystems being cranked into the continuing assessment as feasible. Such continuing approximations to a total problem assessment would be responsive to changing social demands and to new data developed [on] previously recognized and significant interactions in the social subsystem affected by the application."76

This is essentially a concept of technology assessment "further down the road," which implies technology assessment as a monitoring device. As suggested, the process would be a dynamic evaluative one, building knowledge and setting performance parameters and flexible control mechanisms. The operative place of this type of function could be any of the proposed institutions for technology assessments.

If we accept Mayo's total assessments through time or any other method, we are still faced with the problem of how long the individual assessment process itself should last. In other words, how long should it take to reach decisions, action options or decision points?

If technology assessment as a decision process or as an aid to decision making is to be useful, it should be accomplished within discrete time periods, with specific scheduling and performance goals to be met. However, the time allotment for the process should allow sufficient time for repeating all the steps several times. This is a necessary quality control factor, since each step in making a technology assessment study is closely linked to every other step, and insights obtained in completing later steps may frequently necessitate revising judgments made in completing earlier steps.

It can readily be assumed that structural mechanisms for the administration of technology assessment are extremely important in designing the overall process, yet the methodological tools with which to accomplish assessment goals must not be overlooked. Such tools can be considered to be the operational methodologies utilized to measure and define the impacts of technology on human and natural systems. The necessity of these tools leaves us with a need to investigate and analyze both existing and proposed methodological approaches.
Footnotes

1 Gabor Strasser quoted in Martin V. Jones, A Technology Assessment Methodology--Some Basic Propositions (Washington, D.C.: The Mitre Corporation, 1971), p. 2.

2 Vary Taylor Coates, Examples of Technology Assessments for the Federal Government (Program of Policy Studies in Science and Technology; Washington, D.C.: George Washington University, January 1970), p. 2.

3 Clarence Danhof, "Assessment Information Systems," Technology Assessment: Understanding the Social Consequences of Technological Applications, ed. by Raphael G. Kasper (New York: Praeger, 1972), p. 8.

4 Emmanuel G. Mesthene, Technological Change: Its Impact on Man and Society (Cambridge, Mass.: Harvard University Press, 1970), p. 25.

6 Louis H. Mayo, "Commentary on Paper by Dr. Frederick Seitz," Harmonizing Technological Developments and Social Policy in America, ed. by James C. Charlesworth (Monograph 11, American Academy of Political and Social Science, Philadelphia, December 1970), p. 175.

7 Lynn White, Medieval Technology and Social Change (New York: Oxford University Press, 1966).

8 Lewis Mumford, Technics and Civilization (New York: Harcourt, Brace & World, 1967).

9 Leo Marx, The Machine in the Garden (Oxford: Oxford University Press, 1968).

10 Melvin Kranzberg and Carroll W. Pursell, Jr., eds., Technology in Western Civilization (New York: Oxford University Press, 1967).

11 Peter F. Drucker, Technology, Management, and Society (New York: Harper and Row, Publishers, Inc., 1970).

12 Melvin Kranzberg, "Historical Aspects of Technology Assessment" (paper presented at the Engineering Foundation Research Conference, Andover, New Hampshire, August 1969; Program of Policy Studies in Science and Technology; Washington, D.C.: George Washington University, 1969), p. 1.

13 Ibid., p. 2.

14 Ibid., p. 6.

15 Ibid., p. 7.

16 Louis H. Mayo, "The Management of Technology Assessment," Technology Assessment: Understanding the Social Consequences of Technological Applications, ed. by Raphael G. Kasper (New York: Praeger, 1972), p. 75.

17 Ibid., p. 76.

18 Ibid.

19 Ibid., p. 77. See discussion in Arthur Miller, "Toward the Techno-Corporate State: An Essay in American Constitutionalism," Villanova Law Review, XIV, 1 (Villanova, Penn.: Villanova University Press, 1968), pp. 35-37.

20 Mayo, "The Management of Technology Assessment," p. 78.

21 Donald Michael, The Unprepared Society (New York: Basic Books, 1968).

22 Barry Commoner, The Closing Circle (New York: Alfred A. Knopf, 1971).

23 Amitai Etzioni, The Active Society (New York: Free Press, Collier-Macmillan, 1968).

24 Stanford Anderson, ed., Planning for Diversity and Choice (Cambridge: M.I.T. Press, 1968).

25 Herbert Marcuse, One Dimensional Man (Boston: Beacon Press, 1964), p. 166.

26 Hyman Rickover quoted in Leon Green, Jr., "Technology Assessment or Technology Harassment: The Attacks on Science and Technology," Technology Assessment: Understanding the Social Consequences of Technological Application, ed. by Raphael G. Kasper (New York: Praeger, 1972), p. 213.

27 Franklin P. Huddle, "The Social Function of Technology Assessment," Technology Assessment: Understanding the Social Consequences of Technological Applications, ed. by Raphael G. Kasper (New York: Praeger, 1972), p. 163.

28 Ibid.

29 Richard Carpenter, "Technology Assessment and the Congress," Technology Assessment: Understanding the Social Consequences of Technological Applications, ed. by Raphael G. Kasper (New York: Praeger, 1972), p. 33.

30 Ibid., p. 37.

31 U.S. Congress, House, Committee on Science and Astronautics, Technology: Processes of Assessment and Choice (Report of the National Academy of Sciences; Washington, D.C.: Government Printing Office, July 1969), pp. 80 & 82.

32 Public Law 92-484, 86 Stat. 797 (1972).

33 Ibid., p. 2.

34 Carpenter, op. cit., p. 37.

35 Alvin Toffler, Future Shock (New York: Random House, 1970).

36 The Council of State Governments, Power to the States: Mobilizing Public Technology (Lexington, Kentucky, May 1972), p. 79.

37 Peat, Marwick, Mitchell & Co., A Survey of Technology Assessment Today (Washington, D.C.: prepared for the National Science Foundation, 1972), p. 21.

38 The Council of State Governments, op. cit., pp. 61-65.

39 Ibid., p. 67.

40 Don L. Craig, "Perspectives of State Planning" (unpublished paper, School of Architecture and Urban Planning, Michigan State University, 1972), pp. 8-12.

41 The Council of State Governments, op. cit., p. 121.

42 Ibid., p. 129.

43 Public Law 91-190, 83 Stat. 852 (January 1, 1970).

44 Vary Taylor Coates, Some Implications of the Technology Assessment Function for the Effective Public Decision Making Process (Program of Policy Studies in Science and Technology; Washington, D.C.: George Washington University, June 1971), p. 12.

45 Ibid., p. 18.

46 Ibid., p. 26.

47 Leon Green, Jr., "Technology Assessment or Technology Harassment: The Attacks on Science and Technology," Technology Assessment: Understanding the Social Consequences of Technological Application, ed. by Raphael G. Kasper (New York: Praeger, 1972).

48 Nina Laserson, "Technology Assessment at the Threshold," Innovation, No. 27 (January 1972), p. 23.

49 Franklin Huddle, "Social Management of Technological Consequences," The Futurist (February 1972), pp. 16-18.

50 Laurence Tribe quoted in Laserson, op. cit., p. 25.

51 Don H. Overly, "Societal Indicators and Technology Assessment," The Methodology of Technology Assessment, ed. by Marvin J. Cetron and Bodo Bartocha (New York: Gordon & Breach, 1972), p. 70.

52 Vary Taylor Coates, Technology and Public Policy: The Role of Technology Assessment in Federal Government (Program of Policy Studies in Science and Technology; Washington, D.C.: George Washington University, June 1972), pp. 8-9.

53 Ibid., p. 9.

54 Ibid., p. 27.

55 Ibid., pp. 13-14.

56 Peat, Marwick, Mitchell & Co., op. cit., p. 100.

57 Ibid., pp. 11, 18, 19.

58 Ibid., p. 13.

59 "Technology Assessment," Comprehensive Planning Project, Office of State Planning, Lansing, Michigan, 1972. It appeared that this report was a prime example of loosely coordinated multidisciplinary studies; it exhibited inconclusiveness and a glaring lack of planning viewpoints. Clearly this is a case where a well structured administrative and operational mechanism is needed for assessment.

60 Martin V. Jones, A Technology Assessment Methodology: Some Basic Propositions (The Mitre Corporation, Washington Operations; prepared in cooperation with and for the Office of Science and Technology, Executive Office of the President, Washington, D.C., June 1971), p. 30.

61 V. T. Coates, Technology and Public Policy, p. 10.

62 Ibid.

63 Charles V. Kidd, "Technology Assessment in the Executive Office of the President," Technology Assessment: Understanding the Social Consequences of Technological Application, ed. by Raphael G. Kasper (New York: Praeger, 1972), p. 127.

64 V. T. Coates, Technology and Public Policy, p. 12.

65 Ibid.

66 Kidd, op. cit., p. 129.

67 Harold P. Green, "The Adversary Process in Technology Assessment," Technology Assessment: Understanding the Social Consequences of Technological Application, ed. by Raphael G. Kasper (New York: Praeger, 1972), p. 58.

68 V. T. Coates, Technology and Public Policy, pp. 13-14.

69 Ibid., p. 41.

70 The Council of State Governments, op. cit.

71 Todd LaPorte, "The Context of Technology Assessment: A Changing Perspective for Public Organization," Public Administration Review, January/February 1971, pp. 63-74.

72 Hugh Folk, "The Role of Technology Assessment in Public Policy," Technology and Man's Future, ed. by Albert H. Teich (New York: St. Martin's Press, 1972), pp. 253-254.

73 Erich Jantsch, Technological Planning and Social Futures (London: Cassell/Associated Business Programs, 1972), pp. 229-230.

74 Ibid., p. 230.

75 Mayo, "The Management of Technology Assessment," Ibid.

CHAPTER II

METHODOLOGIES OF TECHNOLOGY ASSESSMENT

Having discussed the purposes, institutional framework and systematic operation of technology assessment, it is important that an analysis of "how" this process operates be fully explained. This is not to sublimate the substantive aspect of the process to the methodological, but to present a balanced examination of how to achieve the end products of technology assessments.

An overview of these methodologies is presented not as a means of advocating their use in technology assessment, but as examples of the rich mixture of methodologies and analytical viewpoints from which technology assessors might choose. This is with the full realization that no method discussed in this thesis is the perfect or most desired methodology, but that a synthesis of some type, as presented in Chapter III, is preferable to the narrow espousal of one particular methodology over another. It must be realized that not all methodologies for technology assessment are represented and discussed in these pages; a sampling device was employed to provide examples of assessment methodologies conceived within the four realms upon which technology impinges--society, culture, man, and nature. The theory and substantive aspects of each methodology will be examined, and its relationship to technology assessment deduced. This chapter will be divided into essentially two sections--one dealing with present methodologies used for technology assessments, and another concerning developing methodologies. Procedurally this analysis will consist of an evaluation of existing methodologies and some subjective ideas concerning the transformation of these existing methods to conform to normative evaluation and planning processes.

As emphasized in the first chapter, technology assessments of the past have been partial assessments at best. Examination of both governmental and private attempts at the process reveals many shortcomings. Essentially, the majority have been disjointed analyses of one or more factors perceived as critical by very specialized analysts usually representing only a small number of disciplines. In other words, there have been few attempts at holistic approaches utilizing multidisciplinary methodologies. The problems here, then, concern the conflict between fragmented assessments and total problem approaches, or the context in which technology assessment methodologies are applied.

It must be realized that, in the past, those attempting assessments were concerned with a particular problem of a utilized technology or a particular projected problem of a new technology, rather than the total problem context.
This weakness has been compounded by a fragmented decisional arena and a dearth of reliable methodologies, characterized by what has been called "The Tyranny of Small Decisions"1 or an abuse of incremental decision processes. To correct these contextual deficiencies, it must be realized that partial assessments, or "one shot" assessments, can be of value if the proper problem context is pursued, recognizing the imperfections of data and time constraints (see Chapter I, pages 65-67). If these deficiencies are recognized, corrected to the fullest extent possible and delimited by uncertainty boundaries, such assessments will be more consistent with the premises of the "adequate" assessment concept and the exigencies of the societal context.

Methods Presently Utilized as Technology Assessment

Some of the methods presently being utilized as partial impact statements are investigated in the following pages. These methods have been used in the past as the primary means to measure costs against benefits in variable situations concerning technologies, the environment, social processes and a myriad of other human inventions. The lack of success of these methodologies is, in part, the reason for new and broader attempts at social impact assessment.

Table 3 illustrates the types of methodologies being used in technology assessment attempts today. It can be generalized from this information that reliance has been placed on intuitive contributions or expert opinion and on forecasting, predominantly subjective, non-quantifiable methodologies. The Peat, Marwick, and Mitchell survey also showed that the least reliance was placed on public participation and polling results.

[Table 3. Types of methodologies used in technology assessments. Source: Peat, Marwick, Mitchell & Co., A Survey of Technology Assessment Today, Washington, D.C., June 1972, p. 100.]

Cost-Benefit Analysis

Cost-benefit analysis has long been a primary method by which economists and governments have measured the feasibility and probable costs and benefits of proposed projects or programs. "Cost-benefit analysis was developed as a technique to serve this very purpose with particular emphasis on the evaluation of plans for a single sector. It was originally conceived during the 1930's and 1940's for the evaluation of alternative courses of action in the design of water resource projects and serves the single goal of economic efficiency. The goal was defined as the maximization of the net project contribution to the national income."2

During this lengthy period of use it has been noted many times that this approach has a multitude of attendant weaknesses, and alternative approaches of cost-effectiveness analysis have been proposed to validate choices made utilizing the method. Raleigh Barlowe observed some time ago
msmpmmm MUOHOUonumz mmHuHm mcoHu wuumsocH unmacuo>ow ucmecuw>ow mmoHowoaumz mo mama comm mo Inm>HcD IsuHumcH Hmooq Hmumcom 55m Hmuoa cam mumum momma monoHooonuwz .m mHnma 79 that this method has inherent weaknesses in measuring non-economic factors, "major emphasis should be given to further refinement of the techniques now used in measuring intangible and extra market project effects. These effects have a major bearing on the social worth of numerous proj- ects. Yet their values cannot be readily expressed in monetary terms. How much economic value should we assign to the provision of improved fishing or hunting Opportuni- ties? What is the benefit value of a scenic view or wilderness area? These factors should enter into the "3 Not only benefit-cost analysis more than they do now. does such a method encounter trouble allocating non-economic costs, it also has difficulty measuring secondary economic costs or indirect economic costs. When cost-benefit methodologies have been used to evaluate incipient or established technologies and the problems engendered by them, using the rubric "technology assessment," then serious questions can be raised as to the usefulness and validity of such studies. Numerous accounts of where cost-benefit analysis has failed to account for all costs or benefits are listed in such volumes as Thomas Detwyler, Man's Impact on Environment (1971),“ M. Taghi Farvar and John P. Milton, The Careless Technology.(l972)s or Arthur Maass, "Benefit-Cost Analysis: Its Relevance to Public Investment Decisions" (1966).6 80 Some examples of the types of costs and benefits that accrue to technological projects that have not been accounted for by cost-benefit methods are: <=_o.s_t§ 0 pollution, o unsuccessful Research and Development (extra costs distributed among profitable costs), 0 resource shifts-~unnecessary depletions. Benefits (often received, not paid for) o toll fee bridges and freeways (certain business interests benefit) 0 patent disclaimer (many benefit with no investment) 0 innovations capitalized on another's already developed and produced good or service. It has been observed that because of government regulation, increased public criticism and consequent fear of stricter controls, and even because of new and diverse social parameters for industrial management, the cost- benefit calculations made by the "technostructure" have tended in recent years to give greater weight to secondary and tertiary consequences of investment decisions and man- agement policies. However, in reality, "rarely has the social and legal context within which assessments are made fundamentally altered the relatively narrow frame of 81 reference for evaluation. With few exceptions, the central question asked of a technology is what it would do (or is doing) to the economic or institutional interests of those who are deciding whether or how to exploit it."7 The pervasiveness of this type of economic interest, is also evidenced in government decisions on projects; those that favor the economic gain from government sponsored tech- nologies are always those with the most well endowed lobby- ing effort and richest and influential constituents. When cost—benefit studies are used for technological decision making or labeled as technology assessments, several more limiting factors can be identified. Cost-benefit analysis is not a dynamic process; the decisions made are static statements of immediate benefits or costs. 
Few such studies have projected costs and benefits over time, taking into account non-static parameters such as changing technologies or economic conditions, not to mention cultural and social changes over time. The mathematics of cost-benefit analysis are more easily handled if they are not cluttered or complicated with time frame calculations. An attempt to assess a field of science which is rapidly moving from the fundamental to the applied level, namely oceanography, on the basis of a cost-benefit analysis failed because it used an erroneous mathematical basis.8

The action options of this method are stated only in terms of a "go" or "no go" option, or as a simple numerical ratio of benefits to costs. The quality of the options is not delineated, nor are the options more numerous than the above examples. In addition, cost-benefit analysis has not been able to determine allocations of investments among various public sectors or among the diverse technologies representative of those sectors. For example, this method could not choose between an innovative school program and a new transportation technology of the same cost.

Cost-benefit analysis measurements and options are stated only in monetary or market terms. If the social and cultural milieus and parameters do not operate according to the market principles necessary for cost-benefit theory and cannot be quantified in monetary terms, then the analysis explicitly ignores them. In many instances no realistic costs in monetary terms can be assigned to these factors; it would be very hard to assign a dollar figure to the cultural cost-benefit of rural electrification in East Africa, for instance.

Others have observed that the intricacies and sophistication of cost-benefit analysis add to the difficulties of decision makers already faced with hard decisions on complicated technologies and technological programs, i.e., space exploration programs or pollution abatement technologies. Don H. Overly presents a succinct picture of this dilemma: "the mathematics of benefit-cost analysis, however, generally do not acknowledge the issue of selecting the appropriate benefits (beneficiaries) and costs (benefactors) for consideration. Formal policy level consideration of a program's benefits and costs usually is made for the first time when proposals for budget support are made to the appropriate committee or some equivalent. However, elaborate benefit-cost analyses, by the time they are presented to a corporate budget group, a regional industrial zoning board, a Congressional committee, or a regulating commission, are seldom in a form which permits brief and intelligent inquiry into the selection and quantifications of the benefits and costs used in the analysis."9

The future usefulness of such methods will necessarily depend upon the integration of these methods with others that are able to account for social and cultural factors and are relatable to the goals of both assessors and the public. Specifically, the factors that cannot be quantified in terms of the economic marketplace must be assessed by some other system.

Environmental Impact Statements

The requirements of the National Environmental Policy Act of 1969 have engendered a large number of environmental impact statements, which can be construed as the closest approach yet to the total impact assessments that will be required by well defined technology assessment (see Chapter I).
However, environmental impact statements have not proven to be as comprehensive as technology assessments need to be, given the very nature of their focus on the physical environment, without adequate attention to the economic, social and cultural spheres. As assessments, the impact statements are far from ideal; taken as a new body of literature, they exhibit virtually no uniformity in terms of quality, scope or cost; some of them are merely old data in new packages. Many of them tend towards the evaluation of the straightforward technology and the direct dollar costs implicit in the various projects. This stands in direct contrast to the letter and intent of the law, in that effects on the "human environment" are not fully assessed. The human environment is not confined to human interaction with the natural environment (depending upon the epistemological theory accepted). For example, Laurence H. Tribe has said that "technology assessment proceeds from the premise that much can usefully be done about particular areas of technological development and their indirect consequences without necessarily undertaking an examination of the entire body of contemporary technology. This differs from environmental protection because it takes human values and needs as paramount and regards man's physical environment as an important medium through which his technology may affect his varied interests rather than an end in itself."10

It must be considered, therefore, that the wording of the NEPA law is imperfect and ill defined and that serious deficiencies of available meaningful data exist. It is interesting to note that both EIS (environmental impact statements) and technology assessments (as most experts construe them) receive their legislative mandate through the NEPA law, which, being a very vague law, does not explicitly define either EIS or technology assessment, nor what should be contained in such "assessments" per se.

Orlando Duloga has recently pointed out some explicit deficiencies of NEPA as it relates to the environment [and to technology assessment also]. "NEPA does not raise the protection of the environment to the status of constitutional rights/does not stop or preclude action/does not authorize courts to establish precedents for this desired action/does not allocate funds/and does not have procedural guidelines."11 Strangely enough, even though NEPA does not authorize courts to set precedent, they have done so, largely due to the actions of a vociferous public more aware of increasing environmental degradation and a technological omnipresence. In the overall context, it has been left to the courts to decide what an EIS should contain as a minimum. For example, in July of 1971, the Court of Appeals for the District of Columbia told the Atomic Energy Commission that it was unable to reach decisions regarding project licenses because the AEC statement did not include sufficient consideration of environmental values.12 Given this set of circumstances, court decisions on the adequacy of individual technology assessments, whether according to precedent vis-a-vis EIS or by totally new interpretation of NEPA, would be necessary. In reality, perhaps this situation is not as foreboding as it appears. To date, the courts have upheld the concept of a broadened base of participants and an increased base of relevant evaluative factors in the protection of the environment; the range and sophistication of the forums is increasing.
Although some critics have pointed out that large numbers of projects are being held up in court because of this litigation over the adequacy of the statements, Laurence Tribe believes that private litigation has several advantages in controlling technological developments: "these are of three principal sorts: (1) the enhancement of the sense of participation among the citizenry that accompanies such litigation; (2) the potential role of such litigation as a catalyst for change; and (3) its potential use as a focal point for the gathering, evaluation and dissemination of new professional attitudes and new entrepreneurial assumptions with respect to the obligations that accompany the use of science and the development and application of technology."13

Methodologically and procedurally there are several obvious weaknesses that inhibit the use of EIS for technology assessment. One immediately notes that EIS methodology has proceeded from a lack of baseline information. There exists no body of data for a before-and-after comparison of impacts. This situation makes forecasting difficult, in that basic natural and social phenomena have not been identified to the extent that reliable forecasts of impacts can be repeated for differing projects using the same methods. This relates to the need for basic research on the environment and the social-cultural realm to identify basic structural interactions or parameters. To date those performing EIS have tended to aggregate technology-society-culture interactions under general headings and make subjective judgments as to the rate, intensity and type of impact; see the procedure used in Luna B. Leopold et al., "A Procedure for Evaluating Environmental Impact,"14 a simplified illustration of which appears later in this section.

Another important weakness of EIS is disagreement on standards for environmental quality; this disagreement is found among both politicians and scientists. The NEPA law has not addressed this problem, nor has the Environmental Quality Act of 1970. The guidelines used in EIS studies, as of this date, have all been based on those promulgated by the Council on Environmental Quality; being very generalized in nature, they have not stressed the real need for objective standards by which to judge environmental quality, independent of visceral economic and political values. Later this year the Council on Environmental Quality will release expanded guidelines and comprehensive indicators of environmental parameters based on those identified in the third annual report of the Council. These environmental parameters have been categorized as Underlying Factors, Resources, Ecological Factors, Pollution and Man-Made Environment.15

There is a dearth of national, state and regional policy toward the environment, environmentally related issues or technology. As it presently stands, neither EIS nor technology assessment can resolve policy decisions; they exist only to provide information for decision makers in a strictly political forum. It is not proposed that the decision making process be subordinated to scientific processes, but rather that new policies and laws addressing the issues of EIS and technology assessment be instituted. The proposed National Land Use Law is an example of the policy needed if EIS is to operate even as a functional tool of decision making.
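The Leopold procedure cited earlier is, in essence, a large matrix of project actions set against environmental characteristics, with each affected cell scored subjectively for magnitude and importance. The following is a much-reduced, hypothetical sketch of that idea; the actions, characteristics, and scores are invented for illustration and are not taken from Circular 645.

```python
# A greatly simplified, hypothetical Leopold-style impact matrix.
# Columns: project actions; rows: environmental characteristics.
# Each affected cell holds (magnitude, importance), each scored 1-10 by the assessor.

actions = ["highway construction", "channel dredging", "urban runoff"]
characteristics = ["water quality", "wildlife habitat", "noise levels"]

# Subjective scores for the cells where an interaction is judged to exist.
matrix = {
    ("highway construction", "wildlife habitat"): (7, 8),
    ("highway construction", "noise levels"): (6, 5),
    ("channel dredging", "water quality"): (8, 9),
    ("urban runoff", "water quality"): (5, 7),
}

# One crude way to summarize: weight magnitude by importance and total per action.
for action in actions:
    score = sum(mag * imp
                for (a, _), (mag, imp) in matrix.items()
                if a == action)
    print(f"{action:22s} weighted impact score: {score}")
```

The sketch also exposes the weakness discussed in the text: every number in such a matrix is a judgment made without a baseline, so the apparent precision of the totals should not be mistaken for measurement.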
EIS must also face charges against its credibility. Most statements are prepared within executive agencies with no public indication of the money spent on the project, who performed the assessment and what his competence was, or what constituted the study--its scope, relevant factors, and so on. The credibility of the EIS process is not enhanced by numerous charges, by individuals and governmental groups working with the federal government, that EIS exhibits unstandardized guidelines and bureaucratic red tape. This credibility is certainly in question if individual department procedures are examined; for example, the "Environmental Clearance Worksheet" for the Department of Housing and Urban Development consists of a two-page fill-in agenda covering such diverse items as environmental impact, A-95 review, alternatives, and views of local groups.16 This serves as an example of misrepresentation of the intent of the process and law: a half-hearted attempt to serve the law while doing as little as possible.

One final criticism of EIS is that non-federal activities are not liable to review by the Council on Environmental Quality. A substantial number of the technologies and other programs applied to the environment operate at non-federal levels. This has in part been resolved by the adoption in several states of laws similar to NEPA. Given the very nature of these weaknesses, it can be said that at present EIS does not even perform to the capacity expected by its own legislative mandate, not to mention the more stringent needs of technology assessment.

In summary, however, there are certain strengths of EIS and NEPA that will provide an atmosphere for recognition of the many relevant factors in a social/environmental impact assessment. The strengths of NEPA are:

1. The environmental impact statement process has actually brought to governmental thinking an action concern with the "quality of life" that previously was largely expressed in rhetoric.
2. The environmental impact statement process has also proved an effective way to accomplish planning across agency lines.
3. The EIS process affords the public an opportunity to participate in federal decisions that affect the environment.
4. The EIS process has forced many agencies to develop interdisciplinary staffs with a voice in policy and project planning.
5. Finally, NEPA is enforceable in the courts, which among American institutions may be the least sensitive to the influence of special interests.17

Technology Forecasting

Technology assessment must inherently rely on the concept of technology forecasting or futures research in order to achieve the stated purposes of total social impact assessments. Martin V. Jones suggests that from some points of view technology assessment might be considered a massive forecasting effort: ". . . In most technology assessments, the analyst has either to derive for himself, or obtain from someone else, a forecast of what will be the nature of the technology being assessed as of some future date. This will require an identification of where the technology currently stands, what further breakthroughs and technical improvements are likely, and what will be the state of the art at the projected future date.
Going through this process is in essence making a forecast."18

It is important to make a distinction between "technological" forecasting and futures research; futures research is more closely attuned to the total impact assessment concept in that it considers future states not only of technology but also of other elements of the social and cultural systems. Both futures research and its precursors (operational research and systems analysis) generally involve the conceptual fabrication of an intellectual, analytical, or physical model that resembles the performance of its real-life counterpart.19 This would necessitate a concept of "macro" forecasts in relation to technology assessments. An assessor would then have to make forecasts or presumptions concerning:

1. supporting technologies,
2. competitive technologies,
3. state-of-society conditions,
4. resulting impacts that will occur as all of the relevant technologies and all of the societal attributes interact upon each other,
5. incremental impacts that would result if various action options were implemented in an effort to maximize the anticipated good impacts and minimize the anticipated bad impacts of a projected technology.20

It should be realized that technology forecasting and futures research have serious inherent drawbacks in methodology and operations that limit their scientific acceptance, yet the express need for such approaches in technology assessment clearly exists. Even if one were to confine himself to assessing historical or current impacts of technology, he would have to engage in a type of cause-effect analysis that for its major attributes must draw upon the same kinds of intuitive-statistical approaches as future-forecasting does. It must be recognized that these approaches impute some order to domains that have not proven amenable to the scientific method; the techniques of futures research lack the precision and experimental validity of the laws of natural science and substitute judgment and probability instead.

Depending upon the epistemology taken, the use of futures-forecasting allows society to define policies toward a large set of alternative futures, the openness of the system of alternatives depending upon the degree of determinism assumed. Theodore J. Gordon has addressed this point rather well: "futures research is a means of discovering and articulating the more important of the alternative futures and estimating the trajectory likely to be produced by contemplated policies. Thus forecasting is perceived as an aid to decision making in the present, and not as a means of producing a list of chromium plated potential mousetraps."21

However, both decision makers and technology assessment analysts must work with the aforementioned inherent weaknesses of future-forecasting methodologies. The state of the art--to accept Erich Jantsch's assurance that it is indeed more an art than a science--is crude both theoretically and operationally.

There is internal dissension among forecasters as to how many methods exist for the purpose of forecasting. Some practitioners have claimed there are only two or four; others recognize as many as one hundred. Martin V. Jones recognizes only five core types: intuition, trend extrapolation, trend correlation, models (statistical), and analogy.22 On the other hand, Theodore J.
Gordon would recognize genius forecasting, trend extrapolation, consensus methods, simulation methods, cross-impact methods, scenarios, decision trees, and input-output matrices.23 This confusion as to relevant methodologies presents serious problems to the technology assessor in choosing a method, whether for its precision or on the recommendation of authorities. Probably more often than not, ease of use and familiarity would be the deciding factors. Forecasters themselves have realized the weakness of this series of conflicting taxonomies and have striven to overcome it by utilizing several methods at once and by improving the raw data with which they work.

There are specific faults, shared by all methodologies, with forecasting as an exercise in choice among alternatives. First, in the past, few forecasters have provided traceable records or documentation to support their forecasts. This is particularly true of those engaged in "genius" or "intuitive" forecasts. For example, in The Year 2000,24 Kahn and Wiener take only six pages to make 135 predictions covering many diverse fields of technology without adequate discussion of the methodologies employed.

In many cases, if the forecast was not derived purely on an intuitive basis, it appears to have been based essentially on an extrapolation of some current trend. Often two different authorities looking at the same statistical and experience base will arrive at entirely different forecasts because they make their extrapolations from different portions of the total historical base.25 This is a fault peculiar to trend extrapolation forecasting: no matter how sophisticated the methodology, trend forecasting adopts a theory of historical events that presupposes that the present is but a point on a continuum and that discontinuities or aberrations in the flow of events are rare.

It is often difficult to judge the accuracy of past forecasts, owing to the vagueness of the original forecast. Gordon notes "that many descriptions of events, in retrospect, were not specific enough and defined trends rather than 'happenings.' Furthermore, the occurrence of highly specialized events is noted by specialists and may not be systematically recorded or generally accessible."26 In addition, Nancy Gamarra of the Legislative Reference Service (now the Congressional Research Service) has recorded a long list of erroneous predictions and forecasts of technological and social events made by experts.27
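The sensitivity of trend extrapolation to the portion of the historical base used, noted above, can be shown with a small numerical sketch. The data series below is hypothetical and is not drawn from any forecast discussed in the text; it simply contains a slow-growth phase followed by a faster one, so that two analysts fitting trends to different slices of the same record project very different futures.

```python
# Hypothetical illustration: two analysts extrapolate the same series but
# fit their trends to different portions of the historical base.
import numpy as np

years = np.arange(1950, 1971)                       # 21 years of "history"
values = np.concatenate([                           # slow growth, then a faster phase
    100 * 1.02 ** np.arange(11),                    # 1950-1960: roughly 2% per year
    100 * 1.02 ** 10 * 1.08 ** np.arange(1, 11),    # 1961-1970: roughly 8% per year
])

def extrapolate(first_year, last_year, target_year):
    """Fit an exponential trend to one slice of the record and project it forward."""
    mask = (years >= first_year) & (years <= last_year)
    slope, intercept = np.polyfit(years[mask], np.log(values[mask]), 1)
    return float(np.exp(intercept + slope * target_year))

print("forecast for 1985 using the 1950-1970 base:", round(extrapolate(1950, 1970, 1985)))
print("forecast for 1985 using the 1961-1970 base:", round(extrapolate(1961, 1970, 1985)))
```

Both projections are internally consistent and "sophisticated" in form, yet they diverge widely, which is exactly the documentation and base-selection problem the forecasting literature has been criticized for.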
In reference to the large scope of technology forecasting, or futures forecasting in toto, Erich Jantsch, who has identified a large number of possibilities for technological forecasting, has proposed that technology assessment is a subsystem of technology forecasting: "Technology assessment, a particular task of technological forecasting--or, more appropriately, systemic forecasting--would belong to the strategic level."28 He further asserts that technology assessment is inherently weak because of its "lack of normative guidelines and criteria to be applied to matters of choice, such as alternative technologies."29 Thus in Jantsch's conception, forecasting is not a subordinate method but a subsystem of the process of rational creative action leading to innovation; it is a normative process analogous to planning and decision making. He proposes a normative systems approach utilizing forecasting, planning, or decision making to achieve rational creative action, with a norms -> policies -> strategies -> tactics (operations) hierarchy acting as a vertical integration method.30

Finally, T. J. Gordon recognizes three important caveats about forecasting the future of technologies. First, there is no way to state what the future will be. Regardless of the sophistication of the methods, all rely on judgment, not fact. Secondly, there will always be blind spots in forecasts. If we try to guess what will happen in the future, we are likely to omit events for which there are no existing paradigms, events which seem trivial but through secondary or tertiary effects become important, and events based on whim, chance, or unexpected coincidence. Thirdly, potential futures are posed to serve as a backdrop for policy making. If enacted, policies may be expected to change the future. Therefore, the notion of accuracy involves some paradoxical considerations.31

In retrospect, a most important and pervasive drawback to forecasting is the difficulty of reconciling values. The forecaster cannot know what the values of the future will be, yet in going about his job he necessarily makes value judgments utilizing essentially his own set of values, not really the larger public's. These values are expressions of the present, and if forecasting were widely used they might lead to greater value inertia in society, or a tyranny of present values. This constitutes a limitation that not only forecasters but technology assessors must face and resolve.

Developing Methodologies--Data Orientations and Requirements

Both in utilizing the past methodologies for technology assessment and in developing newer methodological techniques, assessors are faced with two fundamental problems. First, assessors must resolve or recognize difficulties with the data domains with which they are working, recognizing when and where to obtain "hard" data and how to objectivize "soft" data. Secondly, in connection with this problem, assessors must continue extensive experimental research which will provide a factual data base for technology assessment.

In addressing the first problem, Marvin J. Cetron draws a clear distinction between "soft" and "hard" data in relation to their use in technology assessments: "data required for the comparison involved in technology assessment may be labeled as hard or soft (or somewhere in between) depending upon the degree of universal acceptance of the manner in which the data was generated. Hard data would be data from established fields of the physical sciences or accepted economic indicators. Soft data would be data from some social indicators or data based totally on judgment. The more easily data can be demonstrated and qualified, the harder it may be considered to be. In a technology assessment methodology involving measurement and comparison of both types of data, the utmost caution obviously must be used in assigning numerical values to the softer data."32

I have chosen to expand upon these two basic problems by discussing two methodological inputs to the assessment function, namely the use of rational simulation and the development of social indicators. Both methodologies recognize the aforementioned problems and are in part responses to widely felt needs for answers to these problematic situations.
The choice of these two methodologies is indicative of two substantial functions of technology assessments: the determination of the present states of society and the natural environments, and the requirement of a monitoring system to recognize and measure changes in these states, for both society and the natural environment, over time. It will perhaps be most instructive to limit the discussion of simulation to the natural environment, even though full-scale simulations of society are being attempted (see Forrester, World Dynamics (1972)).33 This is because the relationship of hard data and experimentation to technology assessment is clearest at this juncture. Secondly, the discussion concerning societal indicators will better describe the need for "objectivization" of soft data.

The development of rational simulations of the natural environments is a basic subsystem of the requirement of experimental research in technology assessment. For instance, the rational simulation of the natural environment in which technology assessment must engage would require the development of parameters of performance, given certain physical changes in the components of the physical, chemical and biological systems. Parameters of performance must include detailed measurements of environmental indices, i.e., pollution, residues, number of species, etc., but simulations must be utilized where full-scale (total environment) experimentation (i.e., the implementation of a particular technology) would have irreversible effects.

In order that a simulative effort become a valid and valuable tool, a monitoring system would be required--the changes in the structural parts of the environment must be known. Although an environmental monitoring system really consists of an administrative or management system, a scientific system and a legal system, it is the measurement function of the scientific system that is of immediate interest. It is this system that must provide in-depth information about the environment for the simulative effort of technology assessment. It is essentially a measurement function of the following criteria:

1. components of environmental quality: pollution, effects, resources.
2. taxonomy of measurement parameters: macro, meso, and micro (measurement) levels.
3. geographic subdivision and location.
4. time.34

If an environmental simulation system integrating these measurement parameters of an environmental monitoring system is then available to technology assessors, their predictive efforts and action options will have more validity. The requirement of "hard" data concerning the environment would have been partially satisfied.
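As one illustration only, the four measurement criteria just listed could be carried in a record structure such as the following. The field names and the sample observation are hypothetical and are not part of any monitoring system discussed in the text; the sketch simply shows the four criteria expressed as data.

```python
# Hypothetical record structure reflecting the four measurement criteria:
# component of environmental quality, measurement level, location, and time.
from dataclasses import dataclass
from datetime import date

@dataclass
class MonitoringRecord:
    component: str        # a pollutant, an effect, or a resource
    level: str            # "macro", "meso", or "micro" measurement level
    region: str           # geographic subdivision
    location: tuple       # (latitude, longitude) of the sampling point
    observed_on: date     # time of observation
    parameter: str        # what was measured
    value: float          # measured value
    unit: str             # unit of measurement

sample = MonitoringRecord(
    component="pollution", level="micro", region="Grand River basin",
    location=(42.73, -84.55), observed_on=date(1973, 4, 1),
    parameter="dissolved oxygen", value=6.2, unit="mg/l",
)
print(sample)
```

A file of such records, accumulated over time and across locations, is what would give a simulation the before-and-after baseline that the EIS critique earlier in this chapter found lacking.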
Social Indicators

A reliable set of social indicators will have to be developed on a multitude of levels in order to relate national and social goals (identified at several levels) to the technology assessment process. The immediate connection between the two processes would be that changing social indicators would act as informational inputs into the social factors section of the major impact categories; other major impact categories are values and goals, environment, demography, economics, and institutional factors.35 There is more than a modicum of disagreement as to the taxonomy of both societal indicators and major impact areas for technology assessment. This in turn raises some important questions concerning the efficacy of social indicators in relation to quantifiable data and other matters. Confusion and difficulties surrounding social indicators include: lack of agreement on an acceptable definition and methods of construction for social indicators; uncertainty as to whether indicators should include qualitative measures as well as quantitative; disagreement on the concept that indicators must be "normative"; lack of understanding and agreement on the uses to which indicators can be put; the question of the validity of the indicators; and the enormity of the task of providing indicators and improving the quality of social statistics and the reliability of social science information.36

It can be inferred from all of this that the connection between technology assessment and social indicators is very explicit. We need to apply quantitative standards that indicate objectively and comprehensively what the status of a society is in relation to the results of technological changes. It should not be inferred, however, that qualitative measures are not needed; they are. It must be recognized, though, that quality indicators are also time-oriented entities related to goals and standards (which are dynamic in nature). For instance, it could be proposed that standards are made up of uses, criteria of measurement, and implementation plans, all of which change historically according to the values espoused by society. Thus, it can be further proposed that much of the confusion surrounding social interactions is due to the nature of their perplexing dynamism, especially in relation to what has been seen in the past as a thoroughly (and linearly) explainable and static technology independent of social constraints. Therefore, some have been loath to connect the two concepts because they appeared to be in different time modes and thoroughly independent of each other.

Despite the conceptual and practical difficulties in developing social indicators, the need for such indicators is being recognized by social scientists, planners and politicians alike. According to various authorities, there is agreement that social indicators should have the following characteristics:

1. Measure some aspect of life which is thought to be related to human well-being and satisfaction.
2. Provide time series that allow comparisons over an extended period and which permit one to grasp long-term trends as well as unusually sharp fluctuations in rates.
3. Utilize statistics that can be disaggregated by relevant attributes of either the persons or the conditions measured (such as skin color or year of construction), and by the contextual characteristics that surround the measure (such as region or city size).
4. Include widespread community participation in developing indicators, to insure that the indicators reflect what the community wants.
5. Match the needs of the decision and policy maker with data collected for development into indicators.
6. Describe an output measure (for example, statistics on the number of doctors or policemen are not social indicators, whereas figures on health or crime rates could be).37

If it is clear that we must know what to measure and how the results are to be used in order to use social indicators, then their relevant use in technology assessment would depend upon their reliability. For instance, social indicators such as these could provide several technology assessment methodologies with a base upon which to develop scenarios of action and response. In the methodology espoused by Martin V. Jones, they would provide criteria for state-of-society assumptions.38
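A minimal, hypothetical sketch of an indicator meeting several of the characteristics above (an output measure, organized as a time series, disaggregatable by region) might look like the following; the indicator chosen and all of the rates shown are invented for illustration only.

```python
# Hypothetical social indicator: an output measure stored as a time series
# that can be disaggregated by region.

indicator = {
    "name": "infant mortality rate",
    "unit": "deaths per 1,000 live births",
    "series": {
        # year: {region: value}  -- values invented for illustration
        1968: {"urban": 24.1, "rural": 27.9},
        1970: {"urban": 22.3, "rural": 26.4},
        1972: {"urban": 20.8, "rural": 25.6},
    },
}

# A crude national figure per year (a real series would weight regions by births).
for year, by_region in sorted(indicator["series"].items()):
    national = sum(by_region.values()) / len(by_region)
    print(year, by_region, "unweighted average:", round(national, 1))
```

The point of the sketch is not the numbers but the shape of the data: an output measure, carried over time, that can be broken apart by the contextual attributes an assessor would need when developing state-of-society assumptions.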
Developing Methodologies--Normative Processes

Recently, several technology assessment methodologies have addressed the problem of working with normative goals in a planning context. These methodologies have proposed the development of normative frameworks for forecasting, planning and policy formation, realizing that their operation in the larger technology assessment context is actually one of directed action-response. For instance, both of the methodologies examined here emphasize the production of action options or responses to a technological problem rather than the involved introspection of other approaches to similar problems. The methodologies examined are the proposal for technological planning through rational creative action by Jantsch and others, and the proposal that technology assessment be embodied in citizen groups, advocated by Mayo and Mottour.

To begin, Erich Jantsch, most recently known for an in-depth study of technology forecasting, has built on the work of Hasan Ozbekhan39 to propose a "cybernetic process of rational creative action" which should be viewed on three levels linked by feedback interaction between them: policies (what ought we to do?), strategies (what can we do?), and operations or tactics (what will happen if we take a specific course of action?) (see Chapter II, pages 86-87).40 Jantsch would thus place technology assessment in a long-range planning framework, realizing, however, that the cybernetic approach (essentially a feedback from human action on the environment) cannot be predicted with certainty. This type of planning (normative) is not concerned with how to get from point A to point B--or only at the operational level, dealing with the short range--but with what would be a good point B to choose, which strategy would bring society there in a "good" way, and where social systems of human living would be moving in dependence on individual choice.41

Hasan Ozbekhan emphasizes that this type of planning for technology has come about as a reaction to the change in attitudes engendered by the infusion of what Ozbekhan labels "Western civilizations' pragmatic commitment to determinism in various forms"42 with the capabilities and methods of modern science. This science-society relationship he terms the "Triumph of Technology" and explains its significance: "It means that in a technology-dominated age such as ours and as a result of the forces and attitudes that have brought about this dominance, 'can,' a conditional and neutral expression of feasibility, begins to be read as if it were written 'ought,' which is an ethical statement connoting an imperative. This feasibility, which is a strategic concept, is elevated into a normative concept, with the result that whatever technological reality indicates we can do is taken as implying that we must do it. The strategy dictates its own goals. The action defines its own telos. Aims no longer guide invention; inventions reveal aims. Or in Marshall McLuhan's now-fashionable slogan, 'the medium is the message.'"43

In discussing the type of planning for technology that Jantsch espouses, there are serious questions to be raised concerning philosophical problems of truth, rationality and optimality. These problems are derived from an aim of normative planning, which is to arrive at an optimal plan or state.
An excellent discussion of these subjects is afforded the reader by Hasan Ozbekhan, "The Triumph of Technology: 'Can' Implies 'Ought'" (1968),44 and Marx Wartofsky, "Telos and Technique: Models as Modes of Action" (1968).45

The framework in which Jantsch proposes technology assessment to operate as a subsystem is comprised of several components. Jantsch suggests that technology planning as "integrative" planning, cutting across social, economic, political, technological, psychological, anthropological, and other dimensions, will necessarily be placed in a system framework as shown in Figure 2.

[Figure 2. Dimensions of vertical integration in the process of rational creative action, relating policies, strategies, and tactics (operations). Source: Erich Jantsch, Technological Planning and Social Futures (1972).]

Jantsch notes that the "current logical order of the process of rational action"46 of Figure 2 is to proceed from left to right, and from the top down. In this way, he emphasizes that "policies are normative expressions of future states of dynamic systems."47 This would lead to an understanding and formulation of policies and institutions by recognizing the system structure explicitly. Jantsch notes that technology planning would incorporate this assumption and would be a function of both "Vertical Integration, occurring because rational choice is only possible from a viewpoint at the next higher level of abstraction," and "Horizontal Integration, necessary because we are dealing with total system dynamics, not with the optimization of subsystems."48

In describing the second component of integrative technology planning, Jantsch is more explicit: "It is this ambivalence of technology which forces us today to attempt control of the development and application of technology in an integrative way, taking into account the full scale of inter-relationships of technology engineering with the other forms of engineering--with which it forms an indivisible system."49 Jantsch then proceeds to expound upon this theme; he proposes that the nature-man-society-technology system can be broken up into six bipolar subsystems (Figure 3).

[Figure 3. The Nature-Man-Society-Technology System Broken Up into Six Bipolar Subsystems. Source: Erich Jantsch, Technological Planning and Social Futures (London: Cassell Associated Business Programs, 1972).]

It is by operating within these subsystems that Jantsch would use technology forecasting, planning and technology assessment. Specifically, he states: "Control over a specific system component can be achieved only if we go to the next higher level of abstraction and formulate our objectives at that higher level.
We can satisfy this generally valid rule, particularly suited to our purposes, by looking at the outcomes of technology within the above bipolar subsystems. In other words, we look at the function technology performs in these subsystems and we become detached from technology in two important ways: (1) we are now free to consider different technologies contributing to these functions, and to compare the merits of these contributions--and in turn the merits of specific technologies in the context of such a bipolar subsystem; and (2) we can now apply normative thinking to functions of technology (needs, impacts, side effects, etc.) in sufficient transparency to bring our human value systems into the play."50

To digress to a theoretical orientation, it is worth noting a significant variation between the views held by Jantsch and his colleagues utilizing technological forecasting and normative methodology, and those of other planners. Some other planners, environmentalists and anthropologists (i.e., Ian McHarg, Andrew Vayda, and Julian H. Steward) view the position of man and his culture as being an adaptation to his environment. This is in effect a rejection of completely normative approaches, which have been described as follows: "the normative concept, which views culture as a system of naturally reinforcing practices backed by a set of attitudes and values, seems to regard all human behavior as so completely determined by culture that environmental adaptations have no effect. It considers that the entire pattern of technology, land use, land tenure, and social features derive entirely from culture."51 Theodore J. Gordon observes that the concept of technology forecasting, an internal part of Jantsch's approach, is "antinihilistic and antideterministic."52

Jantsch tends to relegate the part natural systems play in the day-to-day development of technology to a minor role. Jantsch does not recognize what is beginning to be known as a vast array of natural limits to man's imposition of material culture on nature (see Ian McHarg, Design with Nature (1969)).53 Jantsch proposes that "nature can play such a role (counteraction against technology) only locally and marginally. It could again become a major restrictive factor only after the population explosion has led to a catastrophical situation (for example, famine reducing the world population)."54

A view contrary to Jantsch's is that held by environmentalists such as McHarg and Vayda, who would recognize a system whereby man is shaped by his reaction to natural systems and, in both specific and overall contexts, limited by them. In terms of causality, man's culture is characterized by a flow from nature to man and technology, to culture; in other words, man's interaction with his natural environment determines the pattern and course of his culture. This is essentially a cultural ecological approach. Julian H. Steward gives us the clearest description of this approach: "Cultural ecology differs from human and social ecology in seeking to explain the origin of particular cultural features and patterns which characterize different areas rather than to derive general principles applicable to any cultural-environmental situation. It differs from the relativistic and neoevolutionist conceptions of culture history in that it introduces the local environment as the extracultural factor in the fruitless assumption that culture comes from culture."55 In addition, he notes the importance of technologies and social adaptations.
"The concept of cultural ecology, however, is less concerned with the origin and diffusion of technologies than with the fact that they may be used dif- ferently and entail different social arrangements in each environment. The environment is not only permissive or prohibitive with respect to these technologies, but special local features may require social adaptations which have far-reaching consequences."56 The importance Of this theoretical departure is not that it is simply an explanation of cultural history, but an orientation toward the consideration of natural boundaries when planning or assessing the technologically induced material and immaterial artifacts of man (again this sub- sumes cultural artifacts also). Another reason for this discussion of theoretical bases of normative and non-normative technology assessments and planning is that they are operative in a context defined by both normative and non-normative systems. In other words, human society can be theorized as either normative or non- normative, but nature is always non-normative and technology assessment must work within a realm that has no conceptual recognition Of human goals, only reactions to them, in a 113 physical, chemical and biological manner. Therefore, directed action responses of normative assessments could only be valid if a particular goal and norm was coexistence and preservation of natural systems. In reality, it would not be difficult to reconcile the planning efforts of Jantsch and Ian McHarg (as an example of environmental planners). It would simply be necessary for both to recognize that Operational limits exist for the man-nature interaction, just as Jantsch recognizes absolute limits of society-technology and man- technology subsystems.57 Which are in essence what might be termed the upper and lower limits of "adaptive technol- ogy." Another technology assessment process engendering a normative response and concomitantly, changes in all basic technology assessment methodologies, is the trend toward the inclusion of citizen input and participation. This trend assumes a concomitant movement toward pluralistic normative technology assessment processes. Citizen involvement in technology assessment assumes a broadening concept Of pluralism in planning and assessment efforts on many levels, i.e., the general citizenry, indus- try and government. That this trend is significant would necessarily reflect a change in assessment methodology. Numerous groups espousing many, often conflicting, values 114 make the identification of states of society and possible alternative futures a more complex task, requiring assessors to be more politically attuned to various societal sectors. In other words, more normative states will have to be reconciled in order to produce action Options. An example of a methodology affected and reordered would be that of Marvin V. Jones. In his step three--develop state Of society assumptions--(his entire methodology is presented in Appendix C), assessors must identify, define and measure the effects of a technology on a given classification of major state—Of-society attributes.5° If confronted with a pluralistic situation, the assessors would have to repeat I each of these steps for each group or let the groups perform the assessment themselves; the latter would be more valid in terms of perception Of real effects, but much less likely, given the present circumstances of the institutionalized assessment function. 
As stated, the character of this technologically induced pluralism can be classified as to participants. First are the citizenry groups affected by or concerned with technology and its effects. Examples of such groups are the various public interest research groups and the national citizens' lobby Common Cause. Secondly, industrial sector groups are increasing in number as it becomes necessary to act as proponents of certain technological innovations, often in adversary roles against other members of the pluralistic society. Finally, the government is beginning to provide a structuring element for such pluralism, either as a neutral judge or as an advocate of the public interest.

Some perceive this pluralistic effort and change in methodological orientation as a deepening perception on the part of the individual in society as to the effects of technology. Lewis Branscomb characterizes this perception as a fear--a fear of technology. On a personal level Branscomb observes that people fear technology because:

1. technology seems to have too much momentum.
2. each member of the public at large is a secondary party to every decision on the exploitation of technology.
3. our traditional legal mechanisms for redressing civil wrongs are no longer so effective as they were when only two parties were involved (society is hard to sue; technology is progressing faster than court-set precedent).
4. the individual is frustrated by a world where the things he buys are too complicated for him to fix, where he does not know what performance he has a right to expect from his purchase, and where it costs too much to have a repairman fix it.59

These apprehensions on the part of the citizenry, as a response to the complexity and perplexity of modern technology, have prompted Ellis Mottour to observe that technology assessment is too important to be left to professional assessors or special interest groups: "Technology assessment, regardless of how recondite its details may be, must become an integral aspect of the nation's total social, political, economic decision making processes, in which all citizens have the opportunity to participate. Otherwise, in a technology-permeated society, it will become increasingly difficult--if not impossible--to maintain, much less enhance, the democratic character of our society and the quality of freedom in our lives."60

Given that government and industry assessment processes have already been investigated, it would be of value to examine how a citizen involvement process would operate, and its consequences for various methodologies. There are several inherent problems for assessment at the citizen level: finance, motivation and organization. There are methods by which citizen groups of any kind may be funded, the most traditional being private donation to the group itself. A similar means is for the group to receive grants of money and/or materials from philanthropic agencies or the government. Ellis Mottour proposes a unique idea whereby a federal authority, the Citizens Assessment Administration (CAA), would regulate and recognize citizen assessment associations (caas). He proposes that the caas be empowered to issue "assessment" bonds regulated by the CAA, in addition to having the power to accept gifts and make contract agreements,61 a unique if somewhat ambiguous financing method.
However, it must be realized that, with the traditional means of financing, most citizen groups would be severely limited in the types of assessments undertaken as well as in their number, given what appear to be prohibitively high costs for adequate assessments.

Organizationally, citizens' groups are as diversified as the technologies they might wish to assess. As mentioned earlier, examples are general interest groups (i.e., Common Cause, the American Civil Liberties Union, and public interest research groups), which have diverse causes for motivation; special interest citizen groups (the Sierra Club, the Conservation Foundation, ad hoc groups interested in particular technologies, etc.); and student-led groups (with a diversity of motivations and interests). Some of these groups are well structured or organized, having national memberships and regular staffs, but the great majority lack the consummate skills, time and money to make an effective assessment organization. Their strongest attribute is undoubtedly that their viewpoints are unsolicited responses to a technologically based society. Unlike Mottour's proposals, the attitudes expressed are unstructured by the government (which may have a bias dictated by organizational structure, no matter how loosely defined) and are perhaps truer expressions of man-technology interaction. Mottour's concept, on the other hand, structures a citizen response that perhaps would carry more weight with the decision makers responsible for the imposition of some technologies.

Mottour's proposed caas would be empowered to perform assessments, distribute the results to the public and decision making bodies, and perform other tasks. "They would have the extremely important power to institute legal, class action proceedings against any organization or individual within the society (including agencies of federal, state and local government), which were making use, or planning to make use, of technologies whose assessments indicated detrimental consequences to the persons or interests of certain segments of the public."62

Methodologically, an important question would be the role of experts and the possibility of duplication of effort. Addressing the latter problem first, it should be seen that, given the complexity of most technological impacts, the more discrete and varied the assessments become, the greater the possibility that an adequate assessment will be done, since all assessors or assessment groups are biased in some manner, and that bias is necessarily reflected in the methodology. In the matter of experts, bias is also noted; concomitantly, experts often overstep the bounds of their expertise and become involved in matters on which they are no more qualified to make judgments than anyone else.

Finally, it can be noted that the reason citizen assessments are proposed as methodological contexts is that biases exist among experts and that technology assessments tend to operate in an adversarial system which requires a multiplicity of normative viewpoints. The opportunity exists for the expression of the normative viewpoints of industry and government; given the semblance of democratic orientation left in American society, citizens too should have an available forum for a normative input.

Explanations of assessment procedures and administration, together with interpretations of methodologies, should not be left to stand by themselves as statements of the problem at hand.
The components of each should be synthesized and linked in an integrative manner into a structured view of planning. Such an imperative requires an adequate response.

Footnotes

1. Alfred Kahn quoted in U.S. Congress, House, Committee on Science and Astronautics, "Technology: Processes of Assessment and Choice," Report of the National Academy of Sciences, July 1969 (Washington, D.C.: Government Printing Office, 1969), p. 10.
2. Morris Hill, "A Goals-Achievement Matrix for Evaluating Alternate Plans," Decision Making in Urban Planning, ed. by Ira M. Robinson (Beverly Hills, Calif.: Sage Publications, 1968).
3. Raleigh Barlowe, Land Resource Economics: The Political Economy of Rural and Urban Land Resource Use (Englewood Cliffs, N.J.: Prentice-Hall, 1958), pp. 483-492.
4. Thomas Detwyler, Man's Impact on Environment (New York: McGraw-Hill, Inc., 1971).
5. M. Taghi Farvar and John P. Milton, The Careless Technology: Ecology and International Development (Garden City, N.Y.: Natural History Press, 1972).
6. Arthur Maass, "Benefit-Cost Analysis: Its Relevance to Public Investment Decisions," Quarterly Journal of Economics, LXXX (May 1966).
7. U.S. Congress, House, Committee on Science and Astronautics, Technology: Processes of Assessment and Choice, Report of the National Academy of Sciences (Washington, D.C.: Government Printing Office, July 1969), p. 26.
8. National Academy of Sciences-National Research Council, Committee on Oceanography, Economic Benefit from Oceanographic Research (Publication 1228; Washington, D.C.: Government Printing Office, 1964).
9. Don H. Overly, "Introducing Societal Indicators into Technology Assessment," The Methodology of Technology Assessment, ed. by Marvin J. Cetron and Bodo Bartocha (New York: Gordon and Breach, 1972), p. 65.
10. Laurence H. Tribe, "Legal Frameworks for the Assessment and Control of Technology," Minerva, IX (April 1971), 243-255.
11. Orlando Duloga, "The Emerging Law of Environmental Impact Statements: The Federal Role" (unpublished remarks of a seminar at the Annual Conference of the American Society of Planning Officials, Los Angeles, California, April 1973).
12. Nina Laserson, "Technology Assessment at the Threshold," Innovation, XXVII (January 1972), 20.
13. Laurence H. Tribe, "Towards a New Technological Ethic: The Role of Legal Liability," Impact of Science on Society, XXI (July-September 1971), 215-222.
14. Luna B. Leopold et al., "A Procedure for Evaluating Environmental Impact" (Geological Survey Circular 645, Geological Survey, U.S. Department of the Interior, 1971).
15. U.S. Council on Environmental Quality, Third Annual Report (Washington, D.C.: Government Printing Office, 1971).
16. U.S. Department of Housing and Urban Development, Draft Environmental Clearance Worksheet (Appendix B-1; Washington, D.C.: Government Printing Office, 1972), pp. 1-2.
17. Robert C. Stuart, "Interdisciplinary Conference on Environmental Impact Statement Recommends Better Research, Coordination and More Responsible Evaluation," Newsletter, American Institute of Planners, VIII, No. 1 (January 1973), 4.
18. Martin V. Jones, A Technology Assessment Methodology: Some Basic Propositions (Washington, D.C.: Mitre Corporation, June 1971), p. 121.
19. Theodore J. Gordon, The Current Methods of Futures Research (Paper P-11; Middletown, Conn., and Menlo Park, Calif.: The Institute for the Future, August 1971), p. 1.
20. Jones, op. cit., pp. 121-122.
21. Gordon, op. cit., p. 2.
22. Jones, op. cit., p. 127.
23. Gordon, op. cit., p. 3.
24. H. Kahn and A.
Wiener, The Year 2000 (New York: The Macmillan Co., 1967).
25. Jones, op. cit., p. 123.
26. Gordon, op. cit., p. 17.
27. Nancy T. Gamarra, Erroneous Predictions and Negative Comments Concerning Exploration, Territorial Expansion, Scientific and Technological Development (Selected statements, Legislative Reference Service; Washington, D.C.: Library of Congress, May 1969).
28. Erich Jantsch, Technological Planning and Social Futures (London: Cassell Associated Business Programs, 1972), pp. 215-217.
29. Ibid., p. 215.
30. Ibid., pp. 13-16.
31. Gordon, op. cit., p. 3.
32. Marvin J. Cetron and Donald N. Dick, "Measurement and Technology Assessment," The Methodology of Technology Assessment, ed. by Marvin J. Cetron and Bodo Bartocha (New York: Gordon and Breach, 1972), p. 107.
33. Jay W. Forrester, World Dynamics (Cambridge: M.I.T. Press, 1972).
34. William D. Rowe, "The Environment: A Systems Approach with Emphasis on Monitoring," The Methodology of Technology Assessment, ed. by Marvin J. Cetron and Bodo Bartocha (New York: Gordon and Breach, 1972), pp. 43-44.
35. Jones, op. cit., p. 67.
36. U.S. Environmental Protection Agency, Quality of Life Indicators: A Review of State-of-the-Art and Guidelines Derived to Assist in Developing Environmental Indicators (Washington, D.C.: Government Printing Office, 1972).
37. Ibid., pp. 39-41.
38. Jones, op. cit., pp. 52-95.
39. Hasan Ozbekhan, "Toward a General Theory of Planning," Perspectives of Planning, ed. by Erich Jantsch (Paris: Organization for Economic Cooperation and Development, 1969); also "The Triumph of Technology: 'Can' Implies 'Ought,'" Planning for Diversity and Choice, ed. by Stanford Anderson (Cambridge: M.I.T. Press, 1968).
40. Jantsch, Technological Planning and Social Futures, p. 9.
41. Ibid., p. 3.
42. Ozbekhan, "The Triumph of Technology: 'Can' Implies 'Ought,'" p. 209.
43. Ibid., p. 210.
44. Ibid., pp. 219-231.
45. Marx Wartofsky, "Telos and Technique: Models as Modes of Action," Planning for Diversity and Choice, ed. by Stanford Anderson (Cambridge: M.I.T. Press, 1968), pp. 259-274.
46. Jantsch, Technological Planning and Social Futures, p. 20.
47. Ibid., p. 25.
48. Ibid.
49. Ibid., p. 27.
50. Ibid., pp. 29-30.
51. Julian H. Steward, Theory of Culture Change: The Methodology of Multilinear Evolution (Urbana, Ill.: University of Illinois Press, 1967), p. 37.
52. Gordon, op. cit., p. 3.
53. Ian L. McHarg, Design with Nature (Garden City: Doubleday/Natural History Press, Doubleday & Co., 1969).
54. Jantsch, Technological Planning and Social Futures, p. 29.
55. Steward, op. cit., p. 36.
56. Ibid., p. 38. Steward also states, "cultural ecology has been described as a methodological tool for ascertaining how the adaptation of a culture to its environment may entail certain changes. In a larger sense, the problem is to determine whether similar adjustments occur in similar environments. Since in any given environment, culture may develop through a succession of very unlike periods, it is sometimes pointed out that environment, the constant, obviously has no relationship to cultural type. This difficulty disappears, however, if the level of sociocultural integration represented by each period is taken into account. Cultural types therefore, must be conceived as constellations of core features which arise out of environmental adaptations and which represent similar levels of integration" (p. 42).
57. Jantsch, Technological Planning and Social Futures, p. 33.
58. Jones, op. cit., p. 57.
59. Lewis Branscomb, "Why People Fear Technology," The Futurist, December 1971, p. 232.
60. Ellis Mottour, "Technology Assessment and Citizen Action," Technology Assessment: Understanding the Social Consequences of Technological Applications, ed. by Raphael G. Kasper (New York: Frederick A. Praeger, 1972), p. 266.
61. Ibid., pp. 270, 274.
62. Ibid., p. 270.

CHAPTER III

TECHNOLOGY ASSESSMENT, PLANNING AND PUBLIC DECISION MAKING

This chapter will concentrate on the relationships between planning and technology assessment. The examination will consist of three parts, covering the traditional role of technology studies in planning, a comparison of technology assessment processes to selected planning processes, and an investigation of how the technology assessment process will be adapted to planning functions. Here "planning" and "technology assessment" will generally be referred to in their abstract sense rather than as endeavors carried on at certain levels, except where specifically labeled. The overall purpose of this chapter is to arrive at a synthesis of ideas concerning the sometimes divergent concepts of technology assessment and planning. While the first two chapters served as explanations of the concept and diverse methodologies of technology assessment, the third will derive integrative statements concerning technology assessment as a rational planning process to be used at several levels of decision making.

The Planning Endeavors, Traditional Modes of Behavior and Technology

Perhaps it would not be misleading to characterize the traditional attitude and role of planning toward technology and science as one of promoting the affiliation of technology and entrepreneurial capitalism with progress and the public good through economic development (author's view). This is largely a refinement of attitudes held in the nineteenth century, but mitigated by the intervention of the government to straighten out the depressions and peaks in the upward climb of economic betterment (see Chapter I, pages 13-19). Planning on every level continues to espouse the idea that technologically fueled economic change is progress and that all progress is good. A vivid example of this attitude is presented in the following excerpt from a national report on technology and the economy: "There has been widespread public recognition of the deep influence of technology upon our way of life. Everywhere there is speculation about the possibilities for human life, and much public attention is directed toward scientific and technical trends. The vast majority of people quite rightly have accepted technological change as beneficial. They realize that it has led to better working conditions by eliminating many, perhaps most, dirty, menial and servile jobs; that it has made possible the shortening of working hours and the increase in leisure; that it has provided a growing abundance of goods and a continuous flow of improved and new products; that it has provided new interests and new experiences for people, and this added to the zest for life."1

Many planning departments at a multitude of levels continue to spend a large portion of available monies on economic or industrial promotion and attraction, especially at the state level.2 This continued expenditure of planning effort is in contradiction to the economically accepted premise that cause and effect relationships between science and the economy are not wholly simplistic.
"Although a decade ago there was a simplistic notion of the relation between science and economic development, it is now gen- erally realized that while the two are connected in a general but important way, they are not particularly closely coupled--industry by industry, region by region, or even country by country."3 While expressing the need of technology induced economic betterment, few planners have conscientiously tried to apply technology to city problems directly, and even fewer have tried to assess its effects in either the eco- nomic application or direct application to perceived prob- lems. This applies to planners at the city, regional and state levels; but at the city level the confusion concerning technology's place is the greatest. "Discussion of 'tech- nology and the city' Often suffers from an intellectual 128 confusion motivated by political advantage. The literature abounds in claims and counterclaims by advocates Of various technological 'solutions' to the 'urban problem.’ It is a literature replete with the fads and fashions Of 'crisis' language . . . and with the recommendations of innumerable commissions, committees and task forces. Everyone agrees that there are problems in our cities and that technology has not been used effectively to deal with them; but there is little agreement about what the problems are or how "“ This, in essence, presents a technology might help. paradox, consciously or unconsciously recognized by planners, consisting Of an unrealistic interpretation of technology as an exogenous factor almost unworthy of investigation as a major variable in the planning process, yet promoting the attraction Of technology oriented industry and business. Others corroborate this idea. "Of course, only some problems of our cities are technological in origin or amen- able to technological solutions. In fact, most analysts of urban affairs discuss technology only incidentally, even when they do make Obeisance to the important role played by technology in the origin and development of cities."5 This argument leads to the perhaps not uncontro— versial contention that planning, until quite recently, has failed to develop or partake in methodologies that place technology in prOper perspective in various planning 129 processes. For example, until the past five to ten years, regional planners, when planning sewage disposal systems, planned only the physical sewage and watewater system p25 pg; factors Of environmental damage, alternate technologies, social costs and other inherent impacts were not considered. Even though these types Of factors were not really exogenous, but Operating parameters, planners did not consider them so. Interestingly enough the expanded interest of recent origin, in technology and technological impact paralleled the plan- nerfisrealization Of the other supposedly exogenous factors being important relevant factors in the development of rational, operationally valid plans.6 To explain this change in planning attitudes, goals and frameworks, one must be able to propose in some sense the colinearity or coterminous states of technology and planning in this sense-~the progress of science and tech- nology led to the desimplification of the planning process and the rise in uncertainty when such processes were applied. This flows from three conditions that are inherent in the use Of modern technology in the social situation: 1. increased capacity to control physical situations. 2. 
2. increased complexity of organizational systems needed to realize technical potential (1); and

3. increased uncertainty, which flows from (1) and (2) and makes for uncertain outcomes of such complex processes.

This has resulted in a response to increase planning efforts somehow to avoid the consequences of the unknown action.7 The present planning impetus for technology assessment and environmental impact legislation has been derived from the changes wrought by these three variables of technology. Todd LaPorte reports succinctly, "as technological potential is recognized as a force changing political and social conditions, we can expect growing demands to be placed on the institutions that activate this potential--demands that it be used to create conditions more meaningful to individual and community experience. At the same time, the past conditions supporting older definitions of political and social value no longer are nearly as strong as in the past. When social and economic conditions no longer support value orientations, we can expect priorities to change and older values to be displaced by ones speaking to present conditions."8

Underlying these recent legislative efforts is what some planners feel is a choice concerning not whether to change, but what systems to change. This is engendered by the clash of technology and social systems. LaPorte notes, "it is a choice between maintaining our value of technology and changing our basic conceptions of social and political values; or maintaining social-political values and reducing our enthusiasm for technological solutions."9

This places planners in all areas in a perplexing situation with respect to technology; planners act as proponents on both sides of the above question, and others say that in reality change is needed and natural in both realms. With respect to the technology assessment function, all three types of value orientations could benefit from such a methodological approach. An important concept to recall is that although the complexity of planned situations has increased, with a concomitant uncertainty, a positive effect is that planners are beginning to deal with those factors once considered only extraneous or not considered at all.

Comparison: Technology Assessment and Planning Processes

This section will deal with the similarities and dissimilarities of technology assessment and planning processes. The examination will not only cover each process as to purposes and response to goals, but will also investigate the components of each process methodology. Given the limits of time, space and factual materials, the analysis will be limited to those factors common to both processes that relate to a decision making forum. Several such schema exist to illustrate this forum, but the following is indicative: problem → analysis → action. Realizing of course that numerous models of both planning and technology assessment exist, this analysis will in turn examine primarily one model of each process (illustrated in Appendices A and B). These two models will be illustrative of the elements of both generalized processes. It is suggested that reference be made to these diagrams while proceeding with the discussion on the following pages.

Purposes

It would be extremely difficult, if not presumptuous, to speak of the purposes of planning in discrete unidirectional terms. The same problem arises in the discussion of technology assessment processes.
There are, as any planner can verify, many levels of purpose in planning--governmental, geographical, philosophical, organizational, etc.; some conceptions of planning espouse a duality or multiplicity of purposes which, in essence, relate to the goals matrix upon which they rest; this is not altogether an unusual stance, but a common one. For instance, planning agencies exist to perform "planning," which is concerted action to achieve goals or rational intervention in the process of change; however, they also exist to perpetuate the planning ideal, to provide members with careers, and to serve numerous other more or less defined purposes.

Many conceptions of planning purposes are necessarily constrained by the attempt to be comprehensive and orthogonal. Alan Altschuler notes of city planning, "aside from the logical and technical barrier to comprehensiveness, there are serious political barriers, consisting of contradictions between the most persuasive abstract justification of general planning and perceptions by planners of political reality."10

The technology assessment process, because it is a generalized process, suffers from the same biases and multiplicity of purposes as does planning. Its purview is as broad as that of traditional planning; it tries to deal with participants, trends, alternative strategies and outcomes, social impacts, data from the natural world, etc. Both planning and technology assessment are perhaps characterized by a hierarchy of purposes, and both are amenable to criticism when lower level purposes override those dictated by professional stance, by scientific approaches, or by something as elusive as the purpose of the public interest.

This leads then to the question of past levels of attainment of purposes by both processes. For instance, probably the ultimate "purpose" of technology assessment is that it be consistent with the idea of a Total Impact Statement, a proposal of Louis H. Mayo.11 In relation to this purpose, Martin Jones reveals some of the reasons why it has not been attained in the past, and perhaps will not be attained in the near future. "Reasons for restricted assessments are the parochial interests, the restricted responsibilities, and the narrow vision of organizations that sponsor some research studies. Few organizations have a truly cosmic mission or outlook. Even those who have, or profess to have a comprehensive outlook, will have different conceptions as to what 'comprehensive' is. Even when efforts are made to ascertain all possible impacts, some considerations are likely to get much greater time and thought than others."12

These same words could be echoed when speaking of the deficiencies of planning processes in relation to the purpose of obtaining and making "comprehensive" general plans. The rationality of performing comprehensive plans is often strained by attempting such statements in the face of increasingly pluralistic situations. This is perhaps due to the conception held by some planners that to partake in comprehensive planning is to pursue optimum states as a process purpose. If we rely, rightly or wrongly, on empirical evidence, this appears not to be the case. Thus, purposes of planning and technology assessment on many levels are seen to be synonymous or clearly related, especially when considering "comprehensiveness" and the activation of "techniques" (planning and assessment) for the "public interest."
Goals

When discussing the purposes of both processes, goals will necessarily be discussed because they are an "a priori" part of modern planning and technology assessment. In the traditional planning process, goals are often obscured by the functioning of the process as it exists, i.e., in the structured process the goal formulation phases often follow data inventory and analysis. Even though processes can be ordered to place goal formulation phases ahead of analysis and data gathering, few planning agencies, as Alan Altschuler points out, actually strive to do so.13

It must be realized that in large part much of the difficulty stems from the confusion in planning, as well as in technology assessment, over operational and non-operational goals. Alan Altschuler analyzes this situation quite well by citing a planning process that took place in Minneapolis. "Minneapolis planners themselves tried to obtain approval for planning goals before developing their central area plan. They decided at the start that they needed a goal statement which would be both 'operational' and acceptable to all 'reasonable' citizens of the city. By 'operational' they meant that progress toward the goal could be objectively measured and that the broad costs, both tangible and spiritual, of striving toward it could be foreseen. Comprehensive goals, they judged, could not be operational. Therefore, reasonable men could not pass on them intelligently."14 A failing of planning, as has been pointed out previously, is the failure of a partial goal approach, in that it assumes similar value groupings, not the pluralistic response that is a reality in American culture.

An analogous, but somewhat less clear, position concerns whether technology and science in and of themselves have goals. Franklin Huddle believes that such processes do not possess goals. "Strictly speaking, there can be no such thing as a 'scientific' or 'technological' goal. The word 'goal' implies that a process of evaluation, of value assignment, has been applied. To call a goal scientific or technical merely signifies that scientific or technological means are required to render feasible a politically or socially desirable outcome."15 This is in large part counter to parts of normative theory and cultural ecological approaches (see Chapter II, pages 110-113). However, no matter what theoretical orientation is taken, the problem of goal formulation is a particularly arduous one in technology assessment.

First is the argument concerning the provision for the goals of the guiding agency. Just as in planning endeavors, assessments are constrained by the biases of those sponsoring the assessment. Secondly, the assessors' goals impinge upon the quality of the assessment task, i.e., the existence or non-existence of scientific goals and the assessors' relationship to the goals of the process--the regulation of innovation. This is indeed a problematical situation given that little is known of innovative processes, the triggering mechanisms, etc. This in turn raises questions concerning the necessity of goals as normative standards, which would involve arguments over the foundation of values as either subjective (scientific approach) or objective (philosophic approach), a much too detailed examination to be attempted in these pages.
However, in reality it might be posed that nearly all efforts at planning and assessment are normative, in that neither process can operate without goals, and goals are an inherent part of a normative system. Arguments as to the directions and types of normative planning are the paramount focus in such a schema.

In the final synthesis, perhaps that which will best operate for both planning and technology assessment, especially if technology assessment is to be a part of planning efforts, is to view goals and goal formulation contextually, perhaps best described by Franklin P. Huddle when he writes, "experience suggests that the integrated outcome of all efforts toward all goals, and the social matrix on which these outcomes impinge, need to be held within an envelope shaped by two constraints:

1. Change is inherent in the humanistic philosophy, an inescapable outcome of the application of the scientific method, and an inherent property of the natural environment irrespective of the impacts of human culture.

2. Most if not all systematic, progressive departures in the man-environment relationship from a 'steady state' have predictably catastrophic ultimate consequences."16

This, then, is an organizational context based on an evolutionary stance in which the advanced state is characterized by complexity and stability. Perhaps it is difficult to draw the same parallel between social systems and this steady-state dynamic in nature. However, this framework could tend to organize goal structure, given that man partially operates as an existential being in natural systems. If it is then premised that the purpose of applied science and technology is to make the interaction of man and environment more tolerable17 (a dynamic in itself, with increasing complexity of social systems), it follows that this framework could be a useful method for the consideration of goals in both planning and technology assessment.

Process Components--A Comparative Overview

This section will attempt to deal with some major components of planning and technology assessment by proposing a checklist or matrix made up of components of each process. This approach is proposed as a method for determining the support of one element of each process for several elements of the other process, and vice versa, in a one-to-one correspondence. The examination will also investigate the significance of multi-, trans-, or interdisciplinary approaches to each process and the compatibility of each of the processes with these approaches. Also included will be a comparative overview of the processes in connection with the feasibility of implementation.

Matrix analysis.--Perhaps a partial synthesis of planning and technology assessment can be reached if a matrix approach is used to determine the support of each process for the other. The matrix proposed would be one in which planning methods are correlated to technology assessment methods, or, if necessary, generalized components could be cross-correlated. The matrix would hopefully give some indication of "fit" or "synthesis" based on the following measures or criteria:

1. Relative Costs--a determination of financial feasibility given current levels of funding for that type of planning or technology assessment effort.

2. Time Frames--usually determined according to the desires of the decisional body; here parameters of completeness or comprehensiveness are often expedient.
3. Degree of Intelligibility--a measure of comprehensibility (i.e., in the case of planners, the degree of intelligibility impinges on the planner's ability to state concepts in language acceptable and tolerable to decisional bodies).

4. Availability of Competent Staff--manpower resources available to carry out each process.

5. Degree of Compatibility--of the basic premises of each process or method. (Source: author's interpretation.)

This is, in essence, an example of a cross-support matrix, a method, although judgmental, that serves to structure thinking about complex interactions among processes. A cross-support matrix is used to determine the support effect of each item of a field on all other items. It is used to clarify complex relationships. For example, if item A is developed, what will be the support effect on items B, C, and D? It is used to define the extent of the support interrelationship; the resulting information can serve to rank-order each item from the point of view of cross-support. The cross-correlation is displayed as a matrix: a square array with the item-to-item effect described by the matrix elements. Generation of the matrix elements is accomplished by soliciting subjective judgments; hard data generally do not exist for such relationships.18 (A simple numerical sketch of such a matrix is given below.)

Given the subjectivity of this method, a full array of interrelationships was not pursued. The author of this thesis does not consider himself expert enough to make judgmental decisions regarding the fit between the two processes--such decisions would undoubtedly have little validity. Beyond this, the matrix approach itself requires the use of numerous experts and repeated trials in order to generate plausible information that can be used to make decisions. However, from the research carried out in preparation for this thesis, some generalized statements can be drawn concerning the compatibility of technology assessment with planning according to the criteria of the proposed matrix. Such generalizations would of course require further investigation by generation of a full matrix, or investigation by other means.

1. Relative Costs. It appears that the cost of an adequate full technology assessment would preclude the use of such a process in many planning situations. For instance, Joseph F. Coates, of the National Science Foundation, reports that a full technology assessment without experimental work or generation of new data would cost approximately $150,000 to $250,000, which is equivalent to four to six senior man-years of effort.19 This cost would hamper assessment efforts in most localities and in some regions. However, some full-scale planning efforts in larger jurisdictional areas have cost this much, and therefore such cost figures would not immediately preclude assessment tasks of this order.

2. Time Frames. It appears that technology assessment studies of varying types would mesh well with planning efforts concerned with medium to long range planning. The time frames for technology assessments vary from several months to several years, and there are existing proposals for continuous technological monitoring. These endeavors are analogous to the time frames used in many planning agencies.

3. Degree of Intelligibility. The interface between planning and assessment at this juncture is probably difficult to identify concretely.
Although it may generally be regarded that planners often do not understand the theoretical bases or intricacies of much of applied science or technology, and that, conversely, scientists or technologists do not understand planning within the political and pluralistic realm of modern society, it can also be proposed that perhaps neither group understands the systemic structure of culture and society in which both processes operate. For instance, planning and technology are applied in complex systems, and Jay W. Forrester says "complex systems are counterintuitive"; that is, they give indications that suggest corrective action which will often be ineffective or even adverse in its results.20 Many of the principles underlying planning and technology are linear in scope and depend upon cause and effect relationships that are readily measurable. Forrester, on the other hand, characterizes complex systems thus: "in complex systems cause and effect are often not closely related in either time or space. The structure of a complex system is not a simple feedback loop where one system state dominates the behavior. The complex system has a multiplicity of interacting feedback loops."21 Thus it might be inferred from this complexity of systems that more technical expertise is needed in planning and that technologists ought to be more aware of the social milieu in which their product is applied, a clichéd but nevertheless necessary change in rational systems. In reality it appears that a planning group can much more easily explain a technical process to a decisional body than a social effect or a goal. However, an express purpose of technology assessment is to structure these social effects and relate them to technological decisions, so it will necessarily have to be given serious consideration by rational planners.

4. Availability of Competent Staff and Degree of Compatibility both relate to the foregoing discussion and naturally depend on that argument, but it can generally be said that few planning bodies have the staff capable of performing adequate assessments, given the broad multidisciplinary staffing requirements. In addition, compatibility refers to measures of goals, purposes and outcomes, which have been discussed earlier and have been seen to correlate quite well.

Significance of Multidisciplinarity

For some time now planners, policy makers and other humanistically motivated persons have called for planning studies and studies of technology. Many such efforts have been mounted, but often little progress is made in solving highly complex problems. This is in reality a difficulty of multidisciplinary studies; such studies are a method utilizing assembled experts with the premise that each knows best the parameters of the problem that relate to his particular field. Coordination and cooperation are often absent, and this is reflected in the final report or synthesis, which may often be a collection of viewpoints of the same problem with conflicting conclusions. This is true of team planning endeavors, and will probably be true of technology assessment projects; science is much less prone to these charges given its unidisciplinary approach to well defined problems. On the other hand, planning and assessment bodies (teams) must have an interdisciplinary capability; that is, an ability to work on problems for which there is no well defined body of knowledge.
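Before returning to the question of interdisciplinarity, a minimal numerical sketch of the cross-support matrix proposed above may help fix ideas. The item names and scores below are hypothetical assumptions supplied only for illustration; they are not the product of an actual expert panel.

```python
# Illustrative cross-support matrix (hypothetical items and scores only).
# support[i][j] holds a subjective judgment (0 = none, 3 = strong) of how
# much the row item, if developed, would support the column item.

items = [
    "planning goal formulation",
    "planning data inventory",
    "assessment impact identification",
    "assessment option evaluation",
]

support = [
    [0, 2, 3, 3],  # planning goal formulation
    [1, 0, 2, 2],  # planning data inventory
    [2, 1, 0, 3],  # assessment impact identification
    [2, 1, 2, 0],  # assessment option evaluation
]

# Rank-order the items by the total support each lends to all the others.
totals = {item: sum(row) for item, row in zip(items, support)}
for rank, (item, total) in enumerate(
        sorted(totals.items(), key=lambda pair: pair[1], reverse=True), start=1):
    print(f"{rank}. {item}: total cross-support = {total}")
```

In practice, as noted above, the single set of judgments shown here would be replaced by scores solicited from numerous experts over repeated trials.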
Approaching an interdisciplinary requirement is far more difficult than meeting the multidisciplinary requirement, because it is easier to identify and assemble specialists than it is to identify and assemble people who can effectively work in the areas not covered by specialists.

Erich Jantsch believes that interdisciplinary approaches in planning have to be viewed in a purposeful science/innovation system with interdisciplinarity understood as a teleological and normative concept.22 This is Jantsch's humanistic approach to rational action. He proposes an even more coordinated approach, one of transdisciplinarity: "with transdisciplinarity, the whole science/innovation system would be coordinated as a multiechelon (multilevel, multigoal) system, embracing a multitude of coordinated interdisciplinary two-level systems, which, of course, will be modified in the transdisciplinary framework."23

Given recent studies suggesting that the limits of analysis and action depend upon the problem type,24 it would behoove planners and technology assessors to carefully structure interdisciplinary efforts both to define problems and to pursue various levels of solution, while realizing that multidisciplinary efforts will probably continue to be the dominant mode of analysis and interface, and will perhaps be necessary.

Feasibility of Implementation

In the past a combination of traditional attitudes, professional and governmental biases and political expediency has militated against the acceptance and implementation of both radical and innovative ideas or changes emanating from planning and technology assessment. Some would argue that this has not been the case with purely technical/scientific fields (engineering, pure research--hard sciences); thus the need for technology assessment (see Chapter I). On the other hand, the results and recommendations of planning and the other social sciences have often gone unheeded and often been rejected completely, largely as a result of political forces dominant at the time of their completion or inception. For instance, while John McHale notes the narrowing intervals between scientific discovery, technological development and large scale usage,25 Karl Deutsch has observed that "as a practical rule of thumb it may be safer, . . . to expect the first major impact of social science advance to be delayed by ten to fifteen years after its inception."26

If this situation were consistently true, the implementation of new innovative planning measures, and certainly of technology assessment, would be militated against; however, in point of fact, the NEPA law and the Technology Assessment Act of 1972 have provided for new innovative considerations of the social effects of technology. Historically, legislation of this type has been a double-edged sword; where the law has been passed and acts as mandate and unifying principle, the actual use of the law may be somewhat different than its intended purpose. For instance, planning legislation of several types, state enabling legislation, and national legislation have all had varying effects on the manner in which planners perform their jobs. Given this variability, many court cases have been necessary in order to set the precedents for consistent planning efforts, and this is by no means to say that all planning efforts are consistent beyond a bare minimum.
If this, then, is the legal atmosphere in which technology assessment must operate, it too will be under the same pressures and will have to undergo the same types of adversarial encounters. As an example of the atmosphere in which objective planning and technology assessment would have to operate, Harold P. Green presents the following situation. "If, therefore, the legislature is expected to implement technology assessments, such expectation implies a willingness to have fundamental public policy questions resolved, at least partially, by the elite assessment group rather than in the rough and tumble of the political arena. Acceptance of such a situation is, of course, not consistent with democratic principles since it would significantly deprive the public of an opportunity to translate its own views as to benefits and acceptable risks, and its concerns, hopes, and fears into effective political action."27 Mr. Green obviously overlooks possible citizen involvement forums for technology assessment and somehow does not realize that most of the political decisions made in America today are really elitist oriented in some way or manner.

Perhaps the most feasible manner for the implementation of technology assessment would be for such processes to be allied with planning endeavors, given the greater opportunity for citizen participation and the growing acceptability of planning as a rational governmental task.

Technology Assessment Processes--Tools for Rational Planning

This segment will concentrate on the adaptation of technology assessments to planning processes. The stress will be on the use of technology assessment processes to enhance the validity of plans and the accountability of planning groups. Not all planning endeavors will be viewed, but the selected types dealing with information systems and adversary and advocacy processes will be investigated. This will serve to present two major views of the role of technology assessment and planning--those of politicized and non-politicized agents of change.

Information Systems

Technology assessment may be able to operate as an information system for planning efforts, providing a variety of information concerning the interaction of technology, technological systems and supporting systems. This information can take as a frame of reference a system similar to that proposed by Erich Jantsch, of three bipolar subsystems of technology: technology-nature, technology-man, and technology-society.28 Or such information concerning technological consequences could be realized using an ontological or real systems approach or framework.29 The information gained from adequate assessments that consider all of these interactions may be either action-option oriented in itself, or dependent upon decisional bodies for choosing appropriate action options.

A distinction can be made here as to the operational status of the assessment information system. It can provide information as a result of separate, distinct studies formulated to deal with specific problems, or it can provide information from a monitoring or continuous assessment function, the latter being an ultimate rather than immediate methodological approach. The use of an assessment system could indeed be unique and useful for planning in that in some respects it could combine both environmental impact and social analysis (i.e., social indicators).
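A minimal sketch of how such assessment information might be organized around Jantsch's three bipolar subsystems and the two operational modes just described is given below; the record layout, field names, and sample entry are hypothetical illustrations only, not a prescribed format.

```python
from dataclasses import dataclass
from enum import Enum

class Subsystem(Enum):
    # Jantsch's three bipolar subsystems of technology
    TECHNOLOGY_NATURE = "technology-nature"
    TECHNOLOGY_MAN = "technology-man"
    TECHNOLOGY_SOCIETY = "technology-society"

class Mode(Enum):
    SEPARATE_STUDY = "separate study"                 # distinct study of a specific problem
    CONTINUOUS_MONITORING = "continuous monitoring"   # ongoing assessment function

@dataclass
class AssessmentRecord:
    technology: str
    subsystem: Subsystem
    mode: Mode
    indicator: str   # environmental measure or social indicator
    finding: str

# Hypothetical sample entry, for illustration only.
record = AssessmentRecord(
    technology="regional wastewater treatment",
    subsystem=Subsystem.TECHNOLOGY_NATURE,
    mode=Mode.SEPARATE_STUDY,
    indicator="receiving-stream dissolved oxygen",
    finding="chronic cumulative impact anticipated over the planning period",
)
print(record)
```

Whether such records flowed from separate studies or from a continuous monitoring function, the same classification scheme could feed both environmental reporting and social-indicator reporting for planning bodies.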
There are, however, two serious questions which would impinge upon the ability of technology assessment to be a rational tool for planning: the content of the assessment (as measured by some type of adequacy standard) and the destination or use of the information generated.

Standards of adequacy are established, consciously or unconsciously, when technology assessments are used as informational tools, but just as with other planning methods they can only be validated when given a more universal acceptance by legal mandate. Perhaps a standard of adequacy similar to that presented in Chapter I could be utilized as a theoretical stance upon which to base well defined data parameters or checklists; Martin V. Jones presents several good examples (lists) of necessary factors.30 Another important factor, mentioned earlier, is the large amount of research, both conceptual and empirical, necessary to identify the types of information needed to supply a technology assessment information system. This is a result of the inadequacy of our present knowledge concerning technology-society systems.

Secondly, the destination of the information generated is highly important. Much of the information existent today concerning technologies is in private hands or under the aegis of select decision makers. In order that this technological information, and that generated by assessments, be of maximum benefit, it needs to be disseminated to larger forums. Hence the proposals that it be a formalized planning function, given the historical tenet in planning that encourages, and requires by law, the publication of plans and plan data. Even realizing the inherent shortcomings of this information function of planning, technology assessment information systems formalized in planning functions would be one of the few valuable forums for technology-related information.

Adversary and Advocacy Processes

The generalized subject matter addressed here concerns technology assessment and adversary and advocacy processes in citizen-responsive planning endeavors. The examination will strive to answer the question: do adversarial/advocacy processes in technology assessment parallel those of planning? A synthesis of ideas on how the two approaches might be integrated is proposed.

If one agrees that technology assessment and planning are both, in fact and in abstract fashion, authoritative decisional arenas, then advocacy can be understood to have as its objective the presentation of claims or demands that the decision or outcome allocate values, i.e., rights and duties, benefits and costs, in designated ways.31

Louis Mayo presents essentially what amounts to a classic justification for adversary and advocacy roles in planning enterprises, both decisional and non-decisional, which he extends to technology assessment. "Advocacy in the sense of attempting to influence outcomes is also employed as a strategy in assessment forums.
While the assessment process culminates in an informational outcome as contrasted with a binding value allocation, it neverthe- less involves a decision or determination as to the outcome which distinguishes such processes from a mere 'bull ses- sion.‘ Advocacy in the assessment forum is directed toward gaining recognition for certain types Of effects Of a tech- nological application and toward persuading the assessment entity to apply evaluative criteria to such effects (socially desirable or undesirable and the magnitude thereof) so as to reflect the participants' preferences."32 Similar arguments are used to challenge the view that experts know more or understand the issues better in planning (see Chapter I, pages 56-57); when in reality plans, in order to be more rational, must somehow make provision for the viewpoints of affected parties (effects of planning or technologies). Given that some planning forums have been hesitant to provide for direct informational inputs from the affected groups, then adversarial and advocacy processes become rational choices. A primary reason for the exist- ence Of this situation flows from the "political" nature Of 153 both planning and technology assessment, realizing of course that both processes have "scientific" parameters. In short, "scientific truth is established by Objective demonstration and confirmed by replication; political truth is established by consensual agreement, usually after an 'advocacy con- test.'"33 That such similarities exist between planning and assessment, enhances the proposal that perhaps technology assessment could Operate in the traditional planning adver- sary forums. These forums that have existed for planning issues have been both formally organized by planning deci- sion makers and also have existed as informally organized ad hoc or citizens' groups. This in turn raises the question of how technology assessment might Operate in adversarial systems engendered by planning. It might immediately be proposed that tech- nology assessment would find operating in planning adver- sarial systems difficult because in the majority of cases adversarial forums that Operate in conjunction with planning are not formally organized or recognized by law (other than the strictly legal term of citizen litigant). Some laws that provide for citizen input or for information from groups other than planners are those that provide for public hearings on goals and the final plan; yet they have not <3perated to structure any kind of system.where the basic 154 tenets Of the planners would be questioned. It would be doubtful that a real adversary process thus exists. Except Of course through the medium of the courts, which have their weaknesses in relation to information, "the adversary system, in sum, is based on two premises: first, that lawyers and judges are competent in the matters dealt with, and second, that the system can provide enough of the right type of data to make viable decisions."3“ Perhaps both assumptions are incorrect in regards to courts as they are now constituted. In this regard, numerous cases could probably be cited where due to inadequate information and the inadequacies of the court based adversary system, erroneous decisions were made with reference to planning and technology. 
Thus, it would appear, that in order to both pre- clude the use Of the courts as the only recourse to those seeking a more pluralistic base to planning and technology assessment, and to provide a new forum for the extension Of relevant factors to be adjudicated, that new laws recogniz- ing new orders of adversarial forms are necessary. In other words, adversarial forums have forms other than those cen- tered around courts and lawyers, and these ought to be provided for in technology assessment and planning. The necessity of new laws for structuring adver- sarial responses in planning and technology assessment is 155 more clearly seen if it is realized that non-formal adversarial processes and groups will not take up technology assessment as a plausible cause in many cases. This is based on the premise that very complex and unintelligible planning and assessment projects will not become issues worthy of advocacy until simplified by crisis or immediate impact or become a controversy engendered through "an emerg- ing tradition Of social criticism evolving in response to the scientific-technological revolution."35 In support of an adversarial "approach" to the increasing amount of uncertainty and divergent social values (see Chapter III, page 129), Mayo realizes some of the in- adequacies of adversarial systems and proposes a new outlook. "The Obvious abuses of the adversarial system in practice such as concealment of relevant information, introduction of frivolous claims, the distortion of factual data to suit partisan ends, the exaggeration of benefits or of potential dangers, the divisive efforts which prevent consensus on matters where potential and legitimate consensus would serve the public interest, and so forth, should not blind us to the contributions such a system can make in support Of more adequate technology assessments."36 Mayo then proposes that such adversarial systems depend on standards for assessment [it might be proposed that complex planning issues also involve similar standards 156 of "adequacy"], "if one begins with that criterion of the Adequacy Model [presented in footnote 30, Chapter III, page 166] which refers to the comprehensiveness and open- ness Of assessment information, then the adversarial system as a method of inquiry is to be encouraged rather than inhibited."37 One proposal serves as an example of how planning would function to structure adversarial responses in tech- nology assessment. Some, Mayo and Green,38 have prOposed that new institutions for scientific judgment be initiated to serve as science policy and technology assessment review boards. A function of planning, either on the state or regional level, would be to carry out some of the assess— ments, organize the information for the remainder and pre- sent it to the review board. It might be noted that state planning if reordered and given new powers, also might serve as such an adversarial forum. The responsibility of the board would be tO provide a well publicized forum for both the controversial and non-controversial assessments and technologically based plans, and also to provide funds and centralized data sources for adversarial groups to make their own assessments. Such groups as Mottour's "citizen assessment association" (previewed in Chapter II, pages 113-118) could operate in conjunction with such a mechanism. 
Laurence Tribe has certainly envisioned an analogous institution or situation when he proposes increased numbers of forums sanctioned by law: "technology assessment, furthermore, need not rest on centralized planning; it could rely on a system of pluralistic decision making in which the role of centralized decision making would be to design a social environment in which the various responsible decision makers could arrive at better solutions. . . . The role of law in technology assessment is not merely to impose precise constraints, but to elicit a rich pattern of affirmative responses."39

Levels of Endeavor--Geographical and Jurisdictional Considerations

To sum up the attempt at synthesis or integration, it can be offered that technology assessment can become a part of rational planning either as a tool of the process or as a separate process functioning in conjunction and cooperation with planning to achieve common goals and solve common problems. It has been proposed that the methodologies of each process conform to the other's standards quite well and that by the integration of the two processes some of the functions of both could be performed with greater accuracy and completeness.

Up to this point most of the discussion has used the term planning in an abstract manner, referring to the generalized process rather than planning within certain geographical and jurisdictional boundaries. However, questions relating to the operation of technology assessment at certain planning levels need to be raised. It is a main contention here that technology assessment as a part of planning efforts can take place at various levels, but that for various reasons efforts at regional, state and national levels will be more rewarding and operationally more valid in terms of adequacy criteria stressing completeness and total measurement of effects.

It would not be unwise perhaps to purport that recent history has shown that technologies are applied to areas larger than the traditional city or town. This is a result of the industrial revolution, the transportation and communication revolutions, and the need of capitalistically oriented enterprise to expand markets and customers. It should then surprise very few that the assessment of the effects of technology, and any consequent planning for technology, should take place at a regional or higher level.

This last point is not particularly hard to understand if the nature of technological impact is reviewed. First, it should be noted that technological impacts do occur on the local (city, town, particular rural spot) level as single occurrences, but are usually not of the scale that would merit either social criticism or crisis. It is only when such effects are aggregated or identified on regional or larger levels that impacts become politically, and perhaps scientifically, visible. An exception to this line of thought would be, for instance, an oil spill in a particular location or the imposition of an atomic energy plant in a particular locality.

The reasoning for this premise flows from a recognition of the distinction between technological impacts that are either "acute" or "chronic"; an analogy to human illness is implied. The former refers to the particular situation of one point in time, i.e., the oil spill, which does not account for the majority of technological impacts.
The latter "chronic" impact denotes the cumulative and synergistic aspects of the majority of technological impacts; these are apparent and measured over a longer time span than acute effects, hence the term cumulative. Given the wide dispersion of modern technologies, cumulative effects are best measured at regional or higher levels. This is not to overlook the fact that with the rapidity of the dispersion of technol- ogies, the "acute" and "cumulative" effects might merge or be synonymous (i.e., effects of new nationally distributed consumer products, new communication technologies, or new social technologies promulgated by national statute would have immediate as well as cumulative effects). Other reasons can be advanced for performing assess- ments on a regional level. An immediate cogent reason is the economic functioning of technologies on a regional scale. 160 This engenders both a government sponsorship Of technological research and development on a regional scale and the distri- bution by private industry of technologies on regional basis. Private industry has long used regional homogenous economic areas, economic subregion and state economic areas in which to test and pre-market new products and technologies."1 In addition, private industry has chosen to centralize Offices on a regional basis, maintain statistics and records on a regional basis and in general, to coordinate aggregated mar- kets on a regional basis. If these factors are considered with the avowed government policy to develOp technological research and development on a regional as well as state basis, then assessment and planning of technologies on a regional basis becomes not only plausible, but necessary. If there are political, economic and theoretical reasons for assessment and impact measurement on a regional _scale, what is the basis in planning principle and practice for the integration of the processes on a regional level? An immediate and ever present reason for such inte- gration at a regional level is derived from the present status of planning and technology information. Much of the information of this type, certainly for planning and probably for technology, is not available or is inaccurate for levels below that of region, S.M.S.A. or state. Raymond Bauer notes that much of the planning and 161 technology data is in the form of unstructured statistics which is deficient in "many things having to do with tech- nology, for example, are very badly represented, and the higher abstraction of quality of life are even more poorly represented. While this should not be so, it is no surprise either.”1 It can be further premised that much Of the data needed for technology assessment and planning cannot be gathered in censal form, but must rely on sampling tech- ,niques or individual data searches (per each assessment) which can be very expensive. Again, Bauer points out, "with the inauguration Of sample surveys for gathering many of these data, the samples aren't large enough. . . . It is a matter Of cost and a matter of how much muscle can be put in- to [studies] by the people who want those statistics. . . . In principle, anything that can be gathered on a national basis can be gathered on any small unit, but it just costs a lot.“2 Several other factors weigh in the favor of technol— ogy assessment-—planning functions at the regional or higher level. 
First, technology assessment being a relatively new "discipline" few planners and technologists have been trained in its use; those that have been trained in envi- ronmental and technology studies (including assessment) tend to gravitate toward higher salaried and more challenging jobs at regional and higher levels. 162 Secondly, the costs Of an adequate technology assessment using expert personnel are quite high, as men— tioned earlier, usually much more than a local planning agency would afford. Vary Taylor Coates estimates that an average technology assessment would require six man-years of effort at 520,000-530,000 per man-year.“3 Given some of the uncertainties surrounding the financing of local planning, few agencies at this level could mount such an effort. As stressed, the integration of technology assess- ment information and impact statements at a regional level is both an administrative and methodological necessity, however the several benefits of certain assessment functions at the local level should not be overlooked. The merits of assessment Operations at the local level are embodied in two major components of the process-- the geographic component and the data measurement component. These, in turn, give rise to a major reason for such func- tions at the local level--the-determination of the validity of the assessment statement. For instance, the local level planning bodies should function in an analysis and feed-back capacity in order to lend validity to regional impact assessments carried out at the local level. In other words, local planning bodies could serve as quality checks on the regional assessment 163 statement concerning correct empirical data, viable political reaction, and responsible citizen input. Because technology assessment impact measurements are necessary at a multitude Of geographic levels, coordi- nation and involvement Of local planning entities would appear to be a prerequisite part of any scheme to operate a technology assessment function on a regional and state level. Given the broad scope and intentions Of technology assessment, it would be wise not only to coordinate assess- ment information at the local level, but also provide an administrative and operational structure for environmental impact statements to be carried out at the local level. This structuring mechanism could be a regionally guided technology assessment function, given that such assessments subsume or encompass EIS as a subcomponent. In this in- stance, the quality checks are provided by the regional planning bodies. It is hoped that some cogent reasons for integrating planning and technology assessment on a regional level have been presented. The Operational aspects of such a process at this level presents a whole new field of investigation. 164 Fogtnotes 1U.S. National Commission of Technology, Automation and Economic Progress, Technology and the American Economy (Vol. I; Appendices, Vol. II-VI; WashingEOn, D.C.: Govern- ment Printing Office, 1966), p. 5. 2Institute on State Programming for the 1970's, State Planning: A Quest for Relevance (Chapel Hill, N.C.: University of’North CarOliha, I968). 3U.S. Congress, Senate, Committee on Government Operations, Statement by Harvey Brooks at Hearings before the Subcommittee on Government Research, Fred R. Harris, Chairman; 90th Congress, lst Session, on S.R. 110 (part 3; Washington, D.C.: Government Printing Office, 1967), p. 712. “Irene Taviss, Tgchnology and the City (Research Review NO. 
5; Harvard University Program on'TechnOlogy and Society; Cambridge, Mass.: Harvard University Press, 1970), p. 1. 5Ibid. 6Private conversation with Sanford Farness, Professor, Michigan State University, June 1973. 7Todd LaPorte, "The Context of Technology Assess- ment: A Changing Perspective for Public Organization," Public Administrative Review, January/February 1971, p. 65. 8Ibid., p. 65. 9Ibid., p. 66. 1°Alan Altschuler, The City Planning Process (Ithaca, N.Y.: Cornell University Press, 1965), p. 392. 11Louis H. Mayo, "The Management of Technology Assessment," Technology Assessment: Understanding Social Consequences of:TechnOIOgical Application, ed. by Raphael G. Kasper (Neinork: Praeger, 1972), pp. 80ff. 12Martin V. Jones, The Methodology of Technology Assessment: Some Basic PrgpositiOns (Washington, D.C.: The Mitre Corporation, June 1971), p. 31. 13Altschuler, op. cit., pp. 306ff. 1“Ibid., p. 307. 165 15Franklin P. Huddle, "The Social Function Of Technology Assessment," Technology_Assessment: Understand- ing the Social Consequences of TechnoiogicaI Applications, ed. By RaphaeI’G. Kasper (New York: 'Praeger, 1972), p. 164. lsIbid., p. 156. 17Ibid. Huddle notes that criteria for goal setting should include the following as a minimum: 0 the goal proposed must command general respect; it must be generally regarded as worth doing; it must be difficult but not impossible; it must not be in overt conflict with potent residual myths; it must be arduous but not exorbitantly expensive; it must Offer opportunity for wide participation; it must offer a durable general motivation; progress toward it must be measurable in understandable terms; 0 the outcome must be judged desirable; and 0 in practice, the outcome must be tolerable. laMarvin J. Cetron and Donald N. Dick, "Measurement and Technology Assessment," Tpe Methodology of Technology Assessment, ed. by Marvin J. Cetron and’BOdo Bartoda (New York: Gordon & Breach, 1972), p. 107. 19Joseph F. Coates, "Technology Assessment: The Benefits, Costs and Consequences," The Futurist, December 20Jay W. Forrester, Urban Dynamics (Cambridge, Mass.: M.I.T. Press, 1971), p. 9. 21Ibid. 22Erich Jantsch, Technological Planning and Social Futures (London: Cassell/Associated Business Programs, I972), p. 220. 23Ibid., p. 222. 2“T. J. Cartwright, "Problems, Solution and Strategies: A Contribution to the Theory and Practice of Planning," Journal of the American Institute of Planners, XXXIV, NO. 3 (May 1973), 179-187. Carthight's paper suggests that the nature of a problem governs both the range of possible solu- tions to the problem and the kind of strategies appropriate for achieving those solutions. The argument centers on the definition of four fundamental types Of problems, or namely: simple problems, compound problems, complex problems, and metaproblems. Each of these problem types is held to entail 166 a corresponding kind of strategy. From this, it is concluded that planners face a persistent dilemna in trying to choose between a broad definition of their problem and an exact strategy for solving it. The closer they come to one objective, the further they get from the other. 25John McHale, World Facts and Trends (2nd ed.; New York: Collier Books, 1972), pp. 1-2. 26Karl W. Deutsch, John Platt, and Dieter Senghass, "Conditions Favoring Major Advances in Social Sciences," Science, February 5, 1971, p. 459. 27Harold P. Green, "Limitations on Implementation of Technology Assessment," Apomic Energy Law Journal, 1971, p. 81. . 28Jantsch, op. cit., p. 30. 
29 Sanford Farness, unpublished class notes (Michigan State University, 1971). Professor Farness proposes that technologies and technological effects could conceivably be classified by referring to "Levels of Real Systems and Environments," i.e., System Types (modes of being):
1. Ideal (thought)
2. Self (transcendental ego)
3. Cultural (immaterial)
4. Personality (social self)
5. Social (institutional)
6. Artifact Systems
7. Biological Systems
8. Physical Systems
9. Chemical Systems

30 Jones, op. cit., pp. 137-208; and Mayo, op. cit., pp. 81-105. L. H. Mayo presents the following components of an adequate technology assessment: Criteria: 1. Participants; 2. Perspectives; 3. Situations; 4. Base Values; 5. Outcomes; 6. Effects: a. social institutions, b. values of citizenry, c. physical environment, d. basic decisional functions and structures of legal process, e. assessment system (see Appendix D).

31 Louis H. Mayo, "Scientific Method, Adversarial System and Technology Assessment" (Program of Policy Studies in Science and Technology; Washington, D.C.: George Washington University, November 1970), p. 19.

32 Ibid.

33 Ibid., p. 23.

34 Arthur S. Miller quoted in Louis H. Mayo, "Scientific Method, Adversarial System and Technology Assessment," p. 19.

35 Dennis W. Brezina quoted in Louis H. Mayo, "Scientific Method, Adversarial System and Technology Assessment," pp. 92-93.

36 Louis H. Mayo, "Scientific Method, Adversarial System and Technology Assessment," p. 88.

37 Ibid., p. 91.

38 Ibid., pp. 74-79; and Harold P. Green, "The Adversary Process in Technology Assessment," Technology Assessment: Understanding the Social Consequences of Technological Applications, ed. by Raphael G. Kasper (New York: Praeger, 1972), pp. 49-62.

39 Laurence H. Tribe, "Legal Frameworks for the Assessment and Control of Technology," Minerva, IX (April 1971), 243-255.

40 F. Stuart Chapin, Jr., Urban Land Use Planning (2nd ed.; Urbana, Ill.: University of Illinois Press, 1970), pp. 126-137.

41 Raymond A. Bauer, "Social Indicators," Planning for Diversity and Choice, ed. by Stanford Anderson (Cambridge, Mass.: M.I.T. Press, 1968), p. 251.

42 Ibid., p. 255.

43 Vary Taylor Coates, "Examples of Technology Assessments for the Federal Government" (Program of Policy Studies in Science and Technology; Washington, D.C.: George Washington University, 1970), p. 27.

CHAPTER IV

SUMMARY AND CONCLUSIONS

Man's conception of science and technology has been changing and evolving since the beginning of recorded history, yet throughout most of the ensuing time seldom has man's conception included a balanced view of the failure and promise of technology. It is true, of course, that man has evaluated technology, but until recently he has failed to enlarge the context of the perceived effects. That enlarging and emerging context of perceived effects is what this thesis has endeavored to describe and evaluate. It has been demonstrated not only that concepts of man's evaluation or assessment of technology have changed, but that little consensus has occurred in the development of modern definitions of technology assessment. The reality of the situation is that this plurality of views is necessary if the "perspectives and participants" goal of the adequacy model for assessments is to be achieved. Concomitantly, it must be kept in mind that all conceptions of the assessment process presented, and all of those reviewed but not presented, realize to some extent the social context of technology assessment.
168 169 In Chapter I it was first proposed that the information generated by technology assessment could be used in numerous forums at various levels. It was seen that the U.S. Congress, state agencies and bodies and other govern- mental jurisdictional levels need and use such information. The primary use of technology assessment information at these levels would be as a tool to aid decision makers in reviewing and analyzing prOposals for the implementation of a variety of technological programs concerning the econ- omy and environment. As a result it was suggested that several types of groups should pursue technology assessment, private industry, government at many levels, academic institutions and citizen groups. It was found that due to the large diversity of Opinions, values and the complexity of the organizational make-up, that government agencies had particular adminis- trative problems in performing technology assessments. Ergo, it was concluded that such problems could be best overcome with the use of a very independent agency balanced on the consumer side by well endowed and organized citizens groups. It was found in Chapter I that in order to assure the quality or adequacy Of assessments they should be con- ducted in two time frames, first, a one-time total problem assessment and secondly, a series Of cumulative ongoing 170 studies, with enough time allowed in both approaches to repeat individual process steps several times. In discussing the methodologies of technology assessment it was discovered that of those presently being used as technology assessments, none was without serious deficiencies as to adequacy; cost-benefit analysis, envi- ronmental impact statements, and technology forecasting all exhibit some techniques and behavior that will have to be rectified in order to be used for technology assessment. Problems of dealing with non-economic costing, cultural factors, and elitist judgments are paramount with the use of these methodologies; however, modified forms of each will be necessary in the assessment process. Furthermore, it was determined that in the use of develOping methodologies, new orientations will have to be taken in order to use both "hard" and "soft" data. It was concluded that rational simulation of the natural environ- ment and the development Of a reliable set of social indi- cators would best fit these needs. It was also determined that these two methodological approaches would best serve as starting points for the integration of planning and assessment techniques. The discussion of normative approaches in technology assessment and planning present severe problems of truth, rationality and Optimality, but it appears now that norma- tive processes characterize planning and assessment at every 171 level, and will continue to do so. The determination of the character and form of these normative approaches appear to be one of the many fields of research concerning assessment open to planners. An endeavor was made to synthesize many of the ideas concerning assessment methodologies and administration; this was only a partial success for two reasons. First, few actual assessments have been performed in planning depart- ments and jurisdictions, organized explicitly for urban, state and regional planning. Second, written and other materials documenting any such attempts are difficult to obtain. Therefore, the conclusions reached and summarized below can only be held as tentative pending a more thorough review and analysis of new and proposed assessments. 
It was seen that the traditional view of planning toward technology or technology assessment was one concerned with the attraction of technology-based industry and the evaluation of first-order effects, if technology was considered at all. It was then shown that, with the increasing complexity of technology and the increasing uncertainty emanating from it, planners reacted by including technology in planning considerations in order to decrease uncertainty.

The purposes and goals of planning were seen to be sometimes synonymous with, and clearly related to, those of technology assessment. In fact, a close parallel clearly exists between the seven major steps of a technology assessment (described in Appendix C) and the background-through-implementation phases of the planning process (described in Appendix B), and it serves to reiterate the coterminous nature of both processes. These relationships, if explored in further detail, would most certainly provide additional evidence that technology assessment is in fact subsumed under planning.

As determined by the comparative overview of process components, the use of adequate technology assessments would be precluded in some jurisdictions by factors of cost, degree of intelligibility, and availability of competent staff. It was concluded from this overview that technology assessment would be most feasible if instituted at regional or higher levels, and that it would be much more likely to be implemented as a new governmental process if allied with planning functions.

In reality, a technology assessment function would have to operate on a multitude of levels: national, state, regional, and local. The state and regional levels would act as data integration centers for the information gathered at the local, or measurement, level. In addition, planning entities could act as validating agents for regionally coordinated but locally implemented technology planning; what is necessary is that the planning bodies at the local level provide a quality check on regional assessments in the form of information fed back into the decisions affecting the local area.

A major point was that technology assessment could be adapted to planning in both a mundane and a unique way. It was shown that technology assessment could become an informational tool for planning, supplying information concerning both the environment and the social context of planning. It was also proposed that the adversarial aspects common to several conceptions of technology assessment are analogous to those of planning and would serve as a unique source of information concerning the interrelationship of man and technology.

In the final view it must be realized that all of the conclusions presented here can only be labeled tentative. Their primary purpose has been to elucidate trends and to serve as starting points for further research and analysis. For instance, a most rewarding pursuit would be to actually use some of the proposed assessment methodologies at various planning agencies and evaluate the results. Or a survey could be taken to determine which planning agencies are now carrying on technology assessments and how they are being pursued. In the end it should be seen that the use of technology assessment by planning will increase as the contextual analytical capabilities of such an orientation are realized.

LIST OF REFERENCES

Altschuler, Alan. The City Planning Process.
Ithaca, New York: Cornell University Press, 1965.

Anderson, Stanford. Planning for Diversity and Choice. Cambridge, Mass.: M.I.T. Press, 1968.

Barlowe, Raleigh. Land Resource Economics: The Political Economy of Rural and Urban Land Resource Use. Englewood Cliffs, N.J.: Prentice-Hall, 1958.

Bauer, Raymond A. "Social Indicators." Planning for Diversity and Choice. Edited by Stanford Anderson. Cambridge, Mass.: M.I.T. Press, 1968.

Branscomb, Lewis M. "Why People Fear Technology." The Futurist, V, No. 6 (December 1971), 232-233.

Carpenter, Richard. "Technology Assessment and the Congress." Technology Assessment: Understanding the Social Consequences of Technological Applications. Edited by Raphael G. Kasper. New York: Praeger, 1972.

Cartwright, T. J. "Problems, Solutions and Strategies: A Contribution to the Theory and Practice of Planning." Journal of the American Institute of Planners, XXXIX, No. 3 (May 1973).

Cetron, Marvin J., and Dick, Donald N. "Measurement and Technology Assessment." The Methodology of Technology Assessment. Edited by Marvin J. Cetron and Bodo Bartocha. New York: Gordon and Breach, 1972.

Chapin, F. Stuart, Jr. Urban Land Use Planning. 2nd ed. Urbana, Ill.: University of Illinois Press, 1970.

Coates, Joseph F. "Technology Assessment: Benefits, Costs, Consequences." The Futurist, V, No. 6 (December 1971), 225-231.

Coates, Vary Taylor. "Examples of Technology Assessments for the Federal Government." Program of Policy Studies in Science and Technology. Washington, D.C.: George Washington University, January 1970.

Coates, Vary Taylor. "Some Implications of the Technology Assessment Function for the Effective Public Decision Making Process." Program of Policy Studies in Science and Technology. Washington, D.C.: George Washington University, June 1971.

Coates, Vary Taylor. Technology and Public Policy: A Process of Technology Assessment in the Federal Government. Summary Report, Vol. I. National Aeronautics and Space Administration Grant, National Science Foundation Research Applied to National Needs; Program of Policy Studies in Science and Technology. Washington, D.C.: George Washington University, 1972.

Commoner, Barry. The Closing Circle. New York: Alfred A. Knopf, 1971.

The Council of State Governments. Power to the States: Mobilizing Public Technology. Lexington, Ky.: Council of State Governments, May 1972.

Craig, Don L. "Perspectives of State Planning." Unpublished paper, School of Architecture and Urban Planning, Michigan State University, 1972.

Danhof, Clarence. "Assessment Information Systems." Technology Assessment: Understanding the Social Consequences of Technological Applications. Edited by Raphael G. Kasper. New York: Praeger, 1972.

Detwyler, Thomas R. Man's Impact on the Environment. New York: McGraw-Hill, Inc., 1971.

Deutsch, Karl W., Platt, John, and Senghass, Dieter. "Conditions Favoring Major Advances in Social Sciences." Science (February 5, 1971).

Drucker, Peter F. Technology, Management and Society. New York: Harper and Row, 1970.

Duloga, Orlando. "The Emerging Law of Environmental Impact Statements: The Federal Role." Unpublished remarks of a seminar at the Annual Conference of the American Society of Planning Officials, Los Angeles, California, April 1973.

Etzioni, Amitai. The Active Society: A Theory of Societal and Political Processes. New York: The Free Press, Collier-Macmillan, 1968.

Farness, Sanford. Unpublished class notes, Michigan State University, School of Landscape Architecture and Urban Planning, 1971.

Farvar, M.
Taghi, and Milton, John P. The Careless Technology: Ecology and International Development. Garden City, N.Y.: Natural History Press, 1972.

Folk, Hugh. "The Role of Technology Assessment in Public Policy." Paper presented at a Panel on Technology Assessment, Meetings of the American Association for the Advancement of Science, Boston, Massachusetts, December 29, 1969.

Forrester, Jay W. Urban Dynamics. Cambridge, Mass.: M.I.T. Press, 1971.

Forrester, Jay W. World Dynamics. Cambridge, Mass.: M.I.T. Press, 1972.

Gamarra, Nancy T. Erroneous Predictions and Negative Comments Concerning Exploration, Territorial Expansion, Scientific and Technological Development. Selected statements, Legislative Reference Service. Washington, D.C.: Library of Congress, May 1969.

Gordon, Theodore J. The Current Methods of Futures Research. Paper P-11. Middletown, Conn., and Menlo Park, Calif.: The Institute for the Future, August 1971.

Green, Harold P. "The Adversary Process in Technology Assessment." Technology Assessment: Understanding the Social Consequences of Technological Applications. Edited by Raphael G. Kasper. New York: Praeger, 1972.

Green, Harold P. "Limitations on Implementation of Technology Assessment." Program of Policy Studies in Science and Technology. Washington, D.C.: George Washington University, April 1971.

Green, Leon, Jr. "Technology Assessment or Technology Harassment." Unpublished paper presented at a Seminar of the Program of Policy Studies in Science and Technology. Washington, D.C.: George Washington University, March 26, 1970.

Hill, Morris. "A Goals-Achievement Matrix for Evaluating Alternate Plans." Decision Making in Urban Planning. Edited by Ira M. Robinson. Beverly Hills, Calif.: Sage Publications, 1968.

Huddle, Franklin. "The Social Function of Technology Assessment." Technology Assessment: Understanding the Social Consequences of Technological Applications. Edited by Raphael G. Kasper. New York: Praeger, 1972.

Huddle, Franklin. "The Social Management of Technological Consequences." The Futurist, VI, No. 1 (February 1972), 16-18.

Institute on State Programming for the 1970's. State Planning: A Quest for Relevance. Chapel Hill, N.C.: University of North Carolina, 1968.

Jantsch, Erich. Technological Planning and Social Futures. London: Cassell/Associated Business Programs, 1972.

Jones, Martin V., et al. A Technology Assessment Methodology. Prepared in cooperation with and for the Office of Science and Technology, Executive Office of the President, by the Mitre Corporation, Falls Church, Virginia, June 1971.

Kahn, Herman, and Wiener, A. The Year 2000. New York: Macmillan Co., 1967.

Kidd, Charles V. "Technology Assessment in the Executive Office of the President." Technology Assessment: Understanding the Social Consequences of Technological Applications. Edited by Raphael G. Kasper. New York: Praeger, 1972.

Kranzberg, Melvin. Historical Aspects of Technology Assessment. Program of Policy Studies in Science and Technology. Washington, D.C.: George Washington University, August 1969.

Kranzberg, Melvin, and Purcell, Carroll W., Jr., eds. Technology in Western Civilization. New York: Oxford University Press, 1967.

LaPorte, Todd R. "The Context of Technology Assessment: A Changing Perspective for Public Organization." Public Administration Review, January-February 1971.

Laserson, Nina. "Technology Assessment at the Threshold." Innovation, XXVII (January 1972).

Leopold, Luna B., et al. "A Procedure for Evaluating Environmental Impact." Geological Survey Circular 645.
Geological Survey, U.S. Department of the Interior, Washington, D.C., 1971.

Maass, Arthur. "Benefit-Cost Analysis: Its Relevance to Public Investment Decisions." Quarterly Journal of Economics, LXXX (May 1966).

Marcuse, Herbert. One Dimensional Man. Boston, Mass.: Beacon Press, 1964.

Marx, Leo. The Machine in the Garden. Oxford: Oxford University Press, 1968.

Mayo, Louis H. "Commentary on a Paper by Dr. Frederick Seitz." Harmonizing Technological Developments and Social Policy in America. Edited by James C. Charlesworth. Monograph 11. Philadelphia: American Academy of Political and Social Science, 1970.

Mayo, Louis H. "The Management of Technology Assessment." Technology Assessment: Understanding the Social Consequences of Technological Applications. Edited by Raphael G. Kasper. New York: Praeger, 1972.

Mayo, Louis H. "Scientific Method, Adversarial System, and Technology Assessment." GWPS-MON 5, PB 196-638. Program of Policy Studies in Science and Technology. Washington, D.C.: George Washington University, November 1970.

McHale, John. World Facts and Trends. 2nd ed. New York: Collier Books, 1972.

McHarg, Ian L. Design with Nature. Garden City, N.Y.: Doubleday/Natural History Press, Doubleday & Co., 1969.

Mesthene, Emmanuel G. Technological Change: Its Impact on Man and Society. Cambridge, Mass.: Harvard University Press, 1970.

Michael, Donald N. The Unprepared Society: Planning for a Precarious Future. New York: Basic Books, 1968.

Miller, Arthur. "Toward the Techno-Corporate State: An Essay in American Constitutionalism." Villanova Law Review, XIV, No. 1. Villanova, Pa.: Villanova University Press, 1968, pp. 35-37.

Mottour, Ellis. "Technology Assessment and Citizen Action." Technology Assessment: Understanding the Social Consequences of Technological Applications. Edited by Raphael G. Kasper. New York: Praeger, 1972.

Mumford, Lewis. "The Paleotechnic Phase." Technics and Civilization. New York: Harcourt, Brace & Co., 1934.

National Academy of Sciences/National Research Council/Committee on Oceanography. Economic Benefit from Oceanographic Research. Publication 1228. Washington, D.C.: Government Printing Office, 1964.

Overly, Don H. "Societal Indicators and Technology Assessment." The Methodology of Technology Assessment. Edited by Marvin J. Cetron and Bodo Bartocha. New York: Gordon and Breach, 1972.

Ozbekhan, Hasan. "Toward a General Theory of Planning." Perspectives of Planning. Edited by Erich Jantsch. Paris: Organization for Economic Cooperation and Development, 1969.

Ozbekhan, Hasan. "The Triumph of Technology: 'Can' Implies 'Ought.'" Planning for Diversity and Choice. Edited by Stanford Anderson. Cambridge, Mass.: M.I.T. Press, 1968.

Peat, Marwick, Mitchell and Co. A Survey of Technology Assessment Today. Washington, D.C.: Prepared for the National Science Foundation, 1972.

Rowe, William D. "The Environment: A Systems Approach with Emphasis on Monitoring." The Methodology of Technology Assessment. Edited by Marvin J. Cetron and Bodo Bartocha. New York: Gordon and Breach, 1972.

Steward, Julian H. Theory of Culture Change: The Methodology of Multilinear Evolution. Urbana, Ill.: University of Illinois Press, 1967.

Stuart, Robert C. "Interdisciplinary Conference on Environmental Impact Statement Recommends Better Research, Coordination, and More Responsible Evaluation." Newsletter, American Institute of Planners, VIII, No. 1 (January 1973).

Taviss, Irene. Technology and the City. Research Review, No. 5. Harvard University Program on Technology and Society.
Cambridge, Mass.: Harvard University Press, 1970.

Toffler, Alvin. Future Shock. New York: Random House, 1970.

Tribe, Laurence H. "Legal Frameworks for the Assessment and Control of Technology." Minerva, IX (April 1971).

Tribe, Laurence H. "Towards a New Technological Ethic: The Role of Legal Liability." Impact of Science on Society, XXI (July-September 1971).

U.S. Congress. House. Committee on Science and Astronautics. Technology: Processes of Assessment and Choice. Report of the National Academy of Sciences. Washington, D.C.: Government Printing Office, July 1969.

U.S. Congress. Senate. Committee on Government Operations. Statement by Harvey Brooks at Hearings before the Subcommittee on Government Research (Fred R. Harris, Chairman). 90th Congress, 1st Session, on S.R. 110 (part 3). Washington, D.C.: Government Printing Office, 1967.

U.S. Council on Environmental Quality. Third Annual Report. Washington, D.C.: Government Printing Office, 1972.

U.S. Department of Housing and Urban Development. Draft: Environmental Clearance Worksheet, Appendix B-I. Washington, D.C.: Government Printing Office, 1972.

U.S. Environmental Protection Agency. Quality of Life Indicators: A Review of State-of-the-Art and Guidelines Derived to Assist in Developing Environmental Indicators. Washington, D.C.: Government Printing Office, 1972.

U.S. National Commission on Technology, Automation and Economic Progress. "Technology and the American Economy." Report of the U.S. National Commission on Technology, Automation and Economic Progress. Washington, D.C.: Government Printing Office, 1966.

U.S. Statutes. Public Law 91-190, 83 Statutes 852, January 1970.

U.S. Statutes. Public Law 92-484, 86 Statutes 797, 1972.

Wartofsky, Marx. "Telos and Technique: Models as Modes of Action." Planning for Diversity and Choice. Edited by Stanford Anderson. Cambridge, Mass.: M.I.T. Press, 1968.

White, Lynn. Medieval Technology and Social Change. New York: Oxford University Press, 1966.

APPENDICES

EXPLANATION OF APPENDIX A

Appendix A portrays the "technology assessment process." The top half of the appendix, entitled "The Context of Technological Assessment":

1. shows how various factors, both those that are related to technology (A) and those that are not (B), interact to produce the "Societal Problems and Opportunities" (D) that, in turn, create the need for technology assessment studies; and

2. identifies the enabling social mechanisms (C), e.g., the institutional channels through which the findings of a technology assessment study are reviewed and possibly implemented.

The bottom half of Appendix A, entitled "Elements of Technology Assessment," relates the analytical inputs (E) that go into a technology assessment study to the analytical outputs (G) generated by a study.

1. The inputs (E), the professional knowledge that provides the information base for technology assessment studies, consist of:

- Assessment Methodology--concepts, multidiscipline problem-solving techniques, and analytical procedures.

- Pilot and Prior Assessment Studies--which provide exploratory overviews of the basic issues and problems that should be examined in greater depth by comprehensive technology assessment studies. Sometimes relatively comprehensive assessment studies completed previously on the same or similar technologies are also available for reference.

- Documented Empirical and Experimental Research--produced by laboratories, by social-science surveys, and as by-products of the practicing professions (e.g., medicine).
For example, the health effects of exposure to various air pollutants.

- Expert Opinion--the opinions of expert scientists (including social scientists), relied upon for the latest research findings not yet professionally documented and/or for an interpretation or extrapolation of the documented data base.

2. The right-hand box of the bottom portion of Appendix A summarizes the types of outputs (G) produced by a typical technology assessment study. These consist principally of:

- Descriptive Findings--covering the major issues bearing on the assessment study, a description of the technologies being assessed, a description of the overall societal state in which the assessed technology will be embedded, and a projection of the initial and secondary impacts that the technology will have.

- Analyzed Action Options--an analysis of how governments and citizens can maximize the opportunities and minimize the problems generated by a technology. This analysis enumerates the benefits, costs, and side effects of various action options.

3. Analytical procedures, identified as (F) in Appendix A, produce the outputs (G) from the inputs (E). (A brief illustrative sketch of this input-procedure-output relation follows the Appendix A chart below.)

The evaluation, and possible implementation, of the Action Options generated by technology assessment studies are the function of public and private institutions (see Enabling Social Mechanisms (C)). These institutions have numerous standard "activities" (research and development, finance, manufacturing, marketing, control, education, etc.). The assessed technologies and the action options may both impact on these "activities." Public opinion is another "enabling" social mechanism.

Source: Martin V. Jones, The Methodology of Technology Assessment (Washington, D.C.: The Mitre Corporation, 1971).

APPENDIX A

THE TECHNOLOGY ASSESSMENT PROCESS

[Chart in two panels, "The Context of Technology Assessment" and "Elements of Technology Assessment," diagramming the technological and nontechnological factors, societal problems and opportunities, enabling social mechanisms, assessment inputs, analytical procedures, and outputs described above; the original figure is not legible in this copy.]
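The input-procedure-output relation described in the explanation above can be pictured, purely for illustration, as a simple data structure. The sketch below is in Python and is hypothetical throughout: the field names merely stand for the classes of inputs (E) and outputs (G) named in the explanation, and the "procedure" (F) is a stand-in that echoes its inputs rather than an actual assessment method.

    # Hypothetical sketch of the Appendix A elements: inputs (E) pass through
    # analytical procedures (F) to yield outputs (G). Field names are illustrative.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class AssessmentInputs:                      # (E)
        methodology_notes: List[str] = field(default_factory=list)
        prior_studies: List[str] = field(default_factory=list)
        documented_research: List[str] = field(default_factory=list)
        expert_opinion: List[str] = field(default_factory=list)

    @dataclass
    class AssessmentOutputs:                     # (G)
        descriptive_findings: List[str] = field(default_factory=list)
        analyzed_action_options: List[str] = field(default_factory=list)

    def analytical_procedures(inputs: AssessmentInputs) -> AssessmentOutputs:
        """Stand-in for (F): here it only restates its inputs as placeholder findings."""
        findings = ["Finding drawn from: " + source
                    for source in inputs.documented_research + inputs.expert_opinion]
        options = ["Option suggested by: " + study for study in inputs.prior_studies]
        return AssessmentOutputs(descriptive_findings=findings,
                                 analyzed_action_options=options)

    # Example use:
    # outputs = analytical_procedures(AssessmentInputs(expert_opinion=["air-quality panel"]))

In the terms of the chart, the enabling social mechanisms (C) would then review such outputs and decide whether any of the listed action options are implemented.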
APPENDIX B

A GENERALIZED PLANNING PROCESS

[Flow chart, "Examples of Studies," tracing the planning process from background inventory and data analysis through goals, principles, study design, basic surveys and natural-environment studies, objectives, standards, and economic policies to plan and regulation policies, land use, transportation, community facilities, implementation, administration, and evaluation; the original diagram is not legible in this copy.]

APPENDIX C

SEVEN MAJOR STEPS IN MAKING A TECHNOLOGY ASSESSMENT

STEP 1: Define the Assessment Task
   Discuss relevant issues and any major problems
   Establish scope (breadth and depth) of inquiry
   Develop project ground rules

STEP 2: Describe Relevant Technologies
   Describe major technology being assessed
   Describe other technologies supporting the major technology
   Describe technologies competitive to the major and supporting technologies

STEP 3: Develop State-of-Society Assumptions
   Identify and describe major nontechnological factors influencing the application of the relevant technologies

STEP 4: Identify Impact Areas
   Ascertain those societal characteristics that will be most influenced by the application of the assessed technology

STEP 5: Make Preliminary Impact Analysis
   Trace and integrate the process by which the assessed technology makes its societal influence felt

STEP 6: Identify Possible Action Options
   Develop and analyze various programs for obtaining maximum public advantage from the assessed technologies

STEP 7: Complete Impact Analysis
   Analyze the degree to which each action option would alter the specific societal impacts of the assessed technology discussed in Step 5.

Source: Martin V. Jones, A Technology Assessment Methodology: Some Basic Propositions. Mitre Corporation, 1971.

APPENDIX D

ADEQUATE TECHNOLOGY ASSESSMENT CRITERIA

The adequacy of an assessment can be expressed in terms of the Information Selection Operations and the Decisional Procedural Operations of the assessment entity. The following criteria have relevance to Information Selection:

1. Availability and timeliness of data.
2. Economy of data (cost of obtaining as related to value).
3. Dependability (accuracy, reliability).
4. Comprehensiveness (contextual, systematic).
5. Openness (opportunity for participation).

The adequacy of the application of such information to the assessment process can be measured in terms of the attention to, and quality of analysis of, the following options in the Decisional Phase:

1. Specification of the social objectives to be achieved by the proposed technological application.

2. Controlling contextual factors:
   - objectives and authority of the assessment forum.
   - demands of participants.
   - resources available.
   - relevant institutional framework.
   - customary practices in the social context.
   - influential trends affecting the implementation of the proposed application.

3. Consideration of alternative proposals designed to achieve the same or similar social objectives.

4. The projection of the probable outcomes of each alternative proposal.

5. The prediction of specific consequences of each outcome.

6. Cost-benefit assessments of the alternative proposals in terms of an explicit schema of social norms.

These criteria, or a similar scheme of criteria of adequacy more suitable for particular types of assessment contexts, can be applied to the performance evaluation of: (1) a specific assessment, taking into account the various constraints which may limit the scope of the assessment; or to the evaluation of the adequacy of (2) a total Impact Assessment, whether performed at a given point in time by one assessment entity or by an aggregate of assessment entities through a period of time.

Source: Louis H.
Mayo, "Scientific Method, Adversarial System, and Technology Assessment,” Program of Policy Studies in Science and Technology (Washington, D.C.: George Washington University, 1970).