WHAT DOES A LASER ENGINEER DO?

A laser engineer is a scientist or engineer who has extensive training in designing, building, operating, and maintaining high-energy manufacturing or research laser equipment.

The qualifications for this position will likely depend upon the particular area in which the engineer works. Those in a research and development position will likely be designing new laser technology, improving existing technology, and developing new products. Those in manufacturing will be more hands-on, building products and parts and designing processes that utilize solid state laser technology.

Educational requirements will vary among employers, but most laser engineers have a bachelor's degree in some type of science-related field, like physics, engineering, laser technology or optics. Some laser engineers have a master's degree or even a PhD depending on the position.

Most laser engineers have experience in laser product development, applied laser research, or modeling solid-state lasers in an industrial or manufacturing context. Solving problems in the use of laser equipment is one of the basic responsibilities of a laser engineer.

Diagnosing technical problems and using test equipment to fix lasers is part of this job. A laser engineer working in a manufacturing context will likely need good interpersonal skills because they communicate frequently with operators and vendors.

A laser engineer working in a research context is more likely to focus on the conceptual and creative aspect of laser technology, so collaborative skills and teamwork are more important.

One of the primary job duties of a laser engineer is working with specialized computers to program basic functions for laser equipment, enter data, and create software to work with the laser equipment. Along with the standard business programs, a laser engineer needs a working knowledge of AutoCAD®, materials resource planning, product data management, and other engineering-related software.

Laser engineers working in the manufacturing area create or review technical drawings and coordinate with manufacturing personnel to determine the actual steps and type of lasers necessary in various manufacturing processes.

A research laser engineer also needs to create and review detailed technical drawings to be used in the design and creation of laser technology and equipment.

Notes

1. Hands-on – practical; involving direct, physical work rather than theory.

2. PhD – from Doctor of Philosophy; = Ph.D.: a doctoral academic degree (roughly corresponding to the Candidate of Sciences degree in Russia).

DIGITAL MAPPING

Digital mapping (also called digital cartography) is the process by which a collection of data is compiled and formatted into a virtual image. The primary function of this technology is to produce maps that give accurate representations of a particular area, detailing major road arteries and other points of interest. The technology also allows the calculation of distances from one place to another.

Though digital mapping can be found in a variety of computer applications, such as Google Earth, the main use of these maps is with the Global Positioning System, or GPS satellite network, used in standard automotive navigation systems.

History. The roots of digital mapping lie within traditional paper maps. Paper maps provide basic landscapes similar to digitized road maps, yet are often cumbersome, cover only a designated area, and lack many specific details such as roadblocks. In addition, there is no way to “update” a paper map except to obtain a new version.

On the other hand, digital maps, in many cases, can be updated through synchronization with updates from company servers. Early digital maps had the same basic functionality as paper maps – that is, they provided a “virtual view” of roads generally outlined by the terrain encompassing the surrounding area.

However, as digital maps have grown with the expansion of GPS technology in the past decade, live traffic updates, points of interest and service locations have been added to enhance digital maps to be more “user conscious”.

Traditional “virtual views” are now only part of digital mapping. In many cases, users can choose between virtual maps, satellite (aerial views), and hybrid (a combination of virtual map and aerial views) views. With the ability to update and expand digital mapping devices, newly constructed roads and places can be added to appear on maps.

Data Collection. Digital maps rely heavily upon a vast amount of data collected over time. Most of the information that comprises digital maps is the culmination of satellite imagery as well as street-level information. Maps must be updated frequently to provide users with the most accurate reflection of a location.

While there is a wide spectrum of companies that specialize in digital mapping, the basic premise is that digital maps will accurately portray roads as they actually appear to give "life-like experiences".

Functionality and Use. Computer programs and applications such as Google Earth and Google Maps provide map views from space and street level of much of the world. Used primarily for recreation, Google Earth provides digital mapping in personal applications, such as tracking distances or finding locations.

The development of mobile computing (tablet PCs, laptops, etc.) has recently (since about 2000) spurred the use of digital mapping in the sciences and applied sciences.

As of 2009, science fields that use digital mapping technology include geology, engineering, architecture, land surveying, mining, forestry, environment, and archaeology. The principal use by which digital mapping has grown in the past decade has been its connection to Global Positioning System (GPS) technology.

GPS is the foundation behind digital mapping navigation systems. The coordinates and position, as well as atomic time, obtained by a terrestrial GPS receiver from GPS satellites orbiting the Earth interact to provide the digital mapping programming with the points of origin, in addition to the destination points, needed to calculate distance.

This information is then analyzed and compiled to create a map that provides the easiest and most efficient way to reach a destination.

More technically speaking, the device operates in the following manner: GPS receivers collect data from the constellation of "at least twenty-four GPS satellites" orbiting the Earth (a fix requires signals from at least four of them), calculating position in three dimensions.

1. The GPS receiver then utilizes its position to provide GPS coordinates, or exact points of latitude and longitude, derived from the GPS satellites.

2. The points, or coordinates, are accurate to within approximately "10-20 meters" of the actual location.

3. The beginning point, entered via GPS coordinates, and the ending point (address or coordinates) input by the user, are then entered into the digital map.

4. The map outputs a real-time visual representation of the route. The map then moves along the path of the driver.

5. If the driver drifts from the designated route, the navigation system will use the current coordinates to recalculate a route to the destination location.
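The distance calculation described in the steps above can be sketched with the haversine great-circle formula, a standard approach in navigation software. This is a minimal illustration, not the algorithm of any particular navigation system; the function name and sample coordinates are invented for the example:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS coordinates
    given in decimal degrees (haversine formula)."""
    R = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

# London (51.5074 N, 0.1278 W) to Paris (48.8566 N, 2.3522 E)
print(round(haversine_km(51.5074, -0.1278, 48.8566, 2.3522)))  # → 344
```

Note that a real navigation system routes along the road network rather than in a straight line; the great-circle value is only the lower bound from which the routing calculation starts.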

Notes

1. Satellite imagery – images of the Earth acquired from artificial satellites.

2. Life-like experiences – a realistic, true-to-life representation.

3. Tablet PC – tablet personal computer; handheld PC.

CARTOGRAPHY

Cartography (from Greek chartis = map and graphein = write) is the study and practice of making maps. Combining science, aesthetics, and technique, cartography builds on the premise that reality can be modeled in ways that communicate spatial information effectively.

The fundamental problems of traditional cartography are to:

1. Set the map's agenda and select traits of the object to be mapped. This is the concern of map editing. Traits may be physical, such as roads or land masses, or may be abstract, such as toponyms or political boundaries.

2. Represent the terrain of the mapped object on flat media. This is the concern of map projections.

3. Eliminate characteristics of the mapped object that are not relevant to the map's purpose and reduce the complexity of the characteristics that will be mapped. This is the concern of generalization.

4. Orchestrate the elements of the map to best convey its message to its audience. This is the concern of map design.

Modern cartography is closely integrated with geographic information science (GIScience) and constitutes many theoretical and practical foundations of geographic information systems.

Technological changes. Mapping can be done with GPS and a laser rangefinder directly in the field (for example, with Field-Map technology). Real-time map construction improves the productivity and quality of mapping, for example when mapping forest structure (positions of trees, dead wood and canopy).

In cartography, technology has continually changed in order to meet the demands of new generations of mapmakers and map users.

The first maps were manually constructed with brushes and parchment; therefore, they varied in quality and were limited in distribution. The advent of magnetic devices, such as the compass and, much later, magnetic storage devices, allowed for the creation of far more accurate maps and the ability to store and manipulate them digitally. Advances in mechanical devices, such as the printing press and the vernier, allowed for the mass production of maps and the ability to make accurate reproductions from more accurate data.

Optical technology, such as the telescope and other devices that use telescopes, allowed for accurate surveying of land and the ability of mapmakers and navigators to find their latitude by measuring angles to the North Star at night or the Sun at noon.

Advances in photochemical technology, such as the lithographic and photochemical processes, have allowed for the creation of maps that have fine details, do not distort in shape and resist moisture and wear. This also eliminated the need for engraving, which further shortened the time it takes to make and reproduce maps.

Advances in electronic technology in the 20th century ushered in another revolution in cartography. Ready availability of computers and peripherals such as monitors, plotters, printers, scanners (remote and document) and analytic stereo plotters, along with computer programs for visualization, image processing, spatial analysis, and database management, have democratized and greatly expanded the making of maps.

The ability to superimpose spatially located variables onto existing maps created new uses for maps and new industries to explore and exploit these potentials.

These days most commercial-quality maps are made using software that falls into one of three main types: CAD, GIS and specialized illustration software. Spatial information can be stored in a database, from which it can be extracted on demand. These tools lead to increasingly dynamic, interactive maps that can be manipulated digitally. With field-rugged computers, GPS and laser rangefinders, it is possible to perform mapping directly in the terrain.

The construction of the map in real time improves productivity and the quality of the result. Real-time mapping is done, for example, with Field-Map technology.

Map types. In understanding basic maps, the field of cartography can be divided into two general categories: general cartography and thematic cartography. General cartography involves those maps that are constructed for a general audience and thus contain a variety of features.

General maps exhibit many reference and location systems and often are produced in a series. For example, the 1:24,000 scale topographic maps of the United States Geological Survey (USGS) are a standard as compared to the 1:50,000 scale Canadian maps.

The government of the UK produces the classic 1:50,000 (replacing the older one inch to one mile) "Ordnance Survey" maps of the entire UK, along with a range of correlated larger- and smaller-scale maps of great detail.

Thematic cartography involves maps of specific geographic themes, oriented toward specific audiences. As the volume of geographic data has exploded over the last century, thematic cartography has become increasingly useful and necessary to interpret spatial, cultural and social data.

An orienteering map combines both general and thematic cartography, designed for a very specific user community. The most prominent thematic element is shading, which indicates degrees of difficulty of travel due to vegetation.

The vegetation itself is not identified, merely classified by the difficulty that it presents.

A topographic map is primarily concerned with the topographic description of a place, including (especially in the 20th and 21st centuries) the use of contour lines showing elevation.

Terrain or relief can be shown in a variety of ways. A topological map is a very general type of map, the kind you might sketch on a napkin. It often disregards scale and detail in the interest of clarity of communicating specific route or relational information. Beck's London Underground map is an iconic example.

Though it is the most widely used map of "The Tube", it preserves little of reality: it varies scale constantly and abruptly, it straightens curved tracks, and it contorts directions.

The only topography on it is the River Thames, letting the reader know whether a station is north or south of the river. That and the topology of station order and interchanges between train lines are all that is left of the geographic space. Yet those are all a typical passenger wishes to know, so the map fulfils its purpose.

Map symbology. The quality of a map's design affects its reader's ability to extract information and to learn from the map.

Cartographic symbology has been developed in an effort to portray the world accurately and effectively convey information to the map reader. A legend explains the pictorial language of the map, known as its symbology.

The title indicates the region the map portrays; the map image portrays the region and so on. Although every map element serves some purpose, convention only dictates inclusion of some elements, while others are considered optional.

A menu of map elements includes the neatline (border), north arrow, overview map, bar scale, projection and information about the map sources, accuracy and publication. When examining a landscape, scale can be intuited from trees, houses and cars.

Not so with a map. Even such a simple thing as a north arrow is crucial. It may seem obvious that the top of a map should point north, but this might not be the case.

Map coloring is also very important. How the cartographer displays the data in different hues can greatly affect the understanding or feel of the map. Different intensities of hue portray different objectives the cartographer is attempting to get across to the audience.

Today, personal computers can display up to 16 million distinct colors at a time. This fact allows for a multitude of color options for even the most demanding maps. Moreover, computers can easily hatch patterns in colors to give even more options. This is very beneficial when symbolizing data in categories such as quintile and equal-interval classifications.
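The quintile and equal-interval classifications mentioned above can be sketched in a few lines. This is a minimal illustration, not taken from any particular GIS package; the function names and sample data are invented for the example:

```python
def equal_interval_classes(values, k=5):
    """Assign each value to one of k equal-width classes (0..k-1)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k
    return [min(int((v - lo) / width), k - 1) for v in values]

def quantile_classes(values, k=5):
    """Assign each value to one of k classes with roughly equal counts
    (k=5 gives quintiles)."""
    ranked = sorted(range(len(values)), key=lambda i: values[i])
    classes = [0] * len(values)
    for rank, i in enumerate(ranked):
        classes[i] = min(rank * k // len(values), k - 1)
    return classes

data = [2, 5, 9, 14, 20, 31, 45, 60, 80, 100]
print(equal_interval_classes(data))  # → [0, 0, 0, 0, 0, 1, 2, 2, 3, 4]
print(quantile_classes(data))        # → [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
```

Each class number would then be mapped to one hue or intensity on the finished map, which is where the wide color palette of modern displays pays off.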

Map projections. The Earth being spherical, any flat representation generates distortions, where shapes, distances, and areas cannot all be conserved simultaneously.

The mapmaker must choose a suitable map projection according to the space to be mapped and the purpose of the map.
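A concrete way to see the projection trade-off is to implement the simplest projection of all. The equirectangular projection below maps latitude and longitude straight to planar y and x; distances are true only along meridians and along one chosen standard parallel, and everything else is distorted. The function name and sample point are illustrative:

```python
import math

def equirectangular(lat, lon, phi0=0.0):
    """Project (lat, lon) in degrees onto a plane.
    Distances are true only along meridians and along the
    standard parallel phi0; shapes and areas are distorted."""
    x = math.radians(lon) * math.cos(math.radians(phi0))
    y = math.radians(lat)
    return x, y

x, y = equirectangular(60.0, 90.0)
print(round(x, 4), round(y, 4))  # → 1.5708 1.0472
```

Other projections (Mercator, conic, azimuthal) make different choices about which property to preserve, which is exactly why the mapmaker must match the projection to the map's purpose.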

Notes

1. Laser rangefinder – a device that uses a laser beam to measure distance.

2. Parchment – a writing material made from animal skin; parchment paper.

3. Magnetic storage – a magnetic data-storage device.

4. Printing press – a machine for printing text and images.

5. Rugged computer – a computer specially hardened for reliable operation in harsh conditions.

6. Ordnance Survey map – a map of Great Britain or Ireland produced by the national mapping agency of those countries.

7. The Tube – the underground railway in London; the London Underground.

8. Bar scale – a graphic scale bar on a map.

9. Map coloring – the coloring of a map.

AERIAL PHOTOGRAPHY

Aerial photography is the taking of photographs of the ground from an elevated position. The term usually refers to images in which the camera is not supported by a ground-based structure. Cameras may be handheld or mounted, and photographs may be taken by a photographer, triggered remotely or triggered automatically.

Platforms for aerial photography include fixed-wing aircraft, helicopters, balloons, blimps and dirigibles, rockets, kites, poles, parachutes, and vehicle mounted poles.

Aerial photography should not be confused with Air-to-Air Photography, when aircraft serve both as a photo platform and subject.

Aerial photography is used in cartography (particularly in photogrammetric surveys, which are often the basis for topographic maps), land-use planning, archaeology, movie production, environmental studies, surveillance, commercial advertising, conveyancing, and artistic projects.

In the United States, aerial photographs are used in many Phase Environmental Site Assessments for property analysis. Aerial photos are often processed using GIS software.

Oblique photographs. Photographs taken at an angle are called oblique photographs. If they are taken from a low angle relative to the Earth's surface, they are called low oblique, and photographs taken from a high angle are called high or steep oblique.

Vertical photographs. Vertical photographs are taken straight down. They are mainly used in photogrammetry and image interpretation. Pictures that will be used in photogrammetry are traditionally taken with special large format cameras with calibrated and documented geometric properties.

Combinations. Aerial photographs are often combined. Depending on their purpose it can be done in several ways: panoramas can be made by stitching several photographs taken with one handheld camera; in pictometry five rigidly mounted cameras provide one vertical and four low oblique pictures that can be used together.

In some digital cameras for aerial photogrammetry images from several imaging elements, sometimes with separate lenses, are geometrically corrected and combined to one image in the camera.

Orthophotos. Vertical photographs are often used to create orthophotos, photographs which have been geometrically "corrected" so as to be usable as a map.

In other words, an orthophoto is a simulation of a photograph taken from an infinite distance, looking straight down to nadir. Perspective must obviously be removed, but variations in terrain should also be corrected for.

Multiple geometric transformations are applied to the image, depending on the perspective and terrain corrections required on a particular part of the image.
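One common form of those geometric corrections, once the terrain and perspective work is done, is the affine mapping stored in a "world file" alongside the orthophoto, which ties each pixel to a ground coordinate. A sketch of that final step; the six parameters and the sample georeference are invented for illustration:

```python
def pixel_to_world(col, row, transform):
    """Map an orthophoto pixel (col, row) to ground coordinates
    using six affine parameters in the ESRI world-file order:
    (A, D, B, E, C, F) = (x-scale, y-skew, x-skew, y-scale, x-origin, y-origin)."""
    A, D, B, E, C, F = transform
    x = A * col + B * row + C
    y = D * col + E * row + F
    return x, y

# 0.5 m ground resolution, north-up image, top-left pixel at (400000, 6500000)
tf = (0.5, 0.0, 0.0, -0.5, 400000.0, 6500000.0)
print(pixel_to_world(100, 200, tf))  # → (400050.0, 6499900.0)
```

The negative y-scale reflects the fact that image rows count downward while map northings count upward.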

Orthophotos are commonly used in geographic information systems, such as those used by mapping agencies (e.g. Ordnance Survey) to create maps.

Once the images have been aligned, or 'registered', with known real-world coordinates, they can be widely deployed. Large sets of orthophotos, typically derived from multiple sources and divided into "tiles" (each typically 256 x 256 pixels in size), are widely used in online map systems such as Google Maps.
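The tiling scheme used by such online map systems is commonly the Web Mercator "slippy map" convention, in which zoom level z divides the world into 2^z × 2^z tiles. The conversion from a coordinate to a tile index can be sketched as follows (the function name is illustrative; the formula is the standard one used by OpenStreetMap-style tile servers):

```python
import math

def latlon_to_tile(lat, lon, zoom):
    """WGS84 lat/lon (degrees) to x/y tile indices in the
    Web Mercator tiling scheme used by most online slippy maps."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

print(latlon_to_tile(51.5074, -0.1278, 10))  # central London → (511, 340)
```

The map client then requests only the handful of 256 × 256 pixel tiles covering the current view, which is what makes panning and zooming over a worldwide image set practical.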

OpenStreetMap offers the use of similar orthophotos for deriving new map data. Google Earth overlays orthophotos or satellite imagery onto a digital elevation model to simulate 3D landscapes.

Aerial video. With advancements in video technology, aerial video is becoming more popular. Orthogonal video is shot from aircraft to map pipelines, crop fields, and other points of interest.

Using GPS, video may be embedded with metadata and later synced with a video mapping program. This ‘Spatial Multimedia’ is the timely union of digital media (still photography, motion video, stereo, panoramic imagery sets, immersive media constructs, audio, and other data) with location and date-time information from the GPS and other location technologies.

Aerial videos are an emerging form of Spatial Multimedia which can be used for scene understanding and object tracking. The input video is captured by low-flying aerial platforms and typically exhibits strong parallax from non-ground-plane structures.

The integration of digital video, global positioning systems (GPS) and automated image processing will improve the accuracy and cost-effectiveness of data collection and reduction.

Notes

1. Aerial photography – taking photographs of the ground from the air.

2. Blimp – a non-rigid airship; aerostat.

3. Air-to-air – occurring between aircraft in flight.

4. Surveillance – observation, monitoring.

5. Oblique photograph – an aerial photograph taken at an angle to the ground.

6. Low oblique photograph – an oblique aerial photograph that does not include the horizon.

7. High oblique photograph – an oblique aerial photograph that includes the horizon.

8. Pictometry – pictometry (an information system combining aerial photography with software that can show objects from different sides and at different scales).

9. Nadir – nadir (the point opposite the zenith).

10. Ordnance Survey – the national mapping agency of Great Britain and Ireland.

11. Satellite imagery – images of the Earth acquired from artificial satellites.

12. Digital elevation model – a digital model of the terrain surface.


PART III

Home-reading texts

Computers

Generally, any device that can perform numerical calculations, even an adding machine, may be called a computer, but nowadays this term is used especially for digital computers. Computers that once weighed 30 tons now may weigh as little as 1.8 kilograms. Microchips and microprocessors have considerably reduced the cost of the electronic components required in a computer. Computers come in many sizes and shapes, such as special-purpose computers, laptops, desktops, minicomputers and supercomputers.

Special-purpose computers can perform specific tasks and their operations are limited to the programmes built into their microchips. These computers are the basis for electronic calculators and can be found in thousands of electronic products, including digital watches and automobiles. Basically, these computers do the ordinary arithmetic operations such as addition, subtraction, multiplication and division.

General-purpose computers are much more powerful because they can accept new sets of instructions. The smallest fully functional computers are called laptop computers. Most of the general-purpose computers known as personal or desktop computers can perform almost 5 million operations per second.

Today's personal computers are known to be used for different purposes: for testing new theories or models that cannot be examined with experiments, as valuable educational tools due to various encyclopedias, dictionaries and educational programmes, and in book-keeping, accounting and management. Proper application of computing equipment in different industries is likely to result in proper management, effective distribution of materials and resources, more efficient production and trade.

Minicomputers are high-speed computers that have greater data manipulating capabilities than personal computers do and that can be used simultaneously by many users. These machines are primarily used by larger businesses or by large research and university centers. The speed and power of supercomputers, the highest class of computers, are almost beyond comprehension, and their capabilities are continually being improved. The most complex of these machines can perform nearly 32 billion calculations per second and store 1 billion characters in memory at one time, and can do in one hour what a desktop computer would take 40 years to do. They are used commonly by government agencies and large research centers. Linking together networks of several small computer centers and programming them to use a common language has enabled engineers to create the supercomputer. The aim of this technology is to elaborate a machine that could perform a trillion calculations per second.

 Questions:

1. What are the main types of computers?

2. How do the computers differ in size and methods of their application?

3. What are the main trends in the development of the computer technology?

The early years

Until the late 1970s, the computer was viewed as a massive machine that was useful to big business and big government but not to the general public. Computers were too cumbersome and expensive for private use, and most people were intimidated by them. As technology advanced, this was changed by a distinctive group of engineers and entrepreneurs who rushed to improve the designs of then current technology and to find ways to make the computer attractive to more people. Although these innovators of computer technology were very different from each other, they had a common enthusiasm for technical innovation and the capacity to foresee the potential of computers. This was a very competitive and stressful time, and the only people who succeeded were the ones who were able to combine extraordinary engineering expertise with progressive business skills and an ability to foresee the needs of the future.

Much of this activity was centered in the Silicon Valley in northern California where the first computer-related company had located in 1955. That company attracted thousands of related businesses, and the area became known as the technological capital of the world. Between 1981 and 1986, more than 1000 new technology-oriented businesses started there. At the busiest times, five or more, new companies started in a single week. The Silicon Valley attracted many risk-takers and gave them an opportunity to thrive in an atmosphere where creativity was expected and rewarded.

Robert Noyce was a risk-taker who was successful both as an engineer and as an entrepreneur. The son of an Iowa minister, he was informal, genuine, and methodical. Even when he was running one of the most successful businesses in the Silicon Valley, he dressed informally and his office was an open cubicle that looked like everyone else's. A graduate of the Massachusetts Institute of Technology (MIT), he started working for one of the first computer-related businesses in 1955. While working with these pioneers of computer engineering, he learned many things about computers and business management.

As an engineer, he co-invented the integrated circuit, which was the basis for later computer design. This integrated circuit was less than an eighth of an inch square but had the same power as a transistor unit that was over 15 inches square or a vacuum tube unit that was 6.5 feet square. As a businessman, Noyce co-founded Intel, one of the most successful companies in the Silicon Valley and the first company to introduce the microprocessor. The microprocessor chip became the heart of the computer, making it possible for a large computer system that once filled an entire room to be contained on a small chip that could be held in one's hand. The directors of Intel could not have anticipated the effects that the microprocessor would have on the world. It made possible the invention of the personal computer and eventually led to the birth of thousands of new businesses. Noyce's contributions to the development of the integrated circuit and the microprocessor earned him both wealth and fame before his death in 1990. In fact, many people consider his role to be one of the most significant in the Silicon Valley story.

The two men who first introduced the personal computer (PC) to the marketplace had backgrounds unlike Robert Noyce's. They had neither prestigious university education nor experience in big business. Twenty-year-old Steven Jobs and twenty-four-year-old Stephen Wozniak were college drop-outs who had collaborated on their first project as computer hobbyists in a local computer club. Built in the garage of Jobs's parents, this first personal computer utilized the technology of Noyce's integrated circuit. It was typewriter-sized, as powerful as a much larger computer, and inexpensive to build. To Wozniak the new machine was a gadget to share with other members of their computer club. To Jobs, however, it was a product with great marketing potential for homes and small businesses. To raise the $1300 needed to fill their first orders Jobs sold his Volkswagen bus and Wozniak sold his scientific calculator. Wozniak built and delivered the first order of 100 computers in ten days. Lacking funds, he was forced to use the least expensive materials, the fewest chips, and the most creative arrangement of components. Jobs and Wozniak soon had more orders than they could fill with their makeshift production line.

Jobs and Wozniak brought different abilities to their venture: Wozniak was the technological wizard, and Jobs was the entrepreneur. Wozniak designed the first model, and Jobs devised its applications and attracted interest from investors and buyers. Wozniak once admitted that without Jobs he would never have considered selling the computer or known how to do it. "Steve didn't do one circuit, design or piece of code. He's not really been into computers, and to this day he has never gone through a computer manual. But it never crossed my mind to sell computers. It was Steve who said, 'Let's hold them up and sell a few.'"

From the very beginning, Apple Computer had been sensitive to the needs of a general public that is intimidated by high technology. Jobs insisted that the computers be light, trim, and made in muted colors. He also insisted that the language used with the computers be "user-friendly" and that the operation be simple enough for the average person to learn in a few minutes. These features helped convince a skeptical public that the computer was practical for the home and small business. Jobs also introduced the idea of donating Apple Computers to thousands of California schools, thereby indirectly introducing his product into the homes of millions of students. Their second model, the Apple II, was the state-of-the-art PC in home and small business computers from 1977 to 1982. By 1983 the total company sales were almost $600 million, and it controlled 23 percent of the worldwide market in personal computers.

As the computer industry began to reach into homes and small businesses around the world, the need for many new products for the personal computer began to emerge. Martin Alpert, the founder of Tecmar, Inc., was one of the first people to foresee this need. When IBM released its first personal computer in 1981, Alpert bought the first two models. He took them apart and worked twenty-four hours a day to find out how other products could be attached to them. After two weeks, he emerged with the first computer peripherals for the IBM PC, and he later became one of the most successful creators of personal computer peripherals. For example, he designed memory extenders that enabled the computer to store more information, and insertable boards that allowed people to use different keyboards while sharing the same printer. After 1981, Tecmar produced an average of one new product per week.

Alpert had neither the technical training of Noyce nor the computer clubs of Jobs and Wozniak to encourage his interest in computer engineering. His parents were German refugees who worked in a factory and a bakery to pay for his college education. They insisted that he study medicine even though his interest was in electronics. Throughout medical school he studied electronics passionately but privately. He became a doctor, but practiced only part time while pursuing his preferred interest in electronics. His first electronics products were medical instruments that he built in his living room. His wife recognized the potential of his projects before he did, and enrolled in a graduate program in business management so she could run his electronics business successfully. Their annual sales reached $1 million, and they had 15 engineers working in their living room before they moved to a larger building in 1981. It wasn't until 1983 that Alpert stopped practicing medicine and gave his full attention to Tecmar. By 1984 Tecmar was valued at $150 million.

Computer technology has opened a variety of opportunities for people who are creative risk-takers. Those who have been successful have been alert technologically, creatively, and financially. They have known when to use the help of other people and when to work alone. Whereas some have been immediately successful, others have gone unrewarded for their creative and financial investments; some failure is inevitable in an environment as competitive as Silicon Valley. Rarely in history have so many people been so motivated to create. Many of them have been rewarded greatly with fame and fortune, and the world has benefited from this frenzy of innovation.

Digital computers

There are two fundamentally different types of computers: analog and digital. The former type solves problems by using continuously changing data such as voltage. In current usage, the term "computer" usually refers to high-speed digital computers. These computers are playing an increasing role in all branches of the economy.

Digital computers are based on manipulating discrete binary digits (1s and 0s). They are generally more effective than analog computers for four principal reasons: they are faster; they are not so susceptible to signal interference; they can transfer huge databases more accurately; and their coded binary data are easier to store and retrieve than analog signals.

For all their apparent complexity, digital computers are considered to be simple machines. Digital computers are able to recognize only two states in each of their millions of switches: "on" or "off", or high voltage or low voltage. By assigning binary numbers to these states, 1 for "on" and 0 for "off", and linking many switches together, a computer can represent any type of data, from numbers to letters and musical notes. This process of recognizing signals is known as digitization. Each switch is called a binary digit, or bit. The real power of a computer depends on the speed with which it checks its switches: the more switches a computer checks in each cycle, the more data it can recognize at one time and the faster it can operate.
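The digitization described above can be sketched in a few lines of Python. This is a minimal illustration, not part of the original text, and it assumes the standard ASCII encoding to map each letter to a pattern of eight "on"/"off" switches (bits):

```python
def to_bits(text):
    """Represent each character as a string of eight binary digits (bits)."""
    return [format(ord(ch), "08b") for ch in text]

def from_bits(bits):
    """Recover the original text from the bit patterns."""
    return "".join(chr(int(b, 2)) for b in bits)

bits = to_bits("Hi")
print(bits)             # ['01001000', '01101001']
print(from_bits(bits))  # Hi
```

Each eight-bit group is one byte; linking such groups together lets the machine represent numbers, letters, and any other data as the passage describes.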

A digital computer is a complex system of four functionally different elements:

1) the central processing unit (CPU),

2) input devices,

3) memory-storage devices called disk drives,

4) output devices.

These physical parts and all their components are called hardware.

The power of computers depends greatly on the characteristics of memory-storage devices. Most digital computers store data both internally, in what is called main memory, and externally, on auxiliary storage units. As a computer processes data and instructions, it temporarily stores information internally on special memory microchips. Auxiliary storage units supplement the main memory when programmes are too large, and they also offer a more reliable method for storing data. There exist different kinds of auxiliary storage devices, removable magnetic disks being the most widely used. They can store up to 100 megabytes of data on one disk, a byte being the basic unit of data storage.
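The storage units mentioned above can be related by simple arithmetic. The short sketch below (an illustration added here, using the conventional binary definitions of the units) expresses the 100-megabyte disk capacity quoted in the passage in bytes:

```python
# A byte is the basic unit of data storage; larger units are built from it.
BYTE = 1
KILOBYTE = 1024 * BYTE
MEGABYTE = 1024 * KILOBYTE

disk_capacity = 100 * MEGABYTE
print(disk_capacity)  # 104857600 bytes on one removable disk
```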

Output devices let the user see the results of the computer's data processing. Being the most commonly used output device, the monitor accepts video signals from a computer and shows different kinds of information such as text, formulas and graphics on its screen. With the help of various printers information stored in one of the computer's memory systems can be easily printed on paper in a desired number of copies.

Programmes, also called software, are detailed sequences of instructions that direct the computer hardware to perform useful operations. Due to a computer's operating system hardware and software systems can work simultaneously. An operating system consists of a number of programmes coordinating operations, translating the data from different input and output devices, regulating data storage in memory, transferring tasks to different processors, and providing functions that help programmers to write software. In large corporations software is often written by groups of experienced programmers, each person focusing on a specific aspect of the total project. For this reason, scientific and industrial software sometimes costs much more than do the computers on which the programmes run.

Prehistory

Tools are any objects other than the parts of our own bodies that we use to help us do our work. Technology is nothing more than the use of tools. When you use a screwdriver, a hammer, or an axe, you are using technology just as much as when you use an automobile, a television set, or a computer.

We tend to think of technology as a human invention. But the reverse is closer to the truth. Stone tools found along with fossils show that our ape-like ancestors were already putting technology to use. Anthropologists speculate that using tools may have helped these creatures evolve into human beings; in a tool-using society, manual dexterity and intelligence count for more than brute strength. The clever rather than the strong inherited the earth.

Most of the tools we have invented have aided our bodies rather than our minds. These tools help us lift and move and cut and shape. Only quite recently, for the most part, have we developed tools to aid our minds as well.

The tools of communication, from pencil and paper to television, are designed to serve our minds. These devices transmit information or preserve it, but they do not modify it in any way. (If the information is modified, this is considered a defect rather than a virtue, as when a defective radio distorts the music we're trying to hear.)

Our interest lies with machines that classify and modify information rather than merely transmitting it or preserving it. The machines that do this are the computers and the calculators, the so-called mind tools. The widespread use of machines for information processing is a modern development. But simple examples of information-processing machines can be traced back to ancient times. The following are some of the more important forerunners of the computer.

The Abacus. The abacus is the counting frame that was the most widely used device for doing arithmetic in ancient times and whose use persisted into modern times in the Orient. Early versions of the abacus consisted of a board with grooves in which pebbles could slide. The Latin word for pebble is calculus, from which we get the word calculate.

Mechanical Calculators. In the seventeenth century, calculators more sophisticated than the abacus began to appear. Although a number of people contributed to their development, Blaise Pascal (French mathematician and philosopher) and Gottfried Wilhelm von Leibniz (German mathematician, philosopher, and diplomat) usually are singled out as pioneers. The calculators Pascal and Leibniz built were unreliable, since the mechanical technology of the time was not capable of manufacturing the parts with sufficient precision. As manufacturing techniques improved, mechanical calculators eventually were perfected; they were used widely until they were replaced by electronic calculators in recent times.

The Jacquard Loom. Until modern times, most information-processing machines were designed to do arithmetic. An outstanding exception, however, was Jacquard's automated loom, a machine designed not for hard figures but for beautiful patterns. A Jacquard loom weaves cloth containing a decorative pattern; the woven pattern is controlled by punched cards. Changing the punched cards changes the pattern the loom weaves. Jacquard looms came into widespread use in the early nineteenth century, and their descendants are still used today. The Jacquard loom is the ancestor not only of modern automated machine tools but of the player piano as well.

Questions:

1. What are tools?

2. What was the first tool?

3. What helped ape-like creatures evolve into human beings?

4. What is technology?

5. What tools of communication do you know?

6. What machines classify and modify information?

7. What do you know about Babbage, Pascal, Leibniz, and Jacquard?

The first hackers

The first "hackers" were students at the Massachusetts Institute of Technology (MIT) who belonged to the TMRC (Tech Model Railroad Club). Some of the members really built model trains. But many were more interested in the wires and circuits underneath the track platform. Spending hours at TMRC creating better circuitry was called "a mere hack." Those members who were interested in creating innovative, stylistic, and technically clever circuits called themselves (with pride) hackers.

During the spring of 1959, a new course was offered at MIT, a freshman programming class. Soon the hackers of the railroad club were spending days, hours, and nights hacking away at their computer, an IBM 704. Instead of creating a better circuit, their hack became creating faster, more efficient programs with the fewest possible lines of code. Eventually they formed a group and created the first set of hacker's rules, called the Hacker's Ethic.

Steven Levy, in his book Hackers, presented the rules:

Rule 1: Access to computers - and anything which might teach you something about the way the world works - should be unlimited and total.

Rule 2: All information should be free.

Rule 3: Mistrust authority - promote decentralization.

Rule 4: Hackers should be judged by their hacking, not bogus criteria such as degrees, race, or position.

Rule 5: You can create art and beauty on a computer.

Rule 6: Computers can change your life for the better.

These rules made programming at MIT's Artificial Intelligence Laboratory a challenging, all-encompassing endeavor. Just for the exhilaration of programming, students in the AI Lab would write a new program to perform even the smallest tasks. The program would be made available to others who would try to perform the same task with fewer instructions. The act of making the computer work more elegantly was, to a bona fide hacker, awe-inspiring.

Hackers were given free rein on the computer by two AI Lab professors, "Uncle" John McCarthy and Marvin Minsky, who realized that hacking created new insights. Over the years, the AI Lab created many innovations: LIFE, a game about survival; LISP, a new kind of programming language; the first computer chess game; The CAVE, the first computer adventure; and SPACEWAR, the first video game.

Computer crimes

More and more, the operations of our businesses, governments, and financial institutions are controlled by information that exists only inside computer memories. Anyone clever enough to modify this information for his own purposes can reap substantial rewards. Even worse, a number of people who have done this and been caught at it have managed to get away without punishment.

These facts have not been lost on criminals or would-be criminals. A recent Stanford Research Institute study of computer abuse was based on 160 case histories, which probably are just the proverbial tip of the iceberg. After all, we only know about the unsuccessful crimes. How many successful ones have gone undetected is anybody's guess.

Here are a few areas in which computer criminals have found the pickings all too easy.

Banking. All but the smallest banks now keep their accounts on computer files. Someone who knows how to change the numbers in the files can transfer funds at will. For instance, one programmer was caught having the computer transfer funds from other people's accounts to his wife's checking account. Often, traditionally trained auditors don't know enough about the workings of computers to catch what is taking place right under their noses.

Business. A company that uses computers extensively offers many opportunities to both dishonest employees and clever outsiders. For instance, a thief can have the computer ship the company's products to addresses of his own choosing. Or he can have it issue checks to him or his confederates for imaginary supplies or services. People have been caught doing both.

Credit Cards. There is a trend toward using cards similar to credit cards to gain access to funds through cash-dispensing terminals. Yet, in the past, organized crime has used stolen or counterfeit credit cards to finance its operations. Banks that offer after-hours or remote banking through cash-dispensing terminals may find themselves unwillingly subsidizing organized crime.

Theft of Information. Much personal information about individuals is now stored in computer files. An unauthorized person with access to this information could use it for blackmail. Also, confidential information about a company's products or operations can be stolen and sold to unscrupulous competitors. (One attempt at the latter came to light when the competitor turned out to be scrupulous and turned in the people who were trying to sell him stolen information.)

Software Theft. The software for a computer system is often more expensive than the hardware. Yet this expensive software is all too easy to copy. Crooked computer experts have devised a variety of tricks for getting these expensive programs printed out, punched on cards, recorded on tape, or otherwise delivered into their hands. This crime has even been perpetrated from remote terminals that access the computer over the telephone.

Theft of Time-Sharing Services. When the public is given access to a system, some members of the public often discover how to use the system in unauthorized ways. For example, there are the "phone freakers" who avoid long distance telephone charges by sending over their phones control signals that are identical to those used by the telephone company.

Since time-sharing systems often are accessible to anyone who dials the right telephone number, they are subject to the same kinds of manipulation.

Of course, most systems use account numbers and passwords to restrict access to authorized users. But unauthorized persons have proved to be adept at obtaining this information and using it for their own benefit. For instance, when a police computer system was demonstrated to a school class, a precocious student noted the access codes being used; later, all the student's teachers turned up on a list of wanted criminals.

Perfect Crimes. It's easy for computer crimes to go undetected if no one checks up on what the computer is doing. But even if the crime is detected, the criminal may walk away not only unpunished but with a glowing recommendation from his former employers.

Of course, we have no statistics on crimes that go undetected. But it's unsettling to note how many of the crimes we do know about were detected by accident, not by systematic audits or other security procedures. The computer criminals who have been caught may have been the victims of uncommonly bad luck.

For example, a certain keypunch operator complained of having to stay overtime to punch extra cards. Investigation revealed that the extra cards she was being asked to punch were for fraudulent transactions. In another case, disgruntled employees of the thief tipped off the company that was being robbed. An undercover narcotics agent stumbled on still another case. An employee was selling the company's merchandise on the side and using the computer to get it shipped to the buyers. While negotiating for LSD, the narcotics agent was offered a good deal on a stereo!

Unlike other embezzlers, who must leave the country, commit suicide, or go to jail, computer criminals sometimes brazen it out, demanding not only that they not be prosecuted but also that they be given good recommendations and perhaps other benefits, such as severance pay. All too often, their demands have been met.

Why? Because company executives are afraid of the bad publicity that would result if the public found out that their computer had been misused. They cringe at the thought of a criminal boasting in open court of how he juggled the most confidential records right under the noses of the company's executives, accountants, and security staff. And so another computer criminal departs with just the recommendations he needs to continue his exploits elsewhere.

PART IV

