Friday, 29 May 2009

Technology adoption lifecycle

The technology adoption lifecycle is a sociological model developed by Joe M. Bohlen, George M. Beal and Everett M. Rogers at Iowa State College,[1] building on earlier research conducted there by Neal C. Gross and Bryce Ryan.[2][3][4] Their original purpose was to track the purchase patterns of hybrid seed corn by farmers.

Beal, Rogers and Bohlen together developed a technology diffusion model[5] and later Everett Rogers generalized the use of it in his widely acclaimed book, Diffusion of Innovations[6] (now in its fifth edition), describing how new ideas and technologies spread in different cultures. Others have since used the model to describe how innovations spread between states in the U.S.[7]

Rogers' bell curve

The technology adoption lifecycle model describes the adoption or acceptance of a new product or innovation, according to the demographic and psychological characteristics of defined adopter groups. The process of adoption over time is typically illustrated as a classical normal distribution or "bell curve." The model indicates that the first group of people to use a new product is called "innovators," followed by "early adopters." Next come the early and late majority, and the last group to eventually adopt a product are called "laggards."

The demographic and psychological (or "psychographic") profiles of each adoption group were originally specified by the North Central Rural Sociology Committee, Subcommittee for the Study of the Diffusion of Farm Practices (as cited by Beal and Bohlen in their study above).

The report summarised the categories as:

  • innovators - had larger farms, were more educated, more prosperous and more risk-oriented
  • early adopters - younger, more educated, tended to be community leaders
  • early majority - more conservative but open to new ideas, active in the community and influential among neighbours
  • late majority - older, less educated, fairly conservative and less socially active
  • laggards - very conservative, had small farms and capital, oldest and least educated

Adaptations of the model

The model has spawned a range of adaptations that extend the concept or apply it to specific domains of interest.

In his book, Crossing the Chasm, Geoffrey Moore proposes a variation of the original lifecycle. He suggests that for discontinuous or disruptive innovations, there is a gap or chasm between the first two adopter groups (innovators/early adopters) and the early majority.

In educational technology, Lindy McKeown has provided a similar model (a pencil metaphor) describing ICT uptake in education.


Examples

One way to model product adoption[9] is to understand that people's behaviors are influenced by their peers and how widespread they think a particular action is. For many format-dependent technologies, people have a non-zero payoff for adopting the same technology as their closest friends or colleagues. If two users both adopt product A, they might get a payoff a > 0; if they adopt product B, they get b > 0. But if one adopts A and the other adopts B, they both get a payoff of 0.

We can set an adoption threshold for each user. Say that a node v in a graph has d neighbors: then v will adopt product A if the fraction p of its neighbors that have already adopted A is greater than or equal to its threshold. For example, if v's threshold is 2/3 and only one of its two neighbors adopts product A, then v will not adopt A. Using this model, we can deterministically model product adoption on sample networks.
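The following is a minimal Python sketch of this deterministic threshold process. It assumes the network is a small undirected graph stored as an adjacency list, and it derives each node's threshold from the coordination-game payoffs a and b described above (one natural choice consistent with those payoffs); the graph and payoff values are invented for illustration.

```python
# Minimal sketch of the deterministic threshold-adoption model described above.
# Assumptions (not from the source): an undirected graph stored as an adjacency
# list, everyone starts with product B, and a few seed nodes start with A.
# With payoffs a (both adopt A) and b (both adopt B), a node prefers A once the
# fraction p of A-adopting neighbours satisfies p*a >= (1-p)*b, i.e. p >= b/(a+b).

def adoption_cascade(graph, seeds, a=3.0, b=2.0):
    """Return the final set of A-adopters given seed adopters and payoffs a, b."""
    threshold = b / (a + b)          # fraction of neighbours needed to switch to A
    adopters = set(seeds)
    changed = True
    while changed:                   # iterate until no further node wants to switch
        changed = False
        for node, neighbours in graph.items():
            if node in adopters or not neighbours:
                continue
            frac = sum(n in adopters for n in neighbours) / len(neighbours)
            if frac >= threshold:
                adopters.add(node)
                changed = True
    return adopters

# Toy example: a path 1-2-3-4-5 seeded at node 1; the adoption cascades along the path.
graph = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(adoption_cascade(graph, seeds={1}))
```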

Technology lifecycle

Most new technologies follow a similar technology maturity lifecycle describing the technological maturity of a product. This is distinct from a product life cycle: it applies to an entire technology, or a generation of a technology, rather than to a single product.

Technology adoption is the most common phenomenon driving the evolution of industries along the industry lifecycle. Industries first expand by finding new uses for resources and end by exhausting the efficiency of those processes; the gains come easily and are large at first, then become progressively harder to achieve as the technology matures.

Technology perception dynamics

There is usually technology hype at the introduction of any new technology, but only after some time has passed can it be judged as mere hype or justified true acclaim. Because of the logistic curve nature of technology adoption, it is difficult to see in the early stages whether the hype is excessive.

The two errors commonly committed in the early stages of a technology's development are[citation needed]:

  • fitting an exponential curve to the first part of the growth curve, and assuming eternal exponential growth
  • fitting a linear curve to the first part of the growth curve, and assuming that takeup of the new technology is disappointing
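As an illustration of both errors, the sketch below (with invented numbers) fits an exponential and a straight line to the first few points of a logistic adoption curve; each fit looks plausible early on, but both extrapolations end up far from the true curve.

```python
# Sketch (illustrative numbers only): the early part of a logistic adoption curve is
# fit by both an exponential and a straight line, and both extrapolations go wrong.
import numpy as np

t = np.arange(0, 20.0)
logistic = 1.0 / (1.0 + np.exp(-(t - 10.0)))      # "true" adoption fraction over time

early_t, early_y = t[:6], logistic[:6]            # only the first few observations

# Error 1: fit an exponential (a straight line in log space) and assume it continues forever.
b1, b0 = np.polyfit(early_t, np.log(early_y), 1)
exp_forecast = np.exp(b0 + b1 * t)

# Error 2: fit a straight line and conclude take-up is disappointingly slow.
m, c = np.polyfit(early_t, early_y, 1)
lin_forecast = m * t + c

for name, f in [("exponential", exp_forecast), ("linear", lin_forecast)]:
    print(f"{name:11s} forecast at t=19: {f[-1]:8.2f}  (actual: {logistic[-1]:.2f})")
```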

Similarly, in the later stages, the opposite mistakes can be made relating to the possibilities of technology maturity and market saturation.

Technology adoption typically occurs in an S curve, as modelled in diffusion of innovations theory. This is because customers respond to new products in different ways. Diffusion of innovations theory, pioneered by Everett Rogers, posits that people have different levels of readiness for adopting new innovations and that the characteristics of a product affect overall adoption. Rogers classified individuals into five groups: innovators, early adopters, early majority, late majority, and laggards. In terms of the S curve, innovators occupy 2.5%, early adopters 13.5%, early majority 34%, late majority 34%, and laggards 16%.
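These percentages are simply the areas under the standard normal ("bell") curve between cut points at two standard deviations below the mean, one below, the mean itself, and one above. The short sketch below reproduces them; Rogers' published figures are rounded versions of these areas.

```python
# Sketch: the adopter-category percentages quoted above come from slicing the
# normal (bell) curve at -2, -1, 0 and +1 standard deviations from the mean.
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

cuts = [float("-inf"), -2.0, -1.0, 0.0, 1.0, float("inf")]
labels = ["innovators", "early adopters", "early majority", "late majority", "laggards"]

for label, lo, hi in zip(labels, cuts, cuts[1:]):
    share = norm_cdf(hi) - norm_cdf(lo)
    print(f"{label:15s} {share:5.1%}")   # ~2.3%, 13.6%, 34.1%, 34.1%, 15.9%
```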

Stages

From a layman's perspective, the technological maturity can be broken down into five distinct stages.

  1. Bleeding edge - any technology that shows high potential but hasn't demonstrated its value or settled down into any kind of consensus. Early adopters may win big, or may be stuck with a white elephant.
  2. Leading edge - a technology that has proven itself in the marketplace but is still new enough that it may be difficult to find knowledgeable personnel to implement or support it.
  3. State of the art - when everyone agrees that a particular technology is the right solution.
  4. Dated - still useful, still sometimes implemented, but a replacement leading edge technology is readily available.
  5. Obsolete - has been superseded by state-of-the-art technology, maintained but no longer implemented.

Technology acceptance model

The Technology Acceptance Model (TAM) is an information systems theory that models how users come to accept and use a technology. The model suggests that when users are presented with a new technology, a number of factors influence their decision about how and when they will use it, notably:

  • Perceived usefulness (PU) - This was defined by Fred Davis as "the degree to which a person believes that using a particular system would enhance his or her job performance".
  • Perceived ease-of-use (PEOU) - Davis defined this as "the degree to which a person believes that using a particular system would be free from effort" (Davis, 1989).
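As a loose, purely illustrative sketch (TAM itself is estimated from survey data with regression or structural equation modelling, not hard-coded weights), the snippet below combines hypothetical 7-point Likert responses for PU and PEOU into a single behavioural-intention score. The item values and weights are made up and are not estimates from Davis's studies.

```python
# Illustrative sketch only: hypothetical Likert-scale items (1-7) for PU and PEOU
# are averaged into construct scores and combined with made-up weights into a
# behavioural-intention score. Nothing here is taken from the TAM literature.
def construct_score(items):
    """Average a list of 1-7 Likert responses for one TAM construct."""
    return sum(items) / len(items)

def behavioural_intention(pu_items, peou_items, w_pu=0.6, w_peou=0.4):
    # Weights are placeholders, not empirical estimates.
    pu = construct_score(pu_items)
    peou = construct_score(peou_items)
    return w_pu * pu + w_peou * peou

print(behavioural_intention(pu_items=[6, 7, 6, 5], peou_items=[4, 5, 4, 5]))
```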

History

TAM is one of the most influential extensions of Ajzen and Fishbein's theory of reasoned action (TRA) in the literature. It was developed by Fred Davis and Richard Bagozzi (Bagozzi et al., 1992; Davis et al., 1989). TAM replaces many of TRA's attitude measures with two technology acceptance measures: ease of use and usefulness. TRA and TAM, both of which have strong behavioural elements, assume that when someone forms an intention to act, they will be free to act without limitation. In the real world there are many constraints that limit the freedom to act (Bagozzi et al., 1992).

Bagozzi, Davis and Warshaw say:

Because new technologies such as personal computers are complex and an element of uncertainty exists in the minds of decision makers with respect to the successful adoption of them, people form attitudes and intentions toward trying to learn to use the new technology prior to initiating efforts directed at using. Attitudes towards usage and intentions to use may be ill-formed or lacking in conviction or else may occur only after preliminary strivings to learn to use the technology evolve. Thus, actual usage may not be a direct or immediate consequence of such attitudes and intentions. (Bagozzi et al., 1992)

Earlier research on the diffusion of innovations also suggested a prominent role for perceived ease of use. Tornatzky and Klein (1982) analysed innovation adoption, finding that compatibility, relative advantage, and complexity had the most significant relationships with adoption across a broad range of innovation types. Eason studied perceived usefulness in terms of a fit between systems, tasks and job profiles, using the term "task fit" to describe the metric (quoted in Stewart, 1986).

Usage

Several researchers have replicated Davis's original study (Davis, 1989) to provide empirical evidence on the relationships that exist between usefulness, ease of use and system use (Adams, Nelson & Todd, 1992; Davis et al., 1989; Hendrickson, Massey & Cronan, 1993; Segars & Grover, 1993; Subramanian, 1994; Szajna, 1994). Much attention has focused on testing the robustness and validity of the questionnaire instrument used by Davis. Adams et al. (1992) replicated the work of Davis (1989) to demonstrate the validity and reliability of his instrument and his measurement scales. They also extended it to different settings and, using two different samples, demonstrated the internal consistency and replication reliability of the two scales. Hendrickson et al. (1993) found high reliability and good test-retest reliability. Szajna (1994) found that the instrument had predictive validity for intent to use, self-reported usage and attitude toward use. The sum of this research has confirmed the validity of the Davis instrument and supported its use with different populations of users and different software choices.
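The internal-consistency results cited above are typically reported as Cronbach's alpha. The sketch below computes it for an invented response matrix; the data are hypothetical and not taken from any of the cited studies.

```python
# Sketch: internal-consistency checks like those cited above are commonly reported
# as Cronbach's alpha. The response matrix below is invented for illustration.
import numpy as np

def cronbach_alpha(responses):
    """responses: 2-D array, rows = respondents, columns = questionnaire items."""
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]                           # number of items in the scale
    item_vars = responses.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = responses.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Hypothetical 5 respondents x 4 "perceived usefulness" items on a 7-point scale.
data = [[6, 7, 6, 6],
        [5, 5, 6, 5],
        [7, 7, 7, 6],
        [3, 4, 3, 4],
        [6, 6, 5, 6]]
print(f"Cronbach's alpha = {cronbach_alpha(data):.2f}")
```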

Segars and Grover (1993) re-examined Adams et al.’s (1992) replication of the Davis work. They were critical of the measurement model used, and postulated a different model based on three constructs: usefulness, effectiveness, and ease-of-use. These findings do not yet seem to have been replicated.

Mark Keil and his colleagues have developed (or, perhaps rendered more popularisable) Davis’s model into what they call the Usefulness/EOU Grid, which is a 2×2 grid where each quadrant represents a different combination of the two attributes. In the context of software use, this provides a mechanism for discussing the current mix of usefulness and EOU for particular software packages, and for plotting a different course if a different mix is desired, such as the introduction of even more powerful software (Keil, Beranek & Konsynski, 1995).

Criticisms of TAM as a "theory" include its lack of falsifiability, questionable heuristic value, limited explanatory and predictive power, triviality, and lack of any practical value.[citation needed]

Venkatesh and Davis extended the original TAM model to explain perceived usefulness and usage intentions in terms of social influence and cognitive instrumental processes. The extended model, referred to as TAM2, was tested in both voluntary and mandatory settings. The results strongly supported TAM2 (Venkatesh and Davis, 2000).

In an attempt to integrate the main competing user acceptance models, Venkatesh et al. formulated the Unified Theory of Acceptance and Use of Technology (UTAUT). This model was found to outperform each of the individual models (Adjusted R square of 69 percent) (Venkatesh et al., 2003).

For a recent analysis and critique of TAM see Bagozzi (2007).

Independent of TAM, Scherer developed the Matching Person & Technology Model in 1986 as part of her National Science Foundation-funded dissertation research. The MPT Model is fully described in her 1993 text, "Living in the State of Stuck," now in its 4th edition. The MPT Model has accompanying assessment measures used in technology selection and decision-making, as well as outcomes research on differences among technology users, non-users, avoiders, and partial/reluctant users.


Technology (album)

Technology is the first album by the melodic death metal band Crimson Death. It was recorded in 2001 but, due to financial problems at the record label, was not released until 2004, by Mythic Metal Productions.
Technology (studio album by Crimson Death)
  • Released: 2004
  • Recorded: February 2001
  • Genre: Melodic death metal
  • Length: 57:41
  • Label: Mythic Metal Productions
  • Producer: John Capcha & Crimson Death
  • Crimson Death chronology: Promo EP (2000), Technology (2004), Death Is Essential (2006)

Tracklisting

  1. The end of the novel - 04:49
  2. Convicts to the extinction - 04:24
  3. Everhate - 04:40
  4. My last word - 04:59
  5. The tower - 05:52
  6. Technology - 05:13
  7. Messengers of sadness - 05:32
  8. Vanity paradise - 04:57
  9. Song of the black river - 06:27
  10. Everhate [Promo Version] - 04:42
  11. Song of the black river [Promo Version] - 06:12

Musicians


Production information

  • Recorded at Session Studio in February 2001
  • Produced by John Capcha & Crimson Death
  • Executive producer Martin Espíritu R.
  • Engineered by John Capcha
  • Art concept by Carlos Delgado
  • Artwork by Carlos Delgado and Edgar Rodríguez
  • Crimson Death icon by Jose Melo
  • Photography by Herbert Añari

Technologic systems

Technologic Systems is an American company producing single-board computers for embedded systems. They come with either x86 or ARM9 processors. The company states that the major advantage of its boards is a very short boot time (less than one second). Some boards also have sleep modes with very low power consumption (200 μA). These embedded computers run the Linux 2.6 kernel with a full Debian Linux distribution. Eclipse is available for some boards for developing embedded applications in C, C++ or Java.

Most of the boards have USB and RJ45 Ethernet connectors, SD and other card slots, and some also have SATA ports.

The company also offers boards, enclosures and various ready to use "application kits", including configurations with Wireless LAN capabilities.

  • Type: Private
  • Founded: 1988
  • Headquarters: Fountain Hills, Arizona
  • Industry: Computer systems
  • Products: Single-board computers
  • Website: www.embeddedarm.com

Biotechnology

Biotechnology is technology based on biology, especially when used in agriculture, food science, and medicine. The United Nations Convention on Biological Diversity defines biotechnology as:

Any technological application that uses biological systems, living organisms, or derivatives thereof, to make or modify products or processes for specific use.

Biotechnology is often used to refer to the genetic engineering technology of the 21st century, but the term encompasses a wider range and history of procedures for modifying biological organisms according to the needs of humanity, going back to the initial modifications of native plants into improved food crops through artificial selection and hybridization. Bioengineering is the science upon which all biotechnological applications are based. With the development of new approaches and modern techniques, traditional biotechnology industries are also acquiring new horizons, enabling them to improve the quality of their products and increase the productivity of their systems.

Insulin crystals.

Before 1971, the term biotechnology was primarily used in the food processing and agriculture industries. Since the 1970s, it has been used by the Western scientific establishment to refer to laboratory-based techniques developed in biological research, such as recombinant DNA or tissue culture-based processes, or horizontal gene transfer in living plants using vectors such as the Agrobacterium bacteria to transfer DNA into a host organism. In fact, the term should be used in a much broader sense to describe the whole range of methods, both ancient and modern, used to manipulate organic materials to meet the demands of food production. So the term could be defined as "the application of indigenous and/or scientific knowledge to the management of (parts of) microorganisms, or of cells and tissues of higher organisms, so that these supply goods and services of use to the food industry and its consumers."[2]

Biotechnology combines disciplines like genetics, molecular biology, biochemistry, embryology, and cell biology, which are in turn linked to practical disciplines like chemical engineering, information technology, and biorobotics. Patho-biotechnology describes the exploitation of pathogens or pathogen derived compounds for beneficial effect.

History

Brewing was an early application of biotechnology

Although not normally thought of as biotechnology, agriculture clearly fits the broad definition of "using a biological system to make products", such that the cultivation of plants may be viewed as the earliest biotechnological enterprise. Agriculture has been theorized to have become the dominant way of producing food since the Neolithic Revolution. The processes and methods of agriculture have been refined by other mechanical and biological sciences since its inception. Through early biotechnology, farmers were able to select the best-suited and highest-yield crops to produce enough food to support a growing population. Other uses of biotechnology were required as crops and fields became increasingly large and difficult to maintain. Specific organisms and organism by-products were used to fertilize, restore nitrogen, and control pests. Throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops by introducing them to new environments and breeding them with other plants, one of the first forms of biotechnology. Cultures such as those in Mesopotamia, Egypt, and India developed the process of brewing beer, which is still done by the same basic method: using malted grains (containing enzymes) to convert starch from grain into sugar and then adding specific yeasts to produce beer. In this process the carbohydrates in the grains are broken down into alcohols such as ethanol. Ancient Indians also used the juice of the plant Ephedra vulgaris, which they called Soma. Later, other cultures developed the process of lactic acid fermentation, which allowed the fermentation and preservation of other forms of food. Fermentation was also used in this period to produce leavened bread. Although the process of fermentation was not fully understood until Louis Pasteur's work in 1857, it is still an early example of using biotechnology to convert one food source into another form.

Combinations of plants and other organisms were used as medications in many early civilizations. Since as early as 200 BC, people began to use disabled or minute amounts of infectious agents to immunize themselves against infections. These and similar processes have been refined in modern medicine and have led to many developments such as antibiotics, vaccines, and other methods of fighting sickness.

In the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. In 1917, Chaim Weizmann first used a pure microbiological culture in an industrial process: fermenting corn starch with Clostridium acetobutylicum to produce acetone, which the United Kingdom desperately needed to manufacture explosives during World War I.

The field of modern biotechnology is thought to have largely begun on June 16, 1980, when the United States Supreme Court ruled in Diamond v. Chakrabarty that a genetically modified microorganism could be patented. Indian-born Ananda Chakrabarty, working for General Electric, had developed a bacterium (derived from the Pseudomonas genus) capable of breaking down crude oil, which he proposed to use in treating oil spills.

Revenue in the industry is expected to grow by 12.9% in 2008. Another factor influencing the biotechnology sector's success is improved intellectual property rights legislation—and enforcement—worldwide, as well as strengthened demand for medical and pharmaceutical products to cope with an ageing, and ailing, U.S. population.

Rising demand for biofuels is expected to be good news for the biotechnology sector, with the Department of Energy estimating ethanol usage could reduce U.S. petroleum-derived fuel consumption by up to 30% by 2030. The biotechnology sector has allowed the U.S. farming industry to rapidly increase its supply of corn and soybeans—the main inputs into biofuels—by developing genetically-modified seeds which are resistant to pests and drought. By boosting farm productivity, biotechnology plays a crucial role in ensuring that biofuel production targets are met.


Applications

A rose plant that began as cells grown in a tissue culture

Biotechnology has applications in four major industrial areas, including health care (medical), crop production and agriculture, non food (industrial) uses of crops and other products (e.g. biodegradable plastics, vegetable oil, biofuels), and environmental uses.

For example, one application of biotechnology is the directed use of organisms for the manufacture of organic products (examples include beer and milk products). Another example is the mining industry's use of naturally occurring bacteria in bioleaching. Biotechnology is also used to recycle, treat waste, and clean up sites contaminated by industrial activities (bioremediation), and to produce biological weapons.

A series of derived terms have been coined to identify several branches of biotechnology, for example:

  • Bioinformatics is an interdisciplinary field which addresses biological problems using computational techniques, and makes the rapid organization and analysis of biological data possible. The field may also be referred to as computational biology, and can be defined as, "conceptualizing biology in terms of molecules and then applying informatics techniques to understand and organize the information associated with these molecules, on a large scale."[7] Bioinformatics plays a key role in various areas, such as functional genomics, structural genomics, and proteomics, and forms a key component in the biotechnology and pharmaceutical sector.
  • Blue biotechnology is a term that has been used to describe the marine and aquatic applications of biotechnology, but its use is relatively rare.
  • Green biotechnology is biotechnology applied to agricultural processes. An example would be the selection and domestication of plants via micropropagation. Another example is the designing of transgenic plants to grow under specific environmental conditions or in the presence (or absence) of certain agricultural chemicals. One hope is that green biotechnology might produce more environmentally friendly solutions than traditional industrial agriculture. An example of this is the engineering of a plant to express a pesticide, thereby eliminating the need for external application of pesticides. An example of this would be Bt corn. Whether or not green biotechnology products such as this are ultimately more environmentally friendly is a topic of considerable debate.
  • Red biotechnology is applied to medical processes. Some examples are the designing of organisms to produce antibiotics, and the engineering of genetic cures through genomic manipulation.
  • White biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. An example is the designing of an organism to produce a useful chemical. Another example is the use of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous/polluting chemicals. White biotechnology tends to consume fewer resources than traditional processes used to produce industrial goods.
  • The investments and economic output of all of these types of applied biotechnologies form what has been described as the bioeconomy.

Medicine

In medicine, modern biotechnology finds promising applications in areas such as pharmacogenomics, pharmaceutical products, genetic testing, and gene therapy, discussed below.


Pharmacogenomics

DNA Microarray chip -- Some can do as many as a million blood tests at once

Pharmacogenomics is the study of how the genetic inheritance of an individual affects his/her body’s response to drugs. It is a coined word derived from the words “pharmacology” and “genomics”. It is hence the study of the relationship between pharmaceuticals and genetics. The vision of pharmacogenomics is to be able to design and produce drugs that are adapted to each person’s genetic makeup.

Pharmacogenomics offers the following benefits:

  1. Development of tailor-made medicines. Using pharmacogenomics, pharmaceutical companies can create drugs based on the proteins, enzymes and RNA molecules that are associated with specific genes and diseases. These tailor-made drugs promise not only to maximize therapeutic effects but also to decrease damage to nearby healthy cells.
  2. More accurate methods of determining appropriate drug dosages. Knowing a patient’s genetics will enable doctors to determine how well his/ her body can process and metabolize a medicine. This will maximize the value of the medicine and decrease the likelihood of overdose.
  3. Improvements in the drug discovery and approval process. The discovery of potential therapies will be made easier using genome targets. Genes have been associated with numerous diseases and disorders. With modern biotechnology, these genes can be used as targets for the development of effective new therapies, which could significantly shorten the drug discovery process.
  4. Better vaccines. Safer vaccines can be designed and produced by organisms transformed by means of genetic engineering. These vaccines will elicit the immune response without the attendant risks of infection. They will be inexpensive, stable, easy to store, and capable of being engineered to carry several strains of pathogen at once.

Pharmaceutical products

Computer-generated image of insulin hexamers highlighting the threefold symmetry, the zinc ions holding it together, and the histidine residues involved in zinc binding.

Most traditional pharmaceutical drugs are relatively simple molecules that have been found primarily through trial and error to treat the symptoms of a disease or illness. Biopharmaceuticals, by contrast, are large biological molecules (proteins), and they usually target the underlying mechanisms and pathways of a malady (but not always, as with the use of insulin to treat type 1 diabetes mellitus, a treatment that merely addresses the symptoms of the disease rather than its underlying cause, autoimmunity); the biopharmaceutical industry is relatively young. Biopharmaceuticals can reach targets in humans that may not be accessible with traditional medicines. A patient is typically dosed with a small molecule via a tablet, while a large molecule is typically injected.

Small molecules are manufactured by chemistry but larger molecules are created by living cells such as those found in the human body: for example, bacteria cells, yeast cells, animal or plant cells.

Modern biotechnology is often associated with the use of genetically altered microorganisms such as E. coli or yeast for the production of substances like synthetic insulin or antibiotics. It can also refer to transgenic animals or transgenic plants, such as Bt corn. Genetically altered mammalian cells, such as Chinese Hamster Ovary (CHO) cells, are also used to manufacture certain pharmaceuticals. Another promising new biotechnology application is the development of plant-made pharmaceuticals.

Biotechnology is also commonly associated with landmark breakthroughs in new medical therapies to treat hepatitis B, hepatitis C, cancers, arthritis, haemophilia, bone fractures, multiple sclerosis, and cardiovascular disorders. The biotechnology industry has also been instrumental in developing molecular diagnostic devices that can be used to define the target patient population for a given biopharmaceutical. Herceptin, for example, was the first drug approved for use with a matching diagnostic test and is used to treat breast cancer in women whose cancer cells express the protein HER2.

Modern biotechnology can be used to manufacture existing medicines relatively easily and cheaply. The first genetically engineered products were medicines designed to treat human diseases. To cite one example, in 1978 Genentech developed synthetic humanized insulin by joining its gene with a plasmid vector inserted into the bacterium Escherichia coli. Insulin, widely used for the treatment of diabetes, was previously extracted from the pancreases of abattoir animals (cattle and/or pigs). The resulting genetically engineered bacterium enabled the production of vast quantities of synthetic human insulin at relatively low cost, although the cost savings were used to increase profits for manufacturers rather than being passed on to consumers or their healthcare providers. According to a 2003 study undertaken by the International Diabetes Federation (IDF) on the access to and availability of insulin in its member countries, synthetic 'human' insulin is considerably more expensive in most countries where both synthetic 'human' and animal insulin are commercially available: e.g. within European countries the average price of synthetic 'human' insulin was twice as high as the price of pork insulin. Yet in its position statement, the IDF writes that "there is no overwhelming evidence to prefer one species of insulin over another" and that "[modern, highly-purified] animal insulins remain a perfectly acceptable alternative."

Modern biotechnology has evolved, making it possible to produce human growth hormone, clotting factors for hemophiliacs, fertility drugs, erythropoietin and other drugs more easily and relatively cheaply. Most drugs today are based on about 500 molecular targets. Genomic knowledge of the genes involved in diseases, disease pathways, and drug-response sites is expected to lead to the discovery of thousands more new targets.

Genetic testing

Genetic testing involves the direct examination of the DNA molecule itself. A scientist scans a patient’s DNA sample for mutated sequences.

There are two major types of gene tests. In the first type, a researcher designs short pieces of DNA ("probes") whose sequences are complementary to the mutated sequences. These probes will seek their complement among the base pairs of an individual's genome. If the mutated sequence is present in the patient's genome, the probe will bind to it and flag the mutation. In the second type, a researcher conducts the gene test by comparing the sequence of DNA bases in a patient's gene to the normal version of the gene found in healthy individuals.
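The first type of test can be pictured with a short sketch: a probe is built as the reverse complement of a known mutated sequence, and finding the probe's target within the patient's DNA flags the mutation. The sequences below are hypothetical and chosen only for illustration.

```python
# Sketch of the first kind of gene test described above: a probe is designed as the
# reverse complement of a known mutated sequence; if the mutated sequence occurs in
# the patient's DNA, the probe would hybridise there and flag the mutation.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def probe_for(sequence):
    """Return the reverse complement of a DNA sequence (the probe that binds it)."""
    return sequence.translate(COMPLEMENT)[::-1]

def flags_mutation(patient_dna, mutated_sequence):
    """True if the probe's binding target (the mutated sequence) occurs in the DNA."""
    probe = probe_for(mutated_sequence)   # what would actually be synthesised
    target = probe_for(probe)             # the sequence the probe hybridises to
    return target in patient_dna

patient_dna = "ATGGTTCACGTGGATGAAGTTGGT"   # hypothetical fragment of a patient's gene
mutation = "CACGTGGAT"                     # hypothetical mutated stretch
print(flags_mutation(patient_dna, mutation))   # True: the probe would bind and flag it
```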

Genetic testing is now used for:

  • Carrier screening, or the identification of unaffected individuals who carry one copy of a gene for a disease that requires two copies for the disease to manifest;
  • Confirmational diagnosis of symptomatic individuals;
  • Determining sex;
  • Forensic/identity testing;
  • Newborn screening;
  • Prenatal diagnostic screening;
  • Presymptomatic testing for estimating the risk of developing adult-onset cancers;
  • Presymptomatic testing for predicting adult-onset disorders.

Some genetic tests are already available, although most of them are used in developed countries. The tests currently available can detect mutations associated with rare genetic disorders like cystic fibrosis, sickle cell anemia, and Huntington’s disease. Recently, tests have been developed to detect mutation for a handful of more complex conditions such as breast, ovarian, and colon cancers. However, gene tests may not detect every mutation associated with a particular condition because many are as yet undiscovered, and the ones they do detect may present different risks to different people and populations.

Controversial questions
The bacterium Escherichia coli is routinely genetically engineered.

Several issues have been raised regarding the use of genetic testing:

  1. Absence of cure. There is still a lack of effective treatment or preventive measures for many diseases and conditions now being diagnosed or predicted using gene tests. Thus, revealing information about risk of a future disease that has no existing cure presents an ethical dilemma for medical practitioners.
  2. Ownership and control of genetic information. Who will own and control genetic information, or information about genes, gene products, or inherited characteristics derived from an individual or a group of people like indigenous communities? At the macro level, there is a possibility of a genetic divide, with developing countries that do not have access to medical applications of biotechnology being deprived of benefits accruing from products derived from genes obtained from their own people. Moreover, genetic information can pose a risk for minority population groups as it can lead to group stigmatization.

At the individual level, the absence of privacy and anti-discrimination legal protections in most countries can lead to discrimination in employment or insurance or other misuse of personal genetic information. This raises questions such as whether genetic privacy is different from medical privacy.[13]

  3. Reproductive issues. These include the use of genetic information in reproductive decision-making and the possibility of genetically altering reproductive cells that may be passed on to future generations. For example, germline therapy forever changes the genetic make-up of an individual's descendants. Thus, any error in technology or judgment may have far-reaching consequences. Ethical issues like designer babies and human cloning have also given rise to controversies between and among scientists and bioethicists, especially in the light of past abuses with eugenics.
  4. Clinical issues. These center on the capabilities and limitations of doctors and other health-service providers, people identified with genetic conditions, and the general public in dealing with genetic information.
  5. Effects on social institutions. Genetic tests reveal information about individuals and their families. Thus, test results can affect the dynamics within social institutions, particularly the family.
  6. Conceptual and philosophical implications regarding human responsibility, free will vis-à-vis genetic determinism, and the concepts of health and disease.

Gene therapy

Gene therapy using an Adenovirus vector. A new gene is inserted into an adenovirus vector, which is used to introduce the modified DNA into a human cell. If the treatment is successful, the new gene will make a functional protein.

Gene therapy may be used for treating, or even curing, genetic and acquired diseases like cancer and AIDS by using normal genes to supplement or replace defective genes or to bolster a normal function such as immunity. It can be used to target somatic (i.e., body) cells or gametes (i.e., egg and sperm cells). In somatic gene therapy, the genome of the recipient is changed, but this change is not passed along to the next generation. In contrast, in germline gene therapy, the egg and sperm cells of the parents are changed for the purpose of passing on the changes to their offspring.

There are basically two ways of implementing a gene therapy treatment:

  1. Ex vivo, which means “outside the body” – Cells from the patient’s blood or bone marrow are removed and grown in the laboratory. They are then exposed to a virus carrying the desired gene. The virus enters the cells, and the desired gene becomes part of the DNA of the cells. The cells are allowed to grow in the laboratory before being returned to the patient by injection into a vein.
  2. In vivo, which means “inside the body” – No cells are removed from the patient’s body. Instead, vectors are used to deliver the desired gene to cells in the patient’s body.

Currently, the use of gene therapy is limited. Somatic gene therapy is primarily at the experimental stage. Germline therapy is the subject of much discussion but it is not being actively investigated in larger animals and human beings.

As of June 2001, more than 500 clinical gene-therapy trials involving about 3,500 patients had been identified worldwide. Around 78% of these were in the United States, with Europe having 18%. These trials focus on various types of cancer, although other multigenic diseases are being studied as well. Recently, two children born with severe combined immunodeficiency disorder ("SCID") were reported to have been cured after being given genetically engineered cells.

Gene therapy faces many obstacles before it can become a practical approach for treating disease.[14] At least four of these obstacles are as follows:

  1. Gene delivery tools. Genes are inserted into the body using gene carriers called vectors. The most common vectors now are viruses, which have evolved a way of encapsulating and delivering their genes to human cells in a pathogenic manner. Scientists manipulate the genome of the virus by removing the disease-causing genes and inserting the therapeutic genes. However, while viruses are effective, they can introduce problems like toxicity, immune and inflammatory responses, and gene control and targeting issues. In addition, in order for gene therapy to provide permanent therapeutic effects, the introduced gene needs to be integrated within the host cell's genome. Some viral vectors effect this in a random fashion, which can introduce other problems such as disruption of an endogenous host gene.
  2. High costs. Since gene therapy is relatively new and at an experimental stage, it is an expensive treatment to undertake. This explains why current studies are focused on illnesses commonly found in developed countries, where more people can afford to pay for treatment. It may take decades before developing countries can take advantage of this technology.
  3. Limited knowledge of the functions of genes. Scientists currently know the functions of only a few genes. Hence, gene therapy can address only some genes that cause a particular disease. Worse, it is not known exactly whether genes have more than one function, which creates uncertainty as to whether replacing such genes is indeed desirable.
  4. Multigene disorders and effect of environment. Most genetic disorders involve more than one gene. Moreover, most diseases involve the interaction of several genes and the environment. For example, many people with cancer not only inherit the disease gene for the disorder, but may have also failed to inherit specific tumor suppressor genes. Diet, exercise, smoking and other environmental factors may have also contributed to their disease.

Human Genome Project

DNA Replication image from the Human Genome Project (HGP)

The Human Genome Project is an initiative of the U.S. Department of Energy (“DOE”) that aims to generate a high-quality reference sequence for the entire human genome and identify all the human genes.

The DOE and its predecessor agencies were assigned by the U.S. Congress to develop new energy resources and technologies and to pursue a deeper understanding of potential health and environmental risks posed by their production and use. In 1986, the DOE announced its Human Genome Initiative. Shortly thereafter, the DOE and National Institutes of Health developed a plan for a joint Human Genome Project (“HGP”), which officially began in 1990.

The HGP was originally planned to last 15 years. However, rapid technological advances and worldwide participation accelerated the completion date to 2003 (making it a 13 year project). Already it has enabled gene hunters to pinpoint genes associated with more than 30 disorders.




Thin film transistor liquid crystal display

A thin film transistor liquid crystal display (TFT-LCD) is a variant of liquid crystal display (LCD) which uses thin film transistor (TFT) technology to improve image quality (e.g. addressability, contrast). TFT LCD is one type of active matrix LCD, and nearly all large color LCD screens now use TFT active matrix addressing. TFT LCDs are used in television sets, computer monitors, mobile phones and computers, handheld video game systems, personal digital assistants, navigation systems, projectors, etc.
A flat panel computer display

Construction

A diagram of the pixel layout

Small liquid crystal displays as used in calculators and other devices have direct driven image elements—a voltage can be applied across one segment without interfering with other segments of the display. This is impractical for a large display with a large number of picture elements (pixels), since it would require millions of connections—top and bottom connections for each one of the three colors (red, green and blue) of every pixel. To avoid this issue, the pixels are addressed in rows and columns which reduce the connection count from millions to thousands. If all the pixels in one row are driven with a positive voltage and all the pixels in one column are driven with a negative voltage, then the pixel at the intersection has the largest applied voltage and is switched. The problem with this solution is that all the pixels in the same column see a fraction of the applied voltage as do all the pixels in the same row, so although they are not switched completely, they do tend to darken. The solution to the problem is to supply each pixel with its own transistor switch which allows each pixel to be individually controlled. The low leakage current of the transistor prevents the voltage applied to the pixel from leaking away between refreshes to the display image. Each pixel is a small capacitor with a layer of insulating liquid crystal sandwiched between transparent conductive ITO layers.
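The arithmetic behind "from millions to thousands" is easy to check. The sketch below uses an example XGA-class panel with RGB sub-pixels; the resolution figures are an assumption for illustration, not taken from the text.

```python
# Back-of-the-envelope arithmetic for the paragraph above: direct drive needs two
# connections per sub-pixel, while row/column matrix addressing needs only one line
# per row and one per sub-pixel column. Resolution figures are just an example.
rows, cols, subpixels = 768, 1024, 3        # e.g. an XGA panel with RGB sub-pixels

direct_drive = rows * cols * subpixels * 2  # top and bottom contact for every sub-pixel
matrix_drive = rows + cols * subpixels      # one line per row, one per sub-pixel column

print(f"direct drive: {direct_drive:,} connections")   # roughly 4.7 million
print(f"matrix drive: {matrix_drive:,} connections")   # roughly 3.8 thousand
```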

The circuit layout of a TFT-LCD is very similar to that of a DRAM memory. However, rather than fabricating the transistors from silicon formed into a crystalline wafer, they are made from a thin film of silicon deposited on a glass panel. Transistors take up only a small fraction of the area of each pixel; the rest of the silicon film is etched away to allow light to pass through.

The silicon layer for TFT-LCDs is typically deposited using the PECVD process from a silane gas precursor to produce an amorphous silicon film. Polycrystalline silicon (frequently LTPS, low-temperature poly-Si) is sometimes used in displays requiring higher TFT performance. Examples include high-resolution displays, high-frequency displays or displays where performing some data processing on the display itself is desirable. Amorphous silicon-based TFTs have the lowest performance, polycrystalline silicon TFTs have higher performance (notably mobility), and single-crystal silicon transistors are the best performers.

IPS (in-plane switching) was developed by Hitachi in 1996 to improve on the poor viewing angles and color reproduction of TN panels. Color reproduction approaches that of CRTs; the dynamic range was initially lower, though this has improved over the years. Fringe Field Switching is a technique used to improve viewing angle and transmittance on IPS displays.[3] IPS technology is widely used in monitor panels of 20"–30" and LCD TV panels of 17"–52".

Hitachi IPS evolving technology [4]

  • Super TFT (IPS, 1996) - Wide viewing angle; transmittance/contrast ratio 100/100 (base level). Most panels also support true 8-bit per channel color. These improvements came at the cost of a slower response time, initially about 50 ms. IPS panels were also extremely expensive.
  • Super-IPS (S-IPS, 1998) - Color-shift free; 100/137. IPS has since been superseded by S-IPS (Hitachi, 1998), which has all the benefits of IPS technology with the addition of improved pixel refresh timing.
  • Advanced Super-IPS (AS-IPS, 2002) - High transmittance; 130/250. AS-IPS, also developed by Hitachi in 2002, improves substantially on the contrast ratio of traditional S-IPS panels, to the point where they are second only to some S-PVAs.
  • IPS-Provectus (IPS-Pro, 2004-) - High contrast ratio; 137/313. Currently the latest panel from IPS Alpha Technology, with a contrast ratio able to match PVA and ASV panels, no glowing at an angle, and a wider color gamut. Matsushita will become the major shareholder after acquiring Hitachi Displays as of March 31, 2009. [5]

LG IPS evolving technology

  • Super-IPS (S-IPS, 2001) - LG.Philips remains one of the main manufacturers of S-IPS panels, based on Hitachi Super-IPS.
  • Advanced Super-IPS (AS-IPS, 2005) - Increased contrast ratio with a better color gamut.
  • Horizontal IPS (H-IPS, 2007) - Improves the contrast ratio by twisting the electrode plane layout. The H-IPS panel is used in the NEC LCD2490WUXi and LCD2690WUXi, Mitsubishi RDT261W, HP LP2475w, Planar PX2611W,[6] and Apple's newest aluminum 24" iMac. An optional Advanced True White (A-TW) polarizing film was introduced for NEC, producing a more natural-looking white; this is used in professional/photography LCDs such as the NEC LCD2690WUXi.
  • Enhanced IPS (E-IPS, 2009) - A low-cost AS-IPS variant that lowers the aperture ratio to increase transmittance, resulting in some glowing at an angle and a lower color gamut and contrast ratio.

ASV

ASV (Advanced Super View), also called Axially Symmetric Vertical Alignment, was developed by Sharp. It is a VA mode in which the LC molecules orient perpendicular to the substrates in the off state. The bottom sub-pixel has continuously covered electrodes, while the upper one has a smaller-area electrode in the center of the sub-pixel.

When the field is on, the LC molecules start to tilt towards the center of the sub-pixels because of the electric field; as a result, a continuous pinwheel alignment (CPA) is formed, and the azimuthal angle rotates 360 degrees continuously, resulting in an excellent viewing angle. The ASV mode is therefore also called CPA mode.

Electrical interface

External consumer display devices like a TFT LCD mostly use an analog VGA connection, while newer, more expensive models mostly feature a digital interface such as DVI, HDMI, or DisplayPort. Inside external display devices there is a controller board that converts CVBS, VGA, DVI, HDMI, etc. into digital RGB at the native resolution of the display panel. In a laptop, the graphics chip directly produces a signal suitable for connection to the built-in TFT display. A control mechanism for the backlight is usually included on the same controller board.

The low-level interface of STN, DSTN, or TFT display panels uses either single-ended 5 V TTL signalling (older displays) or 3.3 V TTL signalling (slightly newer displays), transmitting the pixel clock, horizontal sync, vertical sync, and digital red, green and blue in parallel. Some models also feature input/display enable, horizontal scan direction and vertical scan direction signals.

Newer and larger (>15") TFT displays often use LVDS or TMDS signalling, which transmits the same content as the parallel interface (Hsync, Vsync, RGB) but serialises the control and RGB bits onto a small number of transmission lines synchronised to a clock running at one-seventh of each data lane's bit rate. There are usually 3 data signals and one clock line, transmitting 3×7 bits per clock cycle for 18 bpp; an optional 4th data signal enables 24 bpp.
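As a rough, illustrative calculation (assuming a 1280×1024 panel at 60 Hz and ignoring blanking intervals, neither of which is specified in the text), the sketch below works out the pixel clock and per-lane bit rate implied by this 3×7-bit serialisation.

```python
# Rough arithmetic for the LVDS-style serialisation described above, using an assumed
# 1280x1024 @ 60 Hz panel and ignoring blanking intervals for simplicity.
h, v, refresh = 1280, 1024, 60

pixel_clock = h * v * refresh            # pixels per second, roughly the clock rate (Hz)
bits_per_clock = 3 * 7                   # 3 data pairs x 7 serialised bits = 18 bpp
per_lane_bitrate = pixel_clock * 7       # each data pair runs at 7x the clock rate

print(f"pixel clock      : {pixel_clock / 1e6:6.1f} MHz")
print(f"payload per clock: {bits_per_clock} bits (18 bpp; a 4th pair gives 24 bpp)")
print(f"per-lane bit rate: {per_lane_bitrate / 1e6:6.1f} Mbit/s")
```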

Backlight intensity is usually controlled by varying a few volts DC, by generating a PWM signal, by adjusting a potentiometer, or it is simply fixed. This in turn controls a high-voltage (about 1.3 kV) DC-AC inverter or a matrix of LEDs.

The bare display panel will only accept a digital video signal at the resolution determined by the panel pixel matrix designed at manufacture. Some panels will ignore the colour LSBs to present a consistent interface (8-bit → 6-bit per colour).

The main reason laptop display panels cannot be reused directly with an ordinary computer graphics card, or as a television, is that they lack a hardware rescaler (often using some discrete cosine transform) that can resize the image to fit the native resolution of the display panel. With analogue signals like VGA, the display controller also needs to perform a high-speed analog-to-digital conversion. With digital input signals like DVI or HDMI, some simple bit stuffing is needed before the signal is fed to the rescaler if the input resolution does not match the display panel resolution. For CVBS or "TV" usage, a tuner and a colour decoder are needed to convert from quadrature amplitude modulation (QAM) to a luminance (Y), blue-difference (U), red-difference (V) representation, which in turn is transformed into red, green and blue.
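The final colour-space step mentioned here, turning Y, U, V back into red, green and blue, is a fixed linear transform. The sketch below uses the common BT.601 analogue coefficients; actual display controllers may use different fixed-point constants, so treat it as illustrative only.

```python
# Sketch of the final colour-space step mentioned above: converting Y'UV back to
# R'G'B'. Coefficients are the analogue BT.601 ones; real hardware may differ.
def yuv_to_rgb(y, u, v):
    r = y + 1.13983 * v
    g = y - 0.39465 * u - 0.58060 * v
    b = y + 2.03211 * u
    # Clamp to the valid [0, 1] range after the linear transform.
    return tuple(max(0.0, min(1.0, c)) for c in (r, g, b))

print(yuv_to_rgb(0.5, 0.0, 0.0))    # grey: equal R, G, B
print(yuv_to_rgb(0.3, -0.1, 0.2))   # an arbitrary test colour
```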

Robot

A robot is a virtual or mechanical artificial agent. In practice, it is usually an electro-mechanical system which, by its appearance or movements, conveys a sense that it has intent or agency of its own. The word robot can refer to both physical robots and virtual software agents, but the latter are usually referred to as bots. There is no consensus on which machines qualify as robots, but there is general agreement among experts and the public that robots tend to do some or all of the following: move around, operate a mechanical limb, sense and manipulate their environment, and exhibit intelligent behavior, especially behavior which mimics humans or other animals.

Stories of artificial helpers and companions and attempts to create them have a long history but fully autonomous machines only appeared in the 20th century. The first digitally operated and programmable robot, the Unimate, was installed in 1961 to lift hot pieces of metal from a die casting machine and stack them. Today, commercial and industrial robots are in widespread use performing jobs more cheaply or with greater accuracy and reliability than humans. They are also employed for jobs which are too dirty, dangerous or dull to be suitable for humans. Robots are widely used in manufacturing, assembly and packing, transport, earth and space exploration, surgery, weaponry, laboratory research, and mass production of consumer and industrial goods.

People have a generally positive perception of the robots they actually encounter. Domestic robots for cleaning and maintenance are increasingly common in and around homes. There is anxiety, however, over the economic effect of automation and the threat of robotic weaponry, anxiety which is not helped by the depiction of many villainous, intelligent, acrobatic robots in popular entertainment. Compared with their fictional counterparts, real robots are still benign, dim-witted, and clumsy.

Defining characteristics

KITT is mentally anthropomorphic, while ASIMO is physically anthropomorphic

While there is no single correct definition of "robot", a typical robot will have several or possibly all of the following properties.

The last property, the appearance of agency, is important when people are considering whether to call a machine a robot, or just a machine. (See anthropomorphism for examples of ascribing intent to inanimate objects.)

Mental agency
For robotic engineers, the physical appearance of a machine is less important than the way its actions are controlled. The more the control system seems to have agency of its own, the more likely the machine is to be called a robot. An important feature of agency is the ability to make choices.

  • A clockwork car is never considered a robot.
  • A remotely operated vehicle is sometimes considered a robot (or telerobot).
  • A car with an onboard computer, like Bigtrak, which could drive in a programmable sequence, might be called a robot.
  • A self-controlled car which could sense its environment and make driving decisions based on this information, such as the 1990s driverless cars of Ernst Dickmanns or the entries in the DARPA Grand Challenge, would quite likely be called a robot.
  • A sentient car, like the fictional KITT, which can make decisions, navigate freely and converse fluently with a human, is usually considered a robot.

Physical agency
However, for many laymen, if a machine appears to be able to control its arms or limbs, and especially if it appears anthropomorphic or zoomorphic (e.g. ASIMO or Aibo), it would be called a robot.

  • A player piano is rarely characterized as a robot.
  • A CNC milling machine is very occasionally characterized as a robot.
  • A factory automation arm is almost always characterized as an industrial robot.
  • An autonomous wheeled or tracked device, such as a self-guided rover or self-guided vehicle, is almost always characterized as a mobile robot or service robot.
  • A zoomorphic mechanical toy, like Roboraptor, is usually characterized as a robot.
  • A mechanical humanoid, like ASIMO, is almost always characterized as a robot, usually as a service robot.

Even for a 3-axis CNC milling machine using the same control system as a robot arm, it is the arm which is almost always called a robot, while the CNC machine is usually just a machine. Having eyes can also make a difference in whether a machine is called a robot, since humans instinctively connect eyes with sentience. However, simply being anthropomorphic is not a sufficient criterion for something to be called a robot. A robot must do something; an inanimate object shaped like ASIMO would not be considered a robot.

Definitions

A laparoscopic robotic surgery machine

It is difficult to compare numbers of robots in different countries, since there are different definitions of what a "robot" is. The International Organization for Standardization gives a definition of robot in ISO 8373: "an automatically controlled, reprogrammable, multipurpose, manipulator programmable in three or more axes, which may be either fixed in place or mobile for use in industrial automation applications." This definition is used by the International Federation of Robotics, the European Robotics Research Network (EURON), and many national standards committees.

The Robotics Institute of America (RIA) uses a broader definition: a robot is a "re-programmable multi-functional manipulator designed to move materials, parts, tools, or specialized devices through variable programmed motions for the performance of a variety of tasks". The RIA subdivides robots into four classes: devices that manipulate objects with manual control, automated devices that manipulate objects with predetermined cycles, programmable and servo-controlled robots with continuous point-to-point trajectories, and robots of this last type which also acquire information from the environment and move intelligently in response.

There is no one definition of robot which satisfies everyone, and many people have their own. For example, Joseph Engelberger, a pioneer in industrial robotics, once remarked: "I can't define a robot, but I know one when I see one." According to Encyclopaedia Britannica, a robot is "any automatically operated machine that replaces human effort, though it may not resemble human beings in appearance or perform functions in a humanlike manner". Merriam-Webster describes a robot as a "machine that looks like a human being and performs various complex acts (as walking or talking) of a human being", or a "device that automatically performs complicated often repetitive tasks", or a "mechanism guided by automatic controls".

Etymology

A scene from Karel Čapek's 1920 play R.U.R. (Rossum's Universal Robots), showing three robots

The word robot was introduced to the public by Czech writer Karel Čapek in his play R.U.R. (Rossum's Universal Robots), published in 1920. The play begins in a factory that makes artificial people called robots, but they are closer to the modern ideas of androids and clones, creatures who can be mistaken for humans. They can plainly think for themselves, though they seem happy to serve. At issue is whether the robots are being exploited and the consequences of their treatment.

However, Karel Čapek himself did not coin the word; he wrote a short letter in reference to an etymology in the Oxford English Dictionary in which he named his brother, the painter and writer Josef Čapek, as its actual originator. In an article in the Czech journal Lidové noviny in 1933, he explained that he had originally wanted to call the creatures laboři (from Latin labor, work). However, he did not like the word, and sought advice from his brother Josef, who suggested "roboti". The word robota means literally work, labor or serf labor, and figuratively "drudgery" or "hard work" in Czech and many Slavic languages. Serfdom was outlawed in 1848 in Bohemia, so at the time Čapek wrote R.U.R., usage of the term robota had broadened to include various types of work, but the obsolete sense of "serfdom" would still have been known.[16][17]

The word robotics, used to describe this field of study, was coined (albeit accidentally) by the science fiction writer Isaac Asimov.

History

Many ancient mythologies include artificial people, such as the mechanical servants built by the Greek god Hephaestus[18] (Vulcan to the Romans), the clay golems of Jewish legend and clay giants of Norse legend, and Galatea, the mythical statue of Pygmalion that came to life. In Greek drama, the deus ex machina (literally "god from the machine") was a dramatic device that usually involved lowering a deity, most often Zeus, by wires into the play to resolve a seemingly impossible problem.

In the 4th century BC, the Greek mathematician Archytas of Tarentum postulated a mechanical steam-operated bird he called "The Pigeon". Hero of Alexandria (10–70 AD) created numerous user-configurable automated devices, and described machines powered by air pressure, steam and water. Su Song built a clock tower in China in 1088 featuring mechanical figurines that chimed the hours.

Al-Jazari's programmable humanoid robots

Al-Jazari (1136–1206), a Muslim inventor during the Artuqid dynasty, designed and constructed a number of automated machines, including kitchen appliances, musical automata powered by water, and the first programmable humanoid robots in 1206. The robots appeared as four musicians on a boat in a lake, entertaining guests at royal drinking parties. His mechanism had a programmable drum machine with pegs (cams) that bumped into little levers that operated percussion instruments. The drummer could be made to play different rhythms and different drum patterns by moving the pegs to different locations.
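In modern terms, the peg drum works like a step sequencer: the arrangement of pegs is the program, and the rotating drum reads it out one position at a time. The short Python sketch below is a loose analogy only, not a reconstruction of Al-Jazari's mechanism; the instrument names, the eight positions per revolution, and the peg pattern are all invented for illustration.

    # A loose modern analogy of a peg drum: each row is one percussion lever,
    # each column is one position of the rotating drum, and True marks a peg.
    PEG_PATTERN = [
        [True,  False, False, False, True,  False, False, False],  # big drum
        [False, False, True,  False, False, False, True,  False],  # small drum
        [True,  True,  True,  True,  True,  True,  True,  True ],  # cymbal
    ]
    INSTRUMENTS = ["big drum", "small drum", "cymbal"]

    def play_revolution(pattern):
        """Print which levers each drum position trips; moving pegs changes the rhythm."""
        for position in range(len(pattern[0])):
            hits = [INSTRUMENTS[row] for row, pegs in enumerate(pattern) if pegs[position]]
            print(f"position {position}: {', '.join(hits) if hits else '(rest)'}")

    play_revolution(PEG_PATTERN)

Moving a peg to a different column changes when its lever fires, which is exactly how the drummer could be "reprogrammed" to play different rhythms.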

Early modern developments

Tea-serving karakuri, with mechanism, 19th century. Tokyo National Science Museum.

Leonardo da Vinci (1452–1519) sketched plans for a humanoid robot around 1495. Da Vinci's notebooks, rediscovered in the 1950s, contain detailed drawings of a mechanical knight now known as Leonardo's robot, able to sit up, wave its arms and move its head and jaw. The design was probably based on anatomical research recorded in his Vitruvian Man. It is not known whether he attempted to build it.

In 1738 and 1739, Jacques de Vaucanson exhibited several life-sized automatons: a flute player, a pipe player and a duck. The mechanical duck could flap its wings, crane its neck, and swallow food from the exhibitor's hand, and it gave the illusion of digesting its food by excreting matter stored in a hidden compartment. Complex mechanical toys and animals built in Japan in the 1700s were described in the Karakuri zui (Illustrated Machinery, 1796).

Modern developments

The Japanese craftsman Hisashige Tanaka (1799–1881), known as "Japan's Edison", created an array of extremely complex mechanical toys, some of which served tea, fired arrows drawn from a quiver, and even painted a Japanese kanji character.[23] In 1898 Nikola Tesla publicly demonstrated a radio-controlled torpedo.[24] Based on patents for "teleautomation", Tesla hoped to develop it into a weapon system for the US Navy.[25][26]

The first Unimate

In 1926, Westinghouse Electric Corporation created Televox, the first robot put to useful work. They followed Televox with a number of other simple robots, including one called Rastus, made in the crude image of a black man. In the 1930s, they created a humanoid robot known as Elektro for exhibition purposes, including the 1939 and 1940 World's Fairs. In 1928, Japan's first robot, Gakutensoku, was designed and constructed by biologist Makoto Nishimura.

The first electronic autonomous robots were created by William Grey Walter of the Burden Neurological Institute at Bristol, England in 1948 and 1949. They were named Elmer and Elsie. These robots could sense light and contact with external objects, and use these stimuli to navigate.

The first truly modern robot, digitally operated and programmable, was invented by George Devol in 1954 and was ultimately called the Unimate. Devol sold the first Unimate to General Motors in 1960, and it was installed in 1961 in a plant in Trenton, New Jersey to lift hot pieces of metal from a die casting machine and stack them.

Timeline

Date | Significance | Robot Name | Inventor
First century A.D. and earlier | Descriptions of more than 100 machines and automata, including a fire engine, a wind organ, a coin-operated machine, and a steam-powered engine, in Pneumatica and Automata by Heron of Alexandria | — | Ctesibius of Alexandria, Philo of Byzantium, Heron of Alexandria, and others
1206 | First programmable humanoid automatons | Boat with four robotic musicians | Al-Jazari
c. 1495 | Designs for a humanoid robot | Mechanical knight | Leonardo da Vinci
1738 | Mechanical duck that was able to eat, flap its wings, and excrete | Digesting Duck | Jacques de Vaucanson
1800s | Japanese mechanical toys that served tea, fired arrows, and painted | Karakuri toys | Hisashige Tanaka
1921 | First fictional automata called "robots" appear in the play R.U.R. | Rossum's Universal Robots | Karel Čapek
1930s | Humanoid robot exhibited at the 1939 and 1940 World's Fairs | Elektro | Westinghouse Electric Corporation
1948 | Simple robots exhibiting biological behaviors[31] | Elsie and Elmer | William Grey Walter
1956 | First commercial robot, from the Unimation company founded by George Devol and Joseph Engelberger, based on Devol's patents[32] | Unimate | George Devol
1961 | First installed industrial robot | Unimate | George Devol
1963 | First palletizing robot[33] | Palletizer | Fuji Yusoki Kogyo
1973 | First robot with six electromechanically driven axes[34] | Famulus | KUKA Robot Group
1975 | Programmable universal manipulation arm, a Unimation product | PUMA | Victor Scheinman

Contemporary uses

At present there are two main types of robots, based on their use: general-purpose autonomous robots and purpose-built robots.

General-purpose autonomous robots

A general-purpose robot acts as a guide during the day and a security guard at night

General-purpose autonomous robots are robots that typically mimic human behavior and are often built to be physically similar to humans as well. This type of robot is therefore also often called a humanoid robot. General-purpose autonomous robots are not as flexible as people, but they often can navigate independently in known spaces. Like computers, general-purpose robots can link with software and accessories that increase their usefulness. They may recognize people or objects, talk, provide companionship, monitor environmental quality, pick up supplies and perform other useful tasks. General-purpose robots may perform a variety of tasks simultaneously or they may take on different roles at different times of day.

Purpose-built robots

In 2006, there were an estimated 3,540,000 service robots in use, and an estimated 950,000 industrial robots. A different estimate counted more than one million robots in operation worldwide in the first half of 2008, with roughly half in Asia, 32% in Europe, 16% in North America, 1% in Australasia and 1% in Africa. Industrial and service robots can be placed into roughly two classifications based on the type of job they do. The first category includes tasks which a robot can do with greater productivity, accuracy, or endurance than humans; the second category consists of dirty, dangerous or dull jobs which humans find undesirable.

Increased productivity, accuracy, and endurance

A Pick and Place robot in a factory

Many factory jobs are now performed by robots. This has led to cheaper mass-produced goods, including automobiles and electronics. Stationary manipulators used in factories have become the largest market for robots.

Some examples of factory robots:

  • Car production: Over the last three decades automobile factories have become dominated by robots. A typical factory contains hundreds of industrial robots working on fully automated production lines, with one robot for every ten human workers. On an automated production line, a vehicle chassis on a conveyor is welded, glued, painted and finally assembled at a sequence of robot stations.
  • Packaging: Industrial robots are also used extensively for palletizing and packaging of manufactured goods, for example for rapidly taking drink cartons from the end of a conveyor belt and placing them into boxes, or for loading and unloading machining centers.
  • Electronics: Mass-produced printed circuit boards (PCBs) are almost exclusively manufactured by pick-and-place robots, typically with SCARA manipulators, which remove tiny electronic components from strips or trays and place them onto PCBs with great accuracy. Such robots can place hundreds of thousands of components per hour, far out-performing a human in speed, accuracy, and reliability.
Automated guided vehicle carrying medical supplies and records

  • Automated guided vehicles (AGVs): Mobile robots, following markers or wires in the floor, or using vision or lasers, are used to transport goods around large facilities, such as warehouses, container ports, or hospitals.

    • Early AGV-style robots were limited to tasks that could be accurately defined and had to be performed the same way every time. Very little feedback or intelligence was required, and the robots needed only the most basic exteroceptors (sensors). The limitations of these AGVs are that their paths are not easily altered and they cannot reroute themselves if obstacles block them. If one AGV breaks down, it may stop the entire operation.
    • Interim AGV technologies navigate by triangulation from beacons or from bar-code grids on the floor or ceiling. In most factories, triangulation systems tend to require moderate to high maintenance, such as daily cleaning of all beacons or bar codes. Also, if a tall pallet or large vehicle blocks beacons or a bar code is marred, AGVs may become lost. Often such AGVs are designed to be used in human-free environments.
    • Newer AGVs such as the Speci-Minder, ADAM, Tug and PatrolBot Gofer are designed for people-friendly workspaces. They navigate by recognizing natural features. 3D scanners or other means of sensing the environment in two or three dimensions help to eliminate cumulative errors in dead-reckoning calculations of the AGV's current position (a minimal sketch of this drift-and-correct idea follows this list). Some AGVs can create maps of their environment using scanning lasers with simultaneous localization and mapping (SLAM) and use those maps to navigate in real time with other path planning and obstacle avoidance algorithms. They are able to operate in complex environments and perform non-repetitive and non-sequential tasks such as transporting photomasks in a semiconductor lab, specimens in hospitals and goods in warehouses. For dynamic areas, such as warehouses full of pallets, AGVs require additional strategies. Only a few vision-augmented systems currently claim to be able to navigate reliably in such environments.
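The drift-and-correct behavior mentioned above can be illustrated in a few lines of Python. This is a minimal sketch under invented parameters (step length, noise levels, number of steps), not the navigation code of any AGV product: it shows how pure dead reckoning accumulates position error and why recognizing a mapped feature lets the vehicle snap its estimate back into line.

    import math
    import random

    def dead_reckon(pose, distance, turn):
        """Advance an (x, y, heading) pose by one commanded move."""
        x, y, theta = pose
        theta += turn
        return (x + distance * math.cos(theta),
                y + distance * math.sin(theta),
                theta)

    random.seed(0)
    true_pose = (0.0, 0.0, 0.0)      # where the vehicle actually ends up
    believed_pose = (0.0, 0.0, 0.0)  # where dead reckoning thinks it is

    for step in range(50):
        distance, turn = 1.0, 0.05                        # commanded motion
        noisy_d = distance * (1 + random.gauss(0, 0.02))  # wheel slip / encoder noise
        noisy_t = turn + random.gauss(0, 0.01)
        true_pose = dead_reckon(true_pose, noisy_d, noisy_t)
        believed_pose = dead_reckon(believed_pose, distance, turn)

    error = math.hypot(true_pose[0] - believed_pose[0],
                       true_pose[1] - believed_pose[1])
    print(f"Position error after 50 steps of pure dead reckoning: {error:.2f} m")

    # Observing a known landmark (or matching a laser scan against a map, as in
    # SLAM) lets the AGV replace its drifting estimate with a corrected one.
    believed_pose = true_pose
    print("Error after one landmark correction: 0.00 m")

SLAM systems perform this kind of correction continuously while also building the map they correct against.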

Dirty, dangerous, dull or inaccessible tasks

A U.S. Marine Corps technician prepares to use a telerobot to detonate a buried improvised explosive device near Camp Fallujah, Iraq

There are many jobs which humans would rather leave to robots. The job may be boring, such as domestic cleaning, or dangerous, such as exploring inside a volcano. Other jobs are physically inaccessible, such as exploring another planet, cleaning the inside of a long pipe, or performing laparoscopic surgery.

  • Telerobots: When a human cannot be present on site to perform a job because it is dangerous, far away, or inaccessible, teleoperated robots, or telerobots, are used. Rather than following a predetermined sequence of movements, a telerobot is controlled from a distance by a human operator. The robot may be in another room or another country, or may be on a very different scale to the operator. For instance, a laparoscopic surgery robot allows the surgeon to work inside a human patient on a relatively small scale compared to open surgery, significantly shortening recovery time. When disabling a bomb, the operator sends a small robot to disable it. Several authors have been using a device called the Longpen to sign books remotely. Teleoperated robot aircraft, like the Predator Unmanned Aerial Vehicle, are increasingly being used by the military. These pilotless drones can search terrain and fire on targets. Hundreds of robots such as iRobot's Packbot and the Foster-Miller TALON are being used in Iraq and Afghanistan by the U.S. military to defuse roadside bombs or improvised explosive devices (IEDs) in an activity known as explosive ordnance disposal (EOD).
The Roomba domestic vacuum cleaner robot does a single, menial job
  • In the home: As prices fall and robots become smarter and more autonomous, simple robots dedicated to a single task work in over a million homes. They are taking on simple but unwanted jobs, such as vacuum cleaning, floor washing, and lawn mowing. Some find these robots to be cute and entertaining, which is one reason that they can sell very well.
  • Elder Care: The population is aging in many countries, especially Japan, meaning that there are increasing numbers of elderly people to care for, but relatively fewer young people to care for them.[54][55] Humans make the best carers, but where they are unavailable, robots are gradually being introduced.

Types of robots

TOPIO, a humanoid robot developed by TOSY that can play ping-pong.

Robots can also be classified by their specificity of purpose. A robot might be designed to perform one particular task extremely well, or a range of tasks less well. Of course, all robots by their nature can be re-programmed to behave differently, but some are limited by their physical form. For example, a factory robot arm can perform jobs such as cutting, welding, gluing, or acting as a fairground ride, while a pick-and-place robot can only populate printed circuit boards.

Research robots

While most robots today are installed in factories or homes, performing labour or life-saving jobs, many new types of robot are being developed in laboratories around the world. Much of the research in robotics focuses not on specific industrial tasks, but on investigations into new types of robot, alternative ways to think about or design robots, and new ways to manufacture them. It is expected that these new types of robot will be able to solve real-world problems when they are finally realized.

A microfabricated electrostatic gripper holding some silicon nanowires.
  • Nanorobots: Nanorobotics is the still largely hypothetical technology of creating machines or robots at or close to the scale of a nanometer (10⁻⁹ meters). Also known as nanobots or nanites, they would be constructed from molecular machines. So far, researchers have mostly produced only parts of these complex systems, such as bearings, sensors, and synthetic molecular motors, but functioning robots have also been made, such as the entrants to the Nanobot Robocup contest. Researchers also hope to be able to create entire robots as small as viruses or bacteria, which could perform tasks on a tiny scale. Possible applications include microsurgery (on the level of individual cells), utility fog, manufacturing, weaponry and cleaning. Some people have suggested that if there were nanobots which could reproduce, the earth would turn into "grey goo", while others argue that this hypothetical outcome is nonsense.[62][63]
  • Soft Robots: Robots with silicone bodies and flexible actuators (air muscles, electroactive polymers, and ferrofluids), controlled using fuzzy logic and neural networks, look and feel different from robots with rigid skeletons, and are capable of different behaviors.
  • Reconfigurable Robots: A few researchers have investigated the possibility of creating robots which can alter their physical form to suit a particular task, like the fictional T-1000. Real robots are nowhere near that sophisticated, however, and mostly consist of a small number of cube-shaped units which can move relative to their neighbours, for example SuperBot. Algorithms have been designed in case any such robots become a reality.
A swarm of robots from the Open-source Micro-robotic Project
  • Swarm robots: Inspired by colonies of insects such as ants and bees, researchers are modeling the behavior of swarms of thousands of tiny robots which together perform a useful task, such as finding something hidden, cleaning, or spying. Each robot is quite simple, but the emergent behavior of the swarm is more complex. The whole set of robots can be considered as one single distributed system, in the same way an ant colony can be considered a superorganism, exhibiting swarm intelligence. The largest swarms so far created include the iRobot swarm, the SRI/MobileRobots CentiBots project and the Open-source Micro-robotic Project swarm, which are being used to research collective behaviors. Swarms are also more resistant to failure. Whereas one large robot may fail and ruin a mission, a swarm can continue even if several robots fail. This could make them attractive for space exploration missions, where failure can be extremely costly.
  • Haptic interface robots: Robotics also has application in the design of virtual reality interfaces. Specialized robots are in widespread use in the haptic research community. These robots, called "haptic interfaces", allow touch-enabled user interaction with real and virtual environments. Robotic forces allow simulating the mechanical properties of "virtual" objects, which users can experience through their sense of touch (a minimal force-law sketch follows below). Haptic interfaces are also used in robot-aided rehabilitation.
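The core idea of rendering a "virtual" object with forces can be shown in a few lines. The sketch below is illustrative only and does not use the API of any particular haptic device: it renders a virtual wall with a simple spring law, where the wall position and stiffness are invented parameters. In a real system the same computation runs in a control loop at roughly 1 kHz, reading the device position and commanding its motors.

    WALL_X = 0.05      # position of the virtual wall surface, in meters
    STIFFNESS = 800.0  # virtual spring constant, in newtons per meter

    def wall_force(probe_x: float) -> float:
        """Force (N) pushing the probe back out of the virtual wall."""
        penetration = probe_x - WALL_X
        if penetration <= 0.0:
            return 0.0                    # free space: no force is rendered
        return -STIFFNESS * penetration   # inside the wall: spring pushes back

    # Sweep the probe through the wall and print the commanded force.
    for x_mm in range(40, 61, 5):
        x = x_mm / 1000.0
        print(f"probe at {x:.3f} m -> commanded force {wall_force(x):+.2f} N")

A stiffer spring makes the wall feel harder, but too much stiffness relative to the control-loop rate makes the rendering unstable, which is one reason haptic loops must run so fast.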

Potential problems

Fears and concerns about robots have been repeatedly expressed in a wide range of books and films. A common theme is the development of a master race of conscious and highly intelligent robots, motivated to take over or destroy the human race. (See The Terminator, Runaway, Blade Runner, Robocop, the Replicators in Stargate, the Cylons in Battlestar Galactica, The Matrix, and I, Robot.) Some fictional robots are programmed to kill and destroy; others gain superhuman intelligence and abilities by upgrading their own software and hardware. Examples of popular media where the robot becomes evil include 2001: A Space Odyssey, Red Planet (film), ... Another common theme is the reaction, sometimes called the "uncanny valley", of unease and even revulsion at the sight of robots that mimic humans too closely. Frankenstein (1818), often called the first science fiction novel, has become synonymous with the theme of a robot or monster advancing beyond its creator. In the TV show Futurama, robots are portrayed as humanoid figures that live alongside humans, not as robotic butlers; they still work in industry, but also lead daily lives of their own.

Manuel De Landa has noted that "smart missiles" and autonomous bombs equipped with artificial perception can be considered robots, and they make some of their decisions autonomously. He believes this represents an important and dangerous trend in which humans are handing over important decisions to machines.

Marauding robots may have entertainment value, but unsafe use of robots constitutes an actual danger. A heavy industrial robot with powerful actuators and unpredictably complex behavior can cause harm, for instance by stepping on a human's foot or falling on a human. Most industrial robots operate inside a security fence which separates them from human workers, but not all. The first fatality involving a robot was Robert Williams, who was struck by a robotic arm at a casting plant in Flat Rock, Michigan on January 25, 1979. The second was 37-year-old Kenji Urada, a Japanese factory worker, in 1981. Urada was performing routine maintenance on the robot, but neglected to shut it down properly, and was accidentally pushed into a grinding machine. For the last 20 years there have been many claims that robots will soon be in every household, for example as robotic butlers. This has not happened yet, and it is doubtful that it will happen in the next 10 years.

Literature

A gynoid, or robot designed to resemble a woman, can appear comforting to some people and disturbing to others[72]

Robotic characters, androids (artificial men/women) or gynoids (artificial women), and cyborgs (also "bionic men/women", or humans with significant mechanical enhancements) have become a staple of science fiction.

The first reference in Western literature to mechanical servants appears in Homer's Iliad. In Book XVIII, Hephaestus, god of fire, creates new armor for the hero Achilles, assisted by robots.[76] According to the Rieu translation, "Golden maidservants hastened to help their master. They looked like real women and could not only speak and use their limbs but were endowed with intelligence and trained in handwork by the immortal gods." Of course, the words "robot" or "android" are not used to describe them, but they are nevertheless mechanical devices human in appearance.

The most prolific author of stories about robots was Isaac Asimov (1920–1992), who placed robots and their interaction with society at the center of many of his works. Asimov carefully considered the problem of the ideal set of instructions robots might be given in order to lower the risk to humans, and arrived at his Three Laws of Robotics: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey orders given to it by human beings, except where such orders would conflict with the First Law; and a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. These were introduced in his 1942 short story "Runaround", although foreshadowed in a few earlier stories. Later, Asimov added the Zeroth Law: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm"; the rest of the laws are modified sequentially to acknowledge this.

According to the Oxford English Dictionary, the first passage in Asimov's short story "Liar!" (1941) that mentions the First Law is the earliest recorded use of the word robotics. Asimov was not initially aware of this; he assumed the word already existed by analogy with mechanics, hydraulics, and other similar terms denoting branches of applied knowledge.