The computerization of an entire country is impossible to avoid. Computers are nowadays used everywhere: in schools, at home, in companies, in institutions, and so on.
One of the bad aspects of computer use is that even children who cannot yet read or write are already accustomed to the machine. They develop, at a very early age, the habit of playing on computers for hours on end.
According to recent research, computers do not improve our health, and they can seriously damage our bodies when they are not properly used.
There is certainly sharp competition in the computer world, and several different opinions have developed on this topic. Computer manufacturers assert that computers are entirely harmless, while makers of protective equipment hold rather different views.
The work done by a computer operator is generally quite intensive and exhausting, regardless of the country. In Germany, for instance, this type of work is ranked among the 40 most health-damaging jobs, and it is recommended that work at the computer not exceed 50% of the working day.
When we work at the monitor, we need to read, type, analyze, and correct mistakes, perhaps more than once. In exchange, the eyes must constantly adjust to all that strain, and for this reason we may assert that the computer really does have a negative impact on our eyesight.
Even the celebrated multi-millionaire Microsoft chief, Bill Gates, is said to have severely damaged his eyesight through computer use. People for whom computer work is their bread and butter have the most health complaints involving muscle and joint disorders. These might be limited to neck stiffness and pain in the shoulders and lower back, but there can be more serious problems, such as carpal tunnel syndrome, damage to the nerves of the wrist and hand caused by excessive time spent working at the computer.
The harmful effects of the computer on our eyes are grouped under the name Computer Vision Syndrome (CVS), which includes dryness of the eyes, eyestrain, backache, neck ache, wrist ache, reduced visual acuity, distress, reduced capacity for concentration, and so on.
Those who need to spend a lot of time in front of a computer encounter two main problems: information overload and electromagnetic fields.
Firstly, let us see what this overload of information refers to. Just as the heart spends energy on blood circulation and the lungs need energy for breathing, the brain requires energy for managing pieces of information. Some scientists estimate that the human brain uses 1,200 kcal every day.
Thus, a problem occurs when the brain must draw energy away from other organs in order to make up for the shortfall it faces. Because of this, specialists such as computer programmers and engineers are often under a high level of mental pressure, which may lead to anxiety, lack of energy, emotional instability, and inefficiency in completing the required work.
Another harm brought about by computer work is the unavoidable exposure to so-called electromagnetic fields. Contrary to some people's opinion, these do affect the biological processes of our body, which can be even more sensitive to weak electromagnetic fields than to strong ones.
In fact, some people who do not react in any way to strong electromagnetic fields may feel a certain dizziness when exposed to weak ones. That is because those low-intensity fields are claimed to reduce blood pressure; for that reason, we are likely to sense a decrease in body temperature after overusing the computer.
Of course, we may adopt several measures to avoid or reduce these bad influences of working at the computer. For instance, we could start with an easier project and leave the difficult one until later, when we feel more able to do it. Alternatively, we might simply turn to simple activities in order to divert our attention from the disturbing aspects of information overload.
Passive smoking, for example taking breaks in the company of heavy smokers, is also not recommended, since it strongly affects the power of concentration and gives a sensation of dizziness.
One should also avoid drinking strong coffee, for it depletes the nervous system's energy reserves, which apparently cannot be replenished for quite a long period of time.
computer
Wednesday, November 24, 2010
history of computer in nepal
There is not a long history of computers in Nepal. Nepal hired various calculators and computers for its census calculations. The following list shows this history:
· In 2018 BS, an electronic calculator called "Facit" was used for the census.
· In the 2028 BS census, the IBM 1401, a second generation mainframe computer, was used.
· In 2031 BS, a center for Electronic Data Processing, later renamed the National Computer Center (NCC), was established for national data processing and computer training.
· In 2038 BS, the ICL 2950/10, a fourth generation mainframe computer, was used for the census.
· Nowadays almost every institution, business organization, communication center, ticket counter, etc. uses computers.
The history of computers in Nepal is not very old, since Nepal has not contributed to the evolution of the computer. It was in 2028 B.S. that HMG brought the IBM 1401 (a second generation computer) on rent for Rs. 1 lakh 25 thousand per month to process census data. Later the computer was bought by the National Computer Center (NCC). In 2038 B.S., a fourth generation computer, the ICL 2950/10, was imported from England with the aid of UNDP and UNFPA for 20 lakh US dollars. This computer had 64 terminals and is now kept in a museum.
At that time the British Government helped to develop manpower for the NCC. In the meantime, Nepalese students went to India, Thailand, and the USA for computer education. In 2039 B.S., microcomputers such as Apple, Vector, Sins, etc. were imported by private companies and individuals. Many private companies, such as Computer Consultancy (CC), Management Information Processing System (MIPS), and Data System International (DSI), were established. These companies started selling computers and training people in order to produce skilled manpower within Nepal itself.
Nowadays, computers with faster processors and larger storage are available cheaply in the Nepalese market. Students are given computer education from school level. At present, the Computer Association of Nepal (CAN) is the governing body for IT in Nepal.
Now the world is progressing rapidly in science and technology. This is the 21st century, where the demand for technology is ever greater. Information technology has made the world smaller and smaller, because people can access the whole world from one place: they can shop and gather information about the world with one click.
Our country, Nepal, is also developing in information technology. Nowadays, different colleges in Nepal have started teaching information technology courses, through which students can compete in the international IT market. In some schools, computer education is given from class II onward. But the education system of our country is outdated: courses that have already fallen out of the international market are still being used in our schools and colleges.
To participate in the international market, our education system should be changed and a new system put in place. From class II to bachelor level, students get bored learning the same history, generations, computer systems, and other repetitive courses. Market-based computer courses should be taught in schools and colleges so that the unemployment problem can be solved to some extent. If students receive market-based IT courses, they can work from a single PC at home and earn for themselves.
We suggest that the IT education system of Nepal be changed and new courses introduced that help students like us establish our own IT careers and work for the country, so that the unemployment problem can be solved, at least in part, by the country's youth. Some universities should provide online graduation courses so that students can work while studying. Some of our colleges say that students can study in the morning and work during the day, but after joining, the morning classes run until 12 PM to 2 PM; and who will give a part-time job to students who must fund their studies with the money they earn? If such colleges ran online courses, students could study whenever they are free from work. If students cannot get market-based courses, how can they work in the market with what they have learned? We must recognize that unless the education system of Nepal is modified, the economic and social status of the country cannot be built up.
internet
The Internet is a global system of interconnected computer networks that use the standard Internet Protocol Suite (TCP/IP) to serve billions of users worldwide. It is a network of networks that consists of millions of private, public, academic, business, and government networks, of local to global scope, that are linked by a broad array of electronic and optical networking technologies. The Internet carries a vast range of information resources and services, such as the inter-linked hypertext documents of the World Wide Web (WWW) and the infrastructure to support electronic mail.
Most traditional communications media including telephone, music, film, and television are being reshaped or redefined by the Internet. Newspaper, book and other print publishing are having to adapt to Web sites and blogging. The Internet has enabled or accelerated new forms of human interactions through instant messaging, Internet forums, and social networking. Online shopping has boomed both for major retail outlets and small artisans and traders. Business-to-business and financial services on the Internet affect supply chains across entire industries.
The origins of the Internet reach back to the 1960s with both private and United States military research into robust, fault-tolerant, and distributed computer networks. The funding of a new U.S. backbone by the National Science Foundation, as well as private funding for other commercial backbones, led to worldwide participation in the development of new networking technologies, and the merger of many networks. The commercialization of what was by then an international network in the mid 1990s resulted in its popularization and incorporation into virtually every aspect of modern human life. As of 2009, an estimated quarter of Earth's population used the services of the Internet.
The Internet has no centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own standards. Only the overreaching definitions of the two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System, are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.
Internet is a short form of the technical term "internetwork",[1] the result of interconnecting computer networks with special gateways (routers). The Internet is also often referred to as the net.
The term the Internet, when referring to the entire global system of IP networks, has traditionally been treated as a proper noun and written with an initial capital letter. In the media and popular culture a trend has developed to regard it as a generic term or common noun and thus write it as "the internet", without capitalization.
The terms Internet and World Wide Web are often used in everyday speech without much distinction. However, the Internet and the World Wide Web are not one and the same. The Internet is a global data communications system. It is a hardware and software infrastructure that provides connectivity between computers. In contrast, the Web is one of the services communicated via the Internet. It is a collection of interconnected documents and other resources, linked by hyperlinks and URLs.[2]
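To make the distinction concrete, here is a minimal sketch using Python's standard library (the URL below uses `www.example.com`, a documentation placeholder): a URL names a Web resource, while the host name it contains is what the Internet's addressing machinery (DNS and IP) must actually locate.

```python
from urllib.parse import urlparse

# A URL names a Web resource; delivering it relies on the underlying Internet.
url = "http://www.example.com/index.html"
parts = urlparse(url)

# The scheme ("http") selects a Web protocol -- one service among many
# (e-mail, file transfer, etc.) carried over the Internet.
print(parts.scheme)   # http
# The host is what the Internet's addressing (DNS -> IP) must locate.
print(parts.netloc)   # www.example.com
# The path identifies a document within that host's Web server.
print(parts.path)     # /index.html
```

In other words, the Internet moves bytes between hosts; the Web is one of the applications that interprets those bytes as hyperlinked documents.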
In many technical illustrations, when the precise location or interrelation of Internet resources is not important, extended networks such as the Internet are often depicted as a cloud.[3] This verbal image has been formalized in the newer concept of cloud computing.
The USSR's launch of Sputnik spurred the United States to create the Advanced Research Projects Agency (ARPA or DARPA) in February 1958 to regain a technological lead.[4][5] ARPA created the Information Processing Technology Office (IPTO) to further the research of the Semi Automatic Ground Environment (SAGE) program, which had networked country-wide radar systems together for the first time. The IPTO's purpose was to find ways to address the US Military's concern about survivability of their communications networks, and as a first step interconnect their computers at the Pentagon, Cheyenne Mountain, and Strategic Air Command headquarters (SAC). J. C. R. Licklider, a promoter of universal networking, was selected to head the IPTO. Licklider moved from the Psycho-Acoustic Laboratory at Harvard University to MIT in 1950, after becoming interested in information technology. At MIT, he served on a committee that established Lincoln Laboratory and worked on the SAGE project. In 1957 he became a Vice President at BBN, where he bought the first production PDP-1 computer and conducted the first public demonstration of time-sharing.
Professor Leonard Kleinrock with one of the first ARPANET Interface Message Processors at UCLA
At the IPTO, Licklider's successor Ivan Sutherland in 1965 got Lawrence Roberts to start a project to build a network, and Roberts based the technology on the work of Paul Baran,[6] who had written an exhaustive study for the United States Air Force recommending packet switching (as opposed to circuit switching) to achieve better network robustness and disaster survivability. Roberts had worked at the MIT Lincoln Laboratory, originally established to work on the design of the SAGE system. UCLA professor Leonard Kleinrock had provided the theoretical foundations for packet networks in 1962, and later, in the 1970s, for hierarchical routing, concepts which have underpinned the development toward today's Internet.
Sutherland's successor Robert Taylor convinced Roberts to build on his early packet switching successes and come and be the IPTO Chief Scientist. Once there, Roberts prepared a report called Resource Sharing Computer Networks which was approved by Taylor in June 1968 and laid the foundation for the launch of the working ARPANET the following year.
After much work, the first two nodes of what would become the ARPANET were interconnected between Kleinrock's Network Measurement Center at the UCLA's School of Engineering and Applied Science and Douglas Engelbart's NLS system at SRI International (SRI) in Menlo Park, California, on 29 October 1969. The third site on the ARPANET was the Culler-Fried Interactive Mathematics center at the University of California at Santa Barbara, and the fourth was the University of Utah Graphics Department. In an early sign of future growth, there were already fifteen sites connected to the young ARPANET by the end of 1971.
The ARPANET was one of the predecessor networks of today's Internet. In an independent development, Donald Davies at the UK National Physical Laboratory also arrived at the concept of packet switching in the early 1960s, first giving a talk on the subject in 1965, after which the teams working in the new field on both sides of the Atlantic first became acquainted. It was in fact Davies' coinage of the terms "packet" and "packet switching" that was adopted as the standard terminology. Davies also built a packet-switched network in the UK, called the Mark I, in 1970.[7] Bolt Beranek and Newman (BBN), the private contractor for the ARPANET, set out to create a separate commercial version after the operation of "value added carriers" was legalized in the U.S.[8] The network it established, called Telenet, began operation in 1975, installing free public dial-up access in cities throughout the U.S. Telenet was the first packet-switching network open to the general public.[9]
Following the demonstration that packet switching worked on the ARPANET, the British Post Office, Telenet, DATAPAC and TRANSPAC collaborated to create the first international packet-switched network service. In the UK, this was referred to as the International Packet Switched Service (IPSS), in 1978. The collection of X.25-based networks grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981. The X.25 packet switching standard was developed in the CCITT (now called ITU-T) around 1976.
A plaque commemorating the birth of the Internet at Stanford University
X.25 was independent of the TCP/IP protocols that arose from the experimental work of DARPA on the ARPANET, Packet Radio Net and Packet Satellite Net during the same time period.
The early ARPANET ran on the Network Control Program (NCP), implementing the host-to-host connectivity and switching layers of the protocol stack, designed and first implemented in December 1970 by a team called the Network Working Group (NWG) led by Steve Crocker. To respond to the network's rapid growth as more and more locations connected, Vinton Cerf and Robert Kahn developed the first description of the now widely used TCP protocols during 1973 and published a paper on the subject in May 1974. Use of the term "Internet" to describe a single global TCP/IP network originated in December 1974 with the publication of RFC 675, the first full specification of TCP that was written by Vinton Cerf, Yogen Dalal and Carl Sunshine, then at Stanford University. During the next nine years, work proceeded to refine the protocols and to implement them on a wide range of operating systems. The first TCP/IP-based wide-area network was operational by 1 January 1983 when all hosts on the ARPANET were switched over from the older NCP protocols. In 1985, the United States' National Science Foundation (NSF) commissioned the construction of the NSFNET, a university 56 kilobit/second network backbone using computers called "fuzzballs" by their inventor, David L. Mills. The following year, NSF sponsored the conversion to a higher-speed 1.5 megabit/second network. A key decision to use the DARPA TCP/IP protocols was made by Dennis Jennings, then in charge of the Supercomputer program at NSF.
The opening of the NSFNET to other networks began in 1988.[10] The US Federal Networking Council approved the interconnection of the NSFNET to the commercial MCI Mail system in that year and the link was made in the summer of 1989. Other commercial electronic mail services were soon connected, including OnTyme, Telemail and Compuserve. In that same year, three commercial Internet service providers (ISPs) began operations: UUNET, PSINet, and CERFNET. Important, separate networks that offered gateways into, then later merged with, the Internet include Usenet and BITNET. Various other commercial and educational networks, such as Telenet (by that time renamed to Sprintnet), Tymnet, Compuserve and JANET were interconnected with the growing Internet in the 1980s as the TCP/IP protocol became increasingly popular. The adaptability of TCP/IP to existing communication networks allowed for rapid growth. The open availability of the specifications and reference code permitted commercial vendors to build interoperable network components, such as routers, making standardized network gear available from many companies. This aided in the rapid growth of the Internet and the proliferation of local-area networking. It seeded the widespread implementation and rigorous standardization of TCP/IP on UNIX and virtually every other common operating system.
This NeXT Computer was used by Sir Tim Berners-Lee at CERN and became the world's first Web server.
Although the basic applications and guidelines that make the Internet possible had existed for almost two decades, the network did not gain a public face until the 1990s. On 6 August 1991, CERN, a pan-European organization for particle research, publicized the new World Wide Web project. The Web was invented by British scientist Tim Berners-Lee in 1989. An early popular web browser was ViolaWWW, patterned after HyperCard and built using the X Window System. It was eventually replaced in popularity by the Mosaic web browser. In 1993, the National Center for Supercomputing Applications at the University of Illinois released version 1.0 of Mosaic, and by late 1994 there was growing public interest in the previously academic, technical Internet. By 1996 usage of the word Internet had become commonplace, and consequently, so had its use as a synecdoche in reference to the World Wide Web.
Meanwhile, over the course of the decade, the Internet successfully accommodated the majority of previously existing public computer networks (although some networks, such as FidoNet, have remained separate). During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%.[11] This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary open nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network.[12] The estimated population of Internet users is 1.97 billion as of 30 June 2010.[13]
From 2009 onward, the Internet was expected to grow significantly in Brazil, Russia, India, China, and Indonesia (the BRICI countries). These countries have large populations and moderate to high economic growth, but still low Internet penetration rates. In 2009, the BRICI countries represented about 45 percent of the world's population and had approximately 610 million Internet users; by 2015, the number of Internet users in the BRICI countries was projected to double to 1.2 billion, and to triple in Indonesia.
The complex communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. While the hardware can often be used to support other software systems, it is the design and the rigorous standardization process of the software architecture that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been delegated to the Internet Engineering Task Force (IETF).[16] The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. Resulting discussions and final standards are published in a series of publications, each called a Request for Comments (RFC), freely available on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices (BCP) when implementing Internet technologies.
The Internet Standards describe a framework known as the Internet Protocol Suite. This is a model architecture that divides methods into a layered system of protocols (RFC 1122, RFC 1123). The layers correspond to the environment or scope in which their services operate. At the top is the Application Layer, the space for the application-specific networking methods used in software applications, e.g., a web browser program. Below this top layer, the Transport Layer connects applications on different hosts via the network (e.g., client–server model) with appropriate data exchange methods. Underlying these layers are the core networking technologies, consisting of two layers. The Internet Layer enables computers to identify and locate each other via Internet Protocol (IP) addresses, and allows them to connect to one-another via intermediate (transit) networks. Lastly, at the bottom of the architecture, is a software layer, the Link Layer, that provides connectivity between hosts on the same local network link, such as a local area network (LAN) or a dial-up connection. The model, also known as TCP/IP, is designed to be independent of the underlying hardware which the model therefore does not concern itself with in any detail. Other models have been developed, such as the Open Systems Interconnection (OSI) model, but they are not compatible in the details of description, nor implementation, but many similarities exist and the TCP/IP protocols are usually included in the discussion of OSI networking.
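The layering described above can be sketched with a toy encapsulation example. This is not the real TCP/IP header layout; the field sizes and helper names are simplified illustrations of how each layer wraps the data handed down from the layer above.

```python
import struct

# Toy illustration of layered encapsulation (not real TCP/IP headers):
# each layer wraps the payload produced by the layer above it.

def application_layer(message: str) -> bytes:
    # Application Layer: e.g. an HTTP-like request produced by a browser.
    return f"GET /index.html\r\n{message}\r\n".encode()

def transport_layer(payload: bytes, src_port: int, dst_port: int) -> bytes:
    # Transport Layer: prepend port numbers, which identify the
    # communicating applications on each host.
    header = struct.pack("!HH", src_port, dst_port)
    return header + payload

def internet_layer(segment: bytes, src_ip: str, dst_ip: str) -> bytes:
    # Internet Layer: prepend source and destination IP addresses,
    # which identify and locate the hosts themselves.
    header = bytes(map(int, src_ip.split("."))) + bytes(map(int, dst_ip.split(".")))
    return header + segment

packet = internet_layer(
    transport_layer(application_layer("Host: www.example.com"), 49152, 80),
    "192.0.2.1", "192.0.2.2",
)
# The Link Layer would finally frame this packet for the local network.
print(len(packet))
```

Reading the finished packet from the outside in (IP addresses, then ports, then application data) mirrors how a receiving host peels off one layer at a time.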
The most prominent component of the Internet model is the Internet Protocol (IP), which provides addressing systems (IP addresses) for computers on the Internet. IP enables internetworking and essentially establishes the Internet itself. IP Version 4 (IPv4) is the initial version used on the first generation of today's Internet and is still in dominant use. It was designed to address up to ~4.3 billion (about 4.3 x 10^9) Internet hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which is estimated to enter its final stage in approximately 2011.[17] A new protocol version, IPv6, was developed in the mid-1990s; it provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 is currently in the commercial deployment phase around the world, and Internet address registries (RIRs) have begun to urge all resource managers to plan for rapid adoption and conversion.[18]
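The sizes of the two address spaces can be checked with Python's standard `ipaddress` module; the addresses used below come from blocks reserved for documentation.

```python
import ipaddress

# IPv4 addresses are 32-bit numbers, so the total space is 2**32.
print(2 ** 32)  # 4294967296, roughly 4.3 billion

addr = ipaddress.ip_address("192.0.2.1")     # an IPv4 address
net = ipaddress.ip_network("192.0.2.0/24")   # a block of 256 addresses
print(addr in net)        # True
print(net.num_addresses)  # 256

# IPv6 addresses are 128 bits, giving a vastly larger space.
v6 = ipaddress.ip_address("2001:db8::1")
print(v6.version)  # 6
print(2 ** 128 > 2 ** 32)  # True
```

The contrast between 2^32 and 2^128 is the whole motivation for IPv6: the larger space removes the pressure of address exhaustion described above.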
IPv6 is not interoperable with IPv4. It essentially establishes a "parallel" version of the Internet not directly accessible with IPv4 software. This means software upgrades or translator facilities are necessary for every networking device that needs to communicate on the IPv6 Internet. Most modern computer operating systems already support both versions of the Internet Protocol; network infrastructures, however, are still lagging in this development. Aside from the complex physical connections that make up its infrastructure, the Internet is facilitated by bilateral or multilateral commercial contracts (e.g., peering agreements) and by technical specifications or protocols that describe how to exchange data over the network. Indeed, the Internet is defined by its interconnections and routing policies.
The Internet structure and its usage characteristics have been studied extensively. It has been determined that both the Internet IP routing structure and hypertext links of the World Wide Web are examples of scale-free networks. Similar to the way the commercial Internet providers connect via Internet exchange points, research networks tend to interconnect into large subnetworks such as GEANT, GLORIAD, Internet2 (successor of the Abilene Network), and the UK's national research and education network JANET. These in turn are built around smaller networks (see also the list of academic computer network organizations).
Many computer scientists describe the Internet as a "prime example of a large-scale, highly engineered, yet highly complex system".[19] The Internet is extremely heterogeneous; for instance, data transfer rates and physical characteristics of connections vary widely. The Internet exhibits "emergent phenomena" that depend on its large-scale organization. For example, data transfer rates exhibit temporal self-similarity. The principles of the routing and addressing methods for Internet traffic reach back to their origins in the 1960s, when the eventual scale and popularity of the network could not be anticipated. Thus, the possibility of developing alternative structures is being investigated.
The Internet structure and its usage characteristics have been studied extensively. It has been determined that both the Internet IP routing structure and hypertext links of the World Wide Web are examples of scale-free networks. Similar to the way the commercial Internet providers connect via Internet exchange points, research networks tend to interconnect into large subnetworks such as GEANT, GLORIAD, Internet2 (successor of the Abilene Network), and the UK's national research and education network JANET. These in turn are built around smaller networks (see also the list of academic computer network organizations).
Many computer scientists describe the Internet as a "prime example of a large-scale, highly engineered, yet highly complex system".[19] The Internet is extremely heterogeneous; for instance, data transfer rates and physical characteristics of connections vary widely. The Internet exhibits "emergent phenomena" that depend on its large-scale organization. For example, data transfer rates exhibit temporal self-similarity. The principles of the routing and addressing methods for traffic in the Internet reach back to their origins the 1960s when the eventual scale and popularity of the network could not be anticipated. Thus, the possibility of developing alternative structures is investigated
Most traditional communications media including telephone, music, film, and television are being reshaped or redefined by the Internet. Newspaper, book and other print publishing are having to adapt to Web sites and blogging. The Internet has enabled or accelerated new forms of human interactions through instant messaging, Internet forums, and social networking. Online shopping has boomed both for major retail outlets and small artisans and traders. Business-to-business and financial services on the Internet affect supply chains across entire industries.
The origins of the Internet reach back to the 1960s with both private and United States military research into robust, fault-tolerant, and distributed computer networks. The funding of a new U.S. backbone by the National Science Foundation, as well as private funding for other commercial backbones, led to worldwide participation in the development of new networking technologies, and the merger of many networks. The commercialization of what was by then an international network in the mid 1990s resulted in its popularization and incorporation into virtually every aspect of modern human life. As of 2009, an estimated quarter of Earth's population used the services of the Internet.
The Internet has no centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own standards. Only the overarching definitions of the two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System, are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.
Internet is a short form of the technical term "internetwork",[1] the result of interconnecting computer networks with special gateways (routers). The Internet is also often referred to as the net.
The term the Internet, when referring to the entire global system of IP networks, has traditionally been treated as a proper noun and written with an initial capital letter. In the media and popular culture a trend has developed to regard it as a generic term or common noun and thus write it as "the internet", without capitalization.
The terms Internet and World Wide Web are often used in everyday speech without much distinction. However, the Internet and the World Wide Web are not one and the same. The Internet is a global data communications system. It is a hardware and software infrastructure that provides connectivity between computers. In contrast, the Web is one of the services communicated via the Internet. It is a collection of interconnected documents and other resources, linked by hyperlinks and URLs.[2]
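The distinction can be made concrete with a short sketch in Python. The Web layer is the HTTP request naming a resource; actually carrying those bytes between machines is the Internet's job. The URL and host below are illustrative placeholders, not real endpoints being contacted.

```python
from urllib.parse import urlparse

# A URL names a Web resource (example.com is a placeholder host).
url = "http://example.com/index.html"
parts = urlparse(url)

# The Web layer: an HTTP request for that resource, as plain text.
request = (
    f"GET {parts.path} HTTP/1.1\r\n"
    f"Host: {parts.hostname}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

# The Internet layer would carry these bytes between computers:
# opening a TCP connection to the host and sending request.encode()
# is the infrastructure's job, not the Web's.
print(parts.hostname)            # example.com
print(request.splitlines()[0])   # GET /index.html HTTP/1.1
```

Nothing is transmitted here; the point is only that the Web is data and conventions riding on top of the Internet's delivery machinery.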
In many technical illustrations, when the precise location or interrelation of Internet resources is not important, extended networks such as the Internet are often depicted as a cloud.[3] This visual image has been formalized in the newer concept of cloud computing.
The USSR's launch of Sputnik spurred the United States to create the Advanced Research Projects Agency (ARPA or DARPA) in February 1958 to regain a technological lead.[4][5] ARPA created the Information Processing Technology Office (IPTO) to further the research of the Semi Automatic Ground Environment (SAGE) program, which had networked country-wide radar systems together for the first time. The IPTO's purpose was to find ways to address the US Military's concern about survivability of their communications networks, and as a first step interconnect their computers at the Pentagon, Cheyenne Mountain, and Strategic Air Command headquarters (SAC). J. C. R. Licklider, a promoter of universal networking, was selected to head the IPTO. Licklider moved from the Psycho-Acoustic Laboratory at Harvard University to MIT in 1950, after becoming interested in information technology. At MIT, he served on a committee that established Lincoln Laboratory and worked on the SAGE project. In 1957 he became a Vice President at BBN, where he bought the first production PDP-1 computer and conducted the first public demonstration of time-sharing.
Professor Leonard Kleinrock with one of the first ARPANET Interface Message Processors at UCLA
At the IPTO, Licklider's successor Ivan Sutherland in 1965 got Lawrence Roberts to start a project to make a network, and Roberts based the technology on the work of Paul Baran,[6] who had written an exhaustive study for the United States Air Force that recommended packet switching (as opposed to circuit switching) to achieve better network robustness and disaster survivability. Roberts had worked at the MIT Lincoln Laboratory, originally established to work on the design of the SAGE system. UCLA professor Leonard Kleinrock had provided the theoretical foundations for packet networks in 1962, and later, in the 1970s, for hierarchical routing, concepts which have been the underpinning of the development towards today's Internet.
Sutherland's successor Robert Taylor convinced Roberts to build on his early packet switching successes and come and be the IPTO Chief Scientist. Once there, Roberts prepared a report called Resource Sharing Computer Networks which was approved by Taylor in June 1968 and laid the foundation for the launch of the working ARPANET the following year.
After much work, the first two nodes of what would become the ARPANET were interconnected between Kleinrock's Network Measurement Center at the UCLA's School of Engineering and Applied Science and Douglas Engelbart's NLS system at SRI International (SRI) in Menlo Park, California, on 29 October 1969. The third site on the ARPANET was the Culler-Fried Interactive Mathematics center at the University of California at Santa Barbara, and the fourth was the University of Utah Graphics Department. In an early sign of future growth, there were already fifteen sites connected to the young ARPANET by the end of 1971.
The ARPANET was one of the predecessor networks of today's Internet. In an independent development, Donald Davies at the UK National Physical Laboratory had also arrived at the concept of packet switching in the early 1960s, first giving a talk on the subject in 1965, after which the teams working in this new field on the two sides of the Atlantic first became acquainted. It was Davies' coinage of the words "packet" and "packet switching" that was adopted as the standard terminology. Davies also built a packet-switched network in the UK, called the Mark I, in 1970.[7] Bolt Beranek and Newman (BBN), the private contractor for ARPANET, set out to create a separate commercial version after the establishment of "value added carriers" was legalized in the U.S.[8] The network they established was called Telenet and began operation in 1975, installing free public dial-up access in cities throughout the U.S. Telenet was the first packet-switching network open to the general public.[9]
Following the demonstration that packet switching worked on the ARPANET, the British Post Office, Telenet, DATAPAC and TRANSPAC collaborated to create the first international packet-switched network service. In the UK, this was referred to as the International Packet Switched Service (IPSS), in 1978. The collection of X.25-based networks grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981. The X.25 packet switching standard was developed in the CCITT (now called ITU-T) around 1976.
A plaque commemorating the birth of the Internet at Stanford University
X.25 was independent of the TCP/IP protocols that arose from the experimental work of DARPA on the ARPANET, Packet Radio Net and Packet Satellite Net during the same time period.
The early ARPANET ran on the Network Control Program (NCP), implementing the host-to-host connectivity and switching layers of the protocol stack, designed and first implemented in December 1970 by a team called the Network Working Group (NWG) led by Steve Crocker. To respond to the network's rapid growth as more and more locations connected, Vinton Cerf and Robert Kahn developed the first description of the now widely used TCP protocols during 1973 and published a paper on the subject in May 1974. Use of the term "Internet" to describe a single global TCP/IP network originated in December 1974 with the publication of RFC 675, the first full specification of TCP that was written by Vinton Cerf, Yogen Dalal and Carl Sunshine, then at Stanford University. During the next nine years, work proceeded to refine the protocols and to implement them on a wide range of operating systems. The first TCP/IP-based wide-area network was operational by 1 January 1983 when all hosts on the ARPANET were switched over from the older NCP protocols. In 1985, the United States' National Science Foundation (NSF) commissioned the construction of the NSFNET, a university 56 kilobit/second network backbone using computers called "fuzzballs" by their inventor, David L. Mills. The following year, NSF sponsored the conversion to a higher-speed 1.5 megabit/second network. A key decision to use the DARPA TCP/IP protocols was made by Dennis Jennings, then in charge of the Supercomputer program at NSF.
The opening of the NSFNET to other networks began in 1988.[10] The US Federal Networking Council approved the interconnection of the NSFNET to the commercial MCI Mail system in that year and the link was made in the summer of 1989. Other commercial electronic mail services were soon connected, including OnTyme, Telemail and Compuserve. In that same year, three commercial Internet service providers (ISPs) began operations: UUNET, PSINet, and CERFNET. Important, separate networks that offered gateways into, then later merged with, the Internet include Usenet and BITNET. Various other commercial and educational networks, such as Telenet (by that time renamed to Sprintnet), Tymnet, Compuserve and JANET were interconnected with the growing Internet in the 1980s as the TCP/IP protocol became increasingly popular. The adaptability of TCP/IP to existing communication networks allowed for rapid growth. The open availability of the specifications and reference code permitted commercial vendors to build interoperable network components, such as routers, making standardized network gear available from many companies. This aided in the rapid growth of the Internet and the proliferation of local-area networking. It seeded the widespread implementation and rigorous standardization of TCP/IP on UNIX and virtually every other common operating system.
This NeXT Computer was used by Sir Tim Berners-Lee at CERN and became the world's first Web server.
Although the basic applications and guidelines that make the Internet possible had existed for almost two decades, the network did not gain a public face until the 1990s. On 6 August 1991, CERN, a pan-European organization for particle research, publicized the new World Wide Web project. The Web was invented by British scientist Tim Berners-Lee in 1989. An early popular web browser was ViolaWWW, patterned after HyperCard and built using the X Window System. It was eventually replaced in popularity by the Mosaic web browser. In 1993, the National Center for Supercomputing Applications at the University of Illinois released version 1.0 of Mosaic, and by late 1994 there was growing public interest in the previously academic, technical Internet. By 1996 usage of the word Internet had become commonplace, and consequently, so had its use as a synecdoche in reference to the World Wide Web.
Meanwhile, over the course of the decade, the Internet successfully accommodated the majority of previously existing public computer networks (although some networks, such as FidoNet, have remained separate). During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%.[11] This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary open nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network.[12] The estimated population of Internet users is 1.97 billion as of 30 June 2010.[13]
From 2009 onward, the Internet was expected to grow significantly in Brazil, Russia, India, China, and Indonesia (the BRICI countries). These countries have large populations and moderate to high economic growth, but still low Internet penetration rates. In 2009, the BRICI countries represented about 45 percent of the world's population and had approximately 610 million Internet users; it was projected that by 2015 the number of Internet users in the BRICI countries would double to 1.2 billion, and triple in Indonesia.
The complex communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. While the hardware can often be used to support other software systems, it is the design and the rigorous standardization process of the software architecture that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been delegated to the Internet Engineering Task Force (IETF).[16] The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. Resulting discussions and final standards are published in a series of publications, each called a Request for Comments (RFC), freely available on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices (BCP) when implementing Internet technologies.
The Internet Standards describe a framework known as the Internet Protocol Suite. This is a model architecture that divides methods into a layered system of protocols (RFC 1122, RFC 1123). The layers correspond to the environment or scope in which their services operate. At the top is the Application Layer, the space for the application-specific networking methods used in software applications, e.g., a web browser program. Below this top layer, the Transport Layer connects applications on different hosts via the network (e.g., client–server model) with appropriate data exchange methods. Underlying these layers are the core networking technologies, consisting of two layers. The Internet Layer enables computers to identify and locate each other via Internet Protocol (IP) addresses, and allows them to connect to one another via intermediate (transit) networks. Lastly, at the bottom of the architecture is a software layer, the Link Layer, that provides connectivity between hosts on the same local network link, such as a local area network (LAN) or a dial-up connection. The model, also known as TCP/IP, is designed to be independent of the underlying hardware, which the model therefore does not concern itself with in any detail. Other models have been developed, such as the Open Systems Interconnection (OSI) model; they are not compatible with TCP/IP in the details of description or implementation, but many similarities exist, and the TCP/IP protocols are usually included in discussions of OSI networking.
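The layer division can be made tangible with a small sketch in Python. A local socket pair stands in for a real network connection, so nothing below the Application Layer has to be simulated by hand; delivery is the stack's job, and the program only decides what the bytes mean. This is an illustration of the layering, not of any particular protocol implementation.

```python
import socket

# A connected pair of local sockets; the OS supplies the Transport,
# Internet, and Link Layer machinery underneath (RFC 1122 terms).
a, b = socket.socketpair()

# Application Layer: this program chooses the meaning of the bytes
# (here, the first line of an HTTP-style request).
a.sendall(b"GET /index.html HTTP/1.1\r\n\r\n")

# The lower layers deliver the payload; the application reads it out.
data = b.recv(1024)
print(data.decode().splitlines()[0])   # GET /index.html HTTP/1.1

a.close()
b.close()
```

The application code never touches addresses, routes, or frames; that separation of concerns is exactly what the layered model describes.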
The most prominent component of the Internet model is the Internet Protocol (IP), which provides addressing systems (IP addresses) for computers on the Internet. IP enables internetworking and essentially establishes the Internet itself. IP Version 4 (IPv4) is the initial version used on the first generation of today's Internet and is still in dominant use. It was designed to address up to ~4.3 billion (4.3×10⁹) Internet hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which is estimated to enter its final stage in approximately 2011.[17] A new protocol version, IPv6, was developed in the mid 1990s; it provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 is currently in the commercial deployment phase around the world, and Internet address registries (RIRs) have begun to urge all resource managers to plan rapid adoption and conversion.[18]
IPv6 is not interoperable with IPv4. It essentially establishes a "parallel" version of the Internet not directly accessible with IPv4 software. This means software upgrades or translator facilities are necessary for every networking device that needs to communicate on the IPv6 Internet. Most modern computer operating systems are already converted to operate with both versions of the Internet Protocol. Network infrastructures, however, are still lagging in this development. Aside from the complex physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe how to exchange data over the network. Indeed, the Internet is defined by its interconnections and routing policies.
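One device dual-stack software uses to bridge the two families is worth sketching: an IPv4 address can be written in an IPv6 "IPv4-mapped" form (`::ffff:a.b.c.d`, per RFC 4291), though the reverse does not hold. The addresses below are reserved documentation addresses, used only for illustration:

```python
import ipaddress

# An IPv4 documentation address and its IPv4-mapped IPv6 spelling.
v4 = ipaddress.IPv4Address("192.0.2.1")
mapped = ipaddress.IPv6Address("::ffff:192.0.2.1")

# Dual-stack software can recover the embedded IPv4 address.
print(mapped.ipv4_mapped)   # 192.0.2.1
print(mapped.ipv4_mapped == v4)   # True

# The reverse is not automatic: a native IPv6 address carries no
# IPv4 equivalent, which is why translators are needed.
print(ipaddress.IPv6Address("2001:db8::1").ipv4_mapped)   # None
```

This asymmetry is the protocol-level face of the incompatibility described above: every IPv4 host can be named from IPv6, but not the other way around.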
The Internet structure and its usage characteristics have been studied extensively. It has been determined that both the Internet IP routing structure and hypertext links of the World Wide Web are examples of scale-free networks. Similar to the way the commercial Internet providers connect via Internet exchange points, research networks tend to interconnect into large subnetworks such as GEANT, GLORIAD, Internet2 (successor of the Abilene Network), and the UK's national research and education network JANET. These in turn are built around smaller networks (see also the list of academic computer network organizations).
Many computer scientists describe the Internet as a "prime example of a large-scale, highly engineered, yet highly complex system".[19] The Internet is extremely heterogeneous; for instance, data transfer rates and physical characteristics of connections vary widely. The Internet exhibits "emergent phenomena" that depend on its large-scale organization. For example, data transfer rates exhibit temporal self-similarity. The principles of the routing and addressing methods for Internet traffic reach back to their origins in the 1960s, when the eventual scale and popularity of the network could not be anticipated. Thus, the possibility of developing alternative structures is being investigated.
History of Computers
The development of the modern-day computer was the result of advances in technology and man's need to quantify. Papyrus helped early man to record language and numbers. The abacus was one of the first counting machines.
Some of the earlier mechanical counting machines lacked the technology to make the design work. For instance, some had parts made of wood prior to metal manipulation and manufacturing. Imagine the wear on wooden gears.
Charles Babbage (1791-1871) was born 26 December 1791, the son of a London banker. In his youth he had his own private instructor in algebra, and by the time he attended Trinity College, Cambridge, he was advanced beyond his tutors in mathematics. In 1811 he co-founded the Analytical Society to promote continental mathematics and to reform the Newtonian mathematics then taught at the University. He worked on the calculus of functions in his twenties. After being elected a Fellow of the Royal Society in 1816, Babbage played a role in the founding of the Astronomical Society in 1820. In 1821 he invented the Difference Engine to compile mathematical tables; a working portion of it was completed in 1832. He then began work on a machine that could perform any type of calculation, the Analytical Engine, which occupied him until about 1856.
The first computers were people! That is, electronic computers (and the earlier mechanical computers) were given this name because they performed the work that had previously been assigned to people. "Computer" was originally a job title: it was used to describe those human beings (predominantly women) whose job it was to perform the repetitive calculations required to compute such things as navigational tables, tide charts, and planetary positions for astronomical almanacs. Imagine you had a job where hour after hour, day after day, you were to do nothing but compute multiplications. Boredom would quickly set in, leading to carelessness, leading to mistakes. And even on your best days you wouldn't be producing answers very fast. Therefore, inventors have been searching for hundreds of years for a way to mechanize (that is, find a mechanism that can perform) this task.
The abacus was an early aid for mathematical computations. Its only value is that it aids the memory of the human performing the calculation. A skilled abacus operator can work on addition and subtraction problems at the speed of a person equipped with a hand calculator (multiplication and division are slower). The abacus is often wrongly attributed to China; in fact, the oldest surviving abacus was used around 300 B.C. by the Babylonians. The abacus is still in use today, principally in the Far East. A modern abacus consists of rings that slide over rods, but older designs date from the time when pebbles were used for counting (the word "calculus" comes from the Latin word for pebble).
In 1617 an eccentric (some say mad) Scotsman named John Napier invented logarithms, a technology that allows multiplication to be performed via addition. The magic ingredient is the logarithm of each operand, which was originally obtained from a printed table. But Napier also invented an alternative to tables, where the logarithm values were carved on ivory sticks which are now called Napier's Bones.
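Napier's trick rests on the identity log(a·b) = log(a) + log(b): take the logarithm of each operand, add, then look the sum back up in an anti-log table. A short Python sketch, with `math.exp` standing in for the anti-log table:

```python
import math

# Napier's idea: a multiplication becomes an addition of logarithms
# followed by a single anti-log lookup (here, exp()).
def multiply_via_logs(a, b):
    return math.exp(math.log(a) + math.log(b))

# Two additions of table values replace one long multiplication.
print(round(multiply_via_logs(37, 41)))   # 1517, i.e. 37 * 41
```

For a human computer with printed tables, trading one multiplication for two lookups and an addition was an enormous saving in labor and errors.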
Napier's invention led directly to the slide rule, first built in England in 1632 and still in use in the 1960s by the NASA engineers of the Mercury, Gemini, and Apollo programs which landed men on the moon.
In 1642 Blaise Pascal, at age 19, invented the Pascaline as an aid for his father, who was a tax collector. Pascal built 50 of these gear-driven one-function calculators (they could only add) but couldn't sell many because of their exorbitant cost and because they really weren't that accurate (at that time it was not possible to fabricate gears with the required precision). Up until the present age, when car dashboards went digital, the odometer portion of a car's speedometer used the very same mechanism as the Pascaline to increment the next wheel after each full revolution of the prior wheel. Pascal was a child prodigy: at the age of 12, he was discovered doing his version of Euclid's thirty-second proposition on the kitchen floor. Pascal went on to invent probability theory, the hydraulic press, and the syringe.
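The Pascaline's carry mechanism is the same ripple-carry idea as a mechanical odometer: each wheel counts 0-9, and a full revolution nudges the next wheel by one. A simplified sketch (the machine advanced wheels tooth by tooth; feeding the whole amount into the lowest wheel at once is a software shortcut):

```python
# Each wheel holds one decimal digit; a full revolution carries
# into the next wheel, just like an odometer.
def pascaline_add(wheels, amount):
    """wheels: list of digits, least significant first."""
    carry = amount
    for i in range(len(wheels)):
        total = wheels[i] + carry
        wheels[i] = total % 10    # this wheel's new position
        carry = total // 10       # overflow ripples to the next wheel
    return wheels

digits = [0] * 6                  # a 6-digit machine, all wheels at 0
pascaline_add(digits, 995)
pascaline_add(digits, 7)
print(digits)   # [2, 0, 0, 1, 0, 0] -- reads 001002, i.e. 1002
```

Adding 7 to 995 forces three consecutive carries, which is exactly the cascade of wheel turns an odometer shows rolling from 999 to 1002.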
Just a few years after Pascal, the German Gottfried Wilhelm Leibniz (co-inventor with Newton of calculus) managed to build a four-function (addition, subtraction, multiplication, and division) calculator that he called the stepped reckoner because, instead of gears, it employed fluted drums having ten flutes arranged around their circumference in a stair-step fashion. Although the stepped reckoner employed the decimal number system (each drum had 10 flutes), Leibniz was the first to advocate use of the binary number system, which is fundamental to the operation of modern computers. Leibniz is considered one of the greatest of the philosophers, but he died poor and alone.
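Why binary matters to machines is easy to show: any number can be rewritten using only the digits 0 and 1, the two states a switch, relay, or transistor can hold. A minimal conversion by repeated division by 2:

```python
# Rewrite a decimal number in Leibniz's base 2: repeatedly divide
# by 2 and collect the remainders (0 or 1) from last to first.
def to_binary(n):
    bits = ""
    while n:
        bits = str(n % 2) + bits
        n //= 2
    return bits or "0"

print(to_binary(13))   # 1101, i.e. 8 + 4 + 0 + 1
```

A machine that only needs to distinguish "on" from "off" per digit is far easier to build reliably than one with ten distinguishable states, which is why Leibniz's advocacy proved prophetic.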
In 1801 the Frenchman Joseph-Marie Jacquard invented a power loom that could base its weave upon a pattern automatically read from punched wooden cards. Jacquard's technology was a real boon to mill owners, but it put many loom operators out of work. Angry mobs smashed Jacquard looms and once attacked Jacquard himself. History is full of examples of labor unrest following technological innovation, yet most studies show that, overall, technology has actually increased the number of jobs.
By 1822 the English mathematician Charles Babbage was proposing a steam-driven calculating machine the size of a room, which he called the Difference Engine. This machine would be able to compute tables of numbers, such as logarithm tables. He obtained government funding for this project due to the importance of numeric tables in ocean navigation. By promoting their commercial and military navies, the British government had managed to become the earth's greatest empire. But in that time frame the British government was publishing a seven-volume set of navigation tables which came with a companion volume of corrections showing that the set had over 1000 numerical errors. It was hoped that Babbage's machine could eliminate errors in these types of tables. But construction of Babbage's Difference Engine proved exceedingly difficult, and the project soon became the most expensive government-funded project up to that point in English history. Ten years later the device was still nowhere near complete, acrimony abounded between all involved, and funding dried up. The device was never finished.
Babbage was not deterred, and by then was on to his next brainstorm, which he called the Analytic Engine. This device, large as a house and powered by 6 steam engines, would be more general purpose in nature because it would be programmable, thanks to the punched-card technology of Jacquard. But it was Babbage who made an important intellectual leap regarding the punched cards. In the Jacquard loom, the presence or absence of each hole in the card physically allows a colored thread to pass or stops that thread. Babbage saw that the pattern of holes could be used to represent an abstract idea such as a problem statement or the raw data required for that problem's solution. Babbage saw that there was no requirement that the problem matter itself physically pass through the holes.
Furthermore, Babbage realized that punched paper could be employed as a storage mechanism, holding computed numbers for future reference. Because of the connection to the Jacquard loom, Babbage called the two main parts of his Analytic Engine the "Store" and the "Mill", as both terms are used in the weaving industry. The Store was where numbers were held and the Mill was where they were "woven" into new results. In a modern computer these same parts are called the memory unit and the central processing unit (CPU).
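Babbage's division of labor maps directly onto a modern machine, and a toy sketch makes the correspondence visible: the Store holds numbers at numbered positions, while the Mill fetches operands, computes, and writes the result back. All names here are illustrative, not Babbage's own instruction set.

```python
# The Store: numbered columns holding values (today: memory).
store = [0.0] * 8
store[0], store[1] = 6.0, 7.0

# The Mill: fetches operands from the Store, "weaves" a new result,
# and writes it back (today: the CPU executing an instruction).
def mill(op, a, b, dest):
    if op == "add":
        store[dest] = store[a] + store[b]
    elif op == "mul":
        store[dest] = store[a] * store[b]

mill("mul", 0, 1, 2)   # one instruction: store[2] = store[0] * store[1]
print(store[2])        # 42.0
```

The essential point survives unchanged in every modern computer: data lives in one place, and a separate mechanism transforms it.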
The Analytic Engine also had a key function that distinguishes computers from calculators: the conditional statement. A conditional statement allows a program to achieve different results each time it is run. Based on the conditional statement, the path of the program (that is, what statements are executed next) can be determined based upon a condition or situation that is detected at the very moment the program is running.
You have probably observed that a modern stoplight at an intersection between a busy street and a less busy street will leave the green light on the busy street until a car approaches on the less busy street. This type of street light is controlled by a computer program that can sense the approach of cars on the less busy street. That moment when the light changes from green to red is not fixed in the program but rather varies with each traffic situation. The conditional statement in the stoplight program would be something like, "if a car approaches on the less busy street and the more busy street has already enjoyed the green light for at least a minute then move the green light to the less busy street". The conditional statement also allows a program to react to the results of its own calculations. An example would be the program that the I.R.S uses to detect tax fraud. This program first computes a person's tax liability and then decides whether to alert the police based upon how that person's tax payments compare to his obligations.
Babbage befriended Ada Byron, the daughter of the famous poet Lord Byron (Ada would later become the Countess Lady Lovelace by marriage). Though she was only 19, she was fascinated by Babbage's ideas and thru letters and meetings with Babbage she learned enough about the design of the Analytic Engine to begin fashioning programs for the still unbuilt machine. While Babbage refused to publish his knowledge for another 30 years, Ada wrote a series of "Notes" wherein she detailed sequences of instructions she had prepared for the Analytic Engine. The Analytic Engine remained unbuilt (the British government refused to get involved with this one) but Ada earned her spot in history as the first computer programmer. Ada invented the subroutine and was the first to recognize the importance of looping. Babbage himself went on to invent the modern postal system, cowcatchers on trains, and the ophthalmoscope, which is still used today to treat the eye.
The next breakthrough occurred in America. The U.S. Constitution states that a census should be taken of all U.S. citizens every 10 years in order to determine the representation of the states in Congress. While the very first census of 1790 had only required 9 months, by 1880 the U.S. population had grown so much that the count for the 1880 census took 7.5 years. Automation was clearly needed for the next census. The census bureau offered a prize for an inventor to help with the 1890 census and this prize was won by Herman Hollerith, who proposed and then successfully adopted Jacquard's punched cards for the purpose of computation.
Hollerith's invention, known as the Hollerith desk, consisted of a card reader which sensed the holes in the cards, a gear driven mechanism which could count (using Pascal's mechanism which we still see in car odometers), and a large wall of dial indicators (a car speedometer is a dial indicator) to display the results of the count.
The patterns on Jacquard's cards were determined when a tapestry was designed and then were not changed. Today, we would call this a read-only form of information storage. Hollerith had the insight to convert punched cards to what is today called a read/write technology. While riding a train, he observed that the conductor didn't merely punch each ticket, but rather punched a particular pattern of holes whose positions indicated the approximate height, weight, eye color, etc. of the ticket owner. This was done to keep anyone else from picking up a discarded ticket and claiming it was his own (a train ticket did not lose all value when it was punched because the same ticket was used for each leg of a trip). Hollerith realized how useful it would be to punch (write) new cards based upon an analysis (reading) of some other set of cards. Complicated analyses, too involved to be accomplished during a single pass thru the cards, could be accomplished via multiple passes thru the cards using newly printed cards to remember the intermediate results. Unknown to Hollerith, Babbage had proposed this long before.
Hollerith's technique was successful and the 1890 census was completed in only 3 years at a savings of 5 million dollars. Interesting aside: the reason that a person who removes inappropriate content from a book or movie is called a censor, as is a person who conducts a census, is that in Roman society the public official called the "censor" had both of these jobs.
Hollerith built a company, the Tabulating Machine Company which, after a few buyouts, eventually became International Business Machines, known today as IBM. IBM grew rapidly and punched cards became ubiquitous. Your gas bill would arrive each month with a punch card you had to return with your payment. This punch card recorded the particulars of your account: your name, address, gas usage, etc. (I imagine there were some "hackers" in these days who would alter the punch cards to change their bill). As another example, when you entered a toll way (a highway that collects a fee from each driver) you were given a punch card that recorded where you started and then when you exited from the toll way your fee was computed based upon the miles you drove. When you voted in an election the ballot you were handed was a punch card. The little pieces of paper that are punched out of the card are called "chad" and were thrown as confetti at weddings. Until recently all Social Security and other checks issued by the Federal government were actually punch cards. The check-out slip inside a library book was a punch card. Written on all these cards was a phrase as common as "close cover before striking": "do not fold, spindle, or mutilate". A spindle was an upright spike on the desk of an accounting clerk. As he completed processing each receipt he would impale it on this spike. When the spindle was full, he'd run a piece of string through the holes, tie up the bundle, and ship it off to the archives. You occasionally still see spindles at restaurant cash registers.
Some of the earliest mechanical counting machines lacked materials adequate to the design. Some, for instance, had parts made of wood before precision metal manufacturing was available; imagine the wear on wooden gears.
Charles Babbage (1791-1871) was born on 26 December 1791, the son of a London banker. In his youth he had his own private instructor in algebra, and by the time he attended Trinity College, Cambridge, his knowledge of mathematics was more advanced than that of his tutors. In 1811 he co-founded the Analytical Society to promote continental mathematics and to reform the mathematics of Newton then taught at the university. He worked on the calculus of functions in his twenties. After being elected a Fellow of the Royal Society in 1816, Babbage played a role in the founding of the Astronomical Society in 1820. In 1821 he conceived the Difference Engine to compile mathematical tables, and a working portion of it was completed in 1832. He then began work on a machine that could carry out any type of calculation, the Analytical Engine, whose design was largely complete by about 1856.
The first computers were people! That is, electronic computers (and the earlier mechanical computers) were given this name because they performed the work that had previously been assigned to people. "Computer" was originally a job title: it described those human beings (predominantly women) whose job it was to perform the repetitive calculations required to compute such things as navigational tables, tide charts, and planetary positions for astronomical almanacs. Imagine a job where, hour after hour, day after day, you did nothing but compute multiplications. Boredom would quickly set in, leading to carelessness, leading to mistakes; and even on your best days you wouldn't produce answers very fast. For hundreds of years, therefore, inventors searched for a way to mechanize this task, that is, to find a mechanism that could perform it.
The abacus was an early aid for mathematical computations. Its only value is that it aids the memory of the human performing the calculation: a skilled abacus operator can work addition and subtraction problems at the speed of a person equipped with a hand calculator (multiplication and division are slower). The abacus is often wrongly attributed to China; in fact, the oldest surviving abacus was used around 300 B.C. by the Babylonians. The abacus is still in use today, principally in the Far East. A modern abacus consists of rings that slide over rods, but older versions date from the time when pebbles were used for counting (the word "calculus" comes from the Latin word for pebble).
In 1617 an eccentric (some say mad) Scotsman named John Napier invented logarithms, a technique that allows multiplication to be performed via addition. The magic ingredient is the logarithm of each operand, which was originally obtained from a printed table. But Napier also invented an alternative to tables: the logarithm values were carved on ivory sticks which are now called Napier's Bones.
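The log trick is easy to verify for yourself. Here is a tiny Python sketch of the procedure a slide-rule or log-table user followed (the numbers 37 and 59 are arbitrary examples):

```python
import math

# Napier's trick: turn a multiplication into an addition.
a, b = 37.0, 59.0
sum_of_logs = math.log10(a) + math.log10(b)   # "add", as with a log table
product = 10 ** sum_of_logs                   # look the answer back up
print(round(product, 6))                      # 2183.0, the same as 37 * 59
```

The table lookup has been replaced by `math.log10`, but the principle of adding logarithms and then converting back is exactly the one Napier's tables mechanized.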
Napier's invention led directly to the slide rule, first built in England in 1632 and still in use in the 1960's by the NASA engineers of the Mercury, Gemini, and Apollo programs which landed men on the moon.
In 1642 Blaise Pascal, at age 19, invented the Pascaline as an aid for his father, who was a tax collector. Pascal built fifty of these gear-driven one-function calculators (they could only add) but couldn't sell many because of their exorbitant cost and because they really weren't that accurate (at that time it was not possible to fabricate gears with the required precision). Until car dashboards went digital, the odometer portion of a car's speedometer used the very same mechanism as the Pascaline to increment each wheel after a full revolution of the prior wheel. Pascal was a child prodigy: at the age of 12 he was discovered working out his own version of Euclid's thirty-second proposition on the kitchen floor. He went on to invent probability theory, the hydraulic press, and the syringe. An 8-digit version of the Pascaline and two views of a 6-digit version are shown below.
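The Pascaline's carry mechanism, the same one later used in mechanical odometers, can be sketched in a few lines of Python. This is a toy model of the behavior, not a description of the actual gearing:

```python
def increment(wheels):
    """Advance a list of decimal wheels (least-significant first) by one,
    carrying into the next wheel whenever a wheel completes a full
    revolution from 9 back to 0."""
    i = 0
    while i < len(wheels):
        wheels[i] = (wheels[i] + 1) % 10
        if wheels[i] != 0:      # no full revolution, so no carry
            break
        i += 1                  # wheel rolled over: carry to the next wheel
    return wheels

print(increment([9, 9, 0]))    # [0, 0, 1]  -- i.e. 099 + 1 = 100
```

Each wheel only ever nudges its neighbor, which is why a single turn of the lowest wheel can ripple all the way across the machine when many 9s are showing.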
This unique computer curriculum offers 3 different environments of graduated complexity: a programmable RPN (Reverse Polish Notation) calculator, an Intel 8051 microprocessor that is programmed using assembly language, and finally the high-level C and C++ languages.
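For readers unfamiliar with RPN, here is a minimal Python sketch of how such a calculator evaluates an expression using a stack. This is a generic illustration of the technique, not the curriculum's actual implementation:

```python
def eval_rpn(tokens):
    """Evaluate a Reverse Polish Notation expression: operands are pushed
    on a stack, and each operator pops two operands and pushes the result."""
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()          # right operand is on top
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# (3 + 4) * 5 written in RPN: operands first, operator after
print(eval_rpn("3 4 + 5 *".split()))  # 35.0
```

Because the operator comes after its operands, no parentheses or precedence rules are needed, which is what makes RPN a gentle first programming environment.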
Each of these 3 languages comes complete with an integrated development environment (IDE) that provides an editor, compiler, and debugger. You get fully explained solutions to fun programming projects such as a scrolling electronic signboard, a robotic mouse in a maze, an audio peak detector using an LED bar graph, and the Breakout video game. All of these example programs have been designed to be highly visual, audible, and fun. In addition to the introduction to assembly language programming and the introduction to C and C++, this curriculum offers an introduction to Windows programming and graphical user interfaces. You can find screen shots and further description of each of these programs on the Catalog page.
Why did I choose to teach assembly, C, and C++? These 3 languages are used in 89% of the embedded devices (i.e., laser printers, camcorders, MP3 players, etc.) in your home and car. In contrast, Java is employed in only 3% of embedded devices due to its poor performance.
This curriculum has been used and praised by degreed engineers who are already working in industry. It is also being used in universities, high schools, charter schools, and home schools (minimal computer savvy is required on the part of the homeschool parents!). A magazine that reviews educational software for children asked a computer systems administrator to evaluate the software and he concluded it was "brilliantly" done. Another review appeared in Jack Ganssle's column for the Nov 2005 issue of the Embedded Systems Design journal, a magazine for engineering professionals. In short, he loved it but you can read the full review for yourself here.
Just a few years after Pascal, the German Gottfried Wilhelm Leibniz (co-inventor with Newton of calculus) managed to build a four-function (addition, subtraction, multiplication, and division) calculator that he called the stepped reckoner because, instead of gears, it employed fluted drums having ten flutes arranged around their circumference in a stair-step fashion. Although the stepped reckoner employed the decimal number system (each drum had 10 flutes), Leibniz was the first to advocate use of the binary number system, which is fundamental to the operation of modern computers. Leibniz is considered one of the greatest of the philosophers, but he died poor and alone.
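Leibniz's binary idea can be illustrated with a short Python sketch using the standard divide-by-two method (a modern illustration of the concept, not anything Leibniz himself wrote):

```python
def to_binary(n):
    """Convert a non-negative integer to a binary string by repeatedly
    dividing by 2 and collecting the remainders."""
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits   # remainder becomes the next bit
        n //= 2
    return bits or "0"

print(to_binary(13))       # 1101, because 8 + 4 + 0 + 1 = 13
print(int("1101", 2))      # 13, converting back with Python's built-in
```

Only the digits 0 and 1 are ever needed, which is exactly why the scheme maps so naturally onto hardware built from two-state switches.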
Jacquard's technology was a real boon to mill owners, but put many loom operators out of work. Angry mobs smashed Jacquard looms and once attacked Jacquard himself. History is full of examples of labor unrest following technological innovation yet most studies show that, overall, technology has actually increased the number of jobs.
By 1822 the English mathematician Charles Babbage was proposing a steam driven calculating machine the size of a room, which he called the Difference Engine. This machine would be able to compute tables of numbers, such as logarithm tables. He obtained government funding for this project due to the importance of numeric tables in ocean navigation. By promoting their commercial and military navies, the British government had managed to become the earth's greatest empire. But in that time frame the British government was publishing a seven volume set of navigation tables which came with a companion volume of corrections which showed that the set had over 1000 numerical errors. It was hoped that Babbage's machine could eliminate errors in these types of tables. But construction of Babbage's Difference Engine proved exceedingly difficult and the project soon became the most expensive government funded project up to that point in English history. Ten years later the device was still nowhere near complete, acrimony abounded between all involved, and funding dried up. The device was never finished.
Babbage was not deterred, and by then was on to his next brainstorm, which he called the Analytic Engine. This device, as large as a house and powered by 6 steam engines, would be more general purpose in nature because it would be programmable, thanks to the punched-card technology of Jacquard. But it was Babbage who made an important intellectual leap regarding the punched cards. In the Jacquard loom, the presence or absence of each hole in the card physically allows a colored thread to pass or stops that thread. Babbage saw that the pattern of holes could instead represent an abstract idea, such as a problem statement or the raw data required for that problem's solution; there was no requirement that the problem matter itself physically pass through the holes.
Furthermore, Babbage realized that punched paper could be employed as a storage mechanism, holding computed numbers for future reference. Because of the connection to the Jacquard loom, Babbage called the two main parts of his Analytic Engine the "Store" and the "Mill", as both terms are used in the weaving industry. The Store was where numbers were held and the Mill was where they were "woven" into new results. In a modern computer these same parts are called the memory unit and the central processing unit (CPU).
The Analytic Engine also had a key feature that distinguishes computers from calculators: the conditional statement. A conditional statement allows a program to behave differently each time it is run: the path of the program (that is, which statements are executed next) is determined by a condition or situation detected at the very moment the program is running.
You have probably observed that a modern stoplight at the intersection of a busy street and a less busy street will leave the green light on the busy street until a car approaches on the less busy one. This type of stoplight is controlled by a computer program that can sense approaching cars on the less busy street. The moment when the light changes from green to red is not fixed in the program but varies with each traffic situation. The conditional statement in the stoplight program would be something like: "if a car approaches on the less busy street, and the busier street has already enjoyed the green light for at least a minute, then move the green light to the less busy street". The conditional statement also allows a program to react to the results of its own calculations. An example is the program the I.R.S. uses to detect tax fraud: it first computes a person's tax liability and then decides whether to flag the return based upon how that person's tax payments compare to his obligations.
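The stoplight rule can be written as a one-line conditional. In this Python sketch the function and parameter names are invented for illustration; a real traffic controller is far more elaborate:

```python
def update_light(car_waiting_on_side_street, green_seconds_on_main):
    """Decide which street gets the green light, following the rule:
    switch only if a car is waiting AND the main street has already
    enjoyed the green for at least a minute."""
    if car_waiting_on_side_street and green_seconds_on_main >= 60:
        return "side street"
    return "main street"

print(update_light(True, 75))    # side street finally gets the green
print(update_light(True, 30))    # main street keeps it: not a full minute yet
print(update_light(False, 120))  # no car waiting, so no reason to switch
```

The program's path genuinely differs from run to run, which is exactly the property that separates a computer from a fixed-function calculator.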
Babbage befriended Ada Byron, the daughter of the famous poet Lord Byron (Ada would later become Countess of Lovelace by marriage). Though she was only 19, she was fascinated by Babbage's ideas, and through letters and meetings with Babbage she learned enough about the design of the Analytic Engine to begin fashioning programs for the still-unbuilt machine. While Babbage refused to publish his knowledge for another 30 years, Ada wrote a series of "Notes" wherein she detailed sequences of instructions she had prepared for the Analytic Engine. The Analytic Engine remained unbuilt (the British government refused to get involved with this one), but Ada earned her spot in history as the first computer programmer: she invented the subroutine and was the first to recognize the importance of looping. Babbage himself went on to invent the modern postal system, cowcatchers on trains, and the ophthalmoscope, which is still used today to examine the eye.
The next breakthrough occurred in America. The U.S. Constitution states that a census should be taken of all U.S. citizens every 10 years in order to determine the representation of the states in Congress. While the very first census of 1790 had required only 9 months, by 1880 the U.S. population had grown so much that the count for the 1880 census took 7.5 years. Automation was clearly needed for the next census. The census bureau offered a prize to the inventor who could best help with the 1890 census, and this prize was won by Herman Hollerith, who proposed and then successfully adapted Jacquard's punched cards for the purpose of computation.
Hollerith's invention, known as the Hollerith desk, consisted of a card reader which sensed the holes in the cards, a gear driven mechanism which could count (using Pascal's mechanism which we still see in car odometers), and a large wall of dial indicators (a car speedometer is a dial indicator) to display the results of the count.
The patterns on Jacquard's cards were determined when a tapestry was designed and then were never changed. Today we would call this a read-only form of information storage. Hollerith had the insight to convert punched cards into what is today called a read/write technology. While riding a train, he observed that the conductor didn't merely punch each ticket but punched a particular pattern of holes whose positions indicated the approximate height, weight, eye color, etc. of the ticket owner. This was done to keep anyone else from picking up a discarded ticket and claiming it as his own (a train ticket did not lose all value when punched, because the same ticket was used for each leg of a trip). Hollerith realized how useful it would be to punch (write) new cards based upon an analysis (reading) of some other set of cards. Complicated analyses, too involved to be accomplished during a single pass through the cards, could be accomplished via multiple passes, using newly punched cards to remember the intermediate results. Unknown to Hollerith, Babbage had proposed this long before.
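Hollerith's multi-pass idea, writing intermediate results so that a later pass can read them back, can be sketched in Python. The records and field names below are invented for illustration, and the intermediate "cards" are just an in-memory dictionary:

```python
# Invented census-style records: (state, number of people on the card)
records = [("NY", 3), ("NY", 5), ("OH", 2), ("OH", 4), ("NY", 1)]

# Pass 1: tally each state's count onto intermediate "cards"
intermediate = {}
for state, count in records:
    intermediate[state] = intermediate.get(state, 0) + count

# Pass 2: read the intermediate cards back to find the grand total
grand_total = sum(intermediate.values())

print(intermediate)   # {'NY': 9, 'OH': 6}
print(grand_total)    # 15
```

The point is that the second pass never touches the original records; it works entirely from results the first pass "punched", which is what let card machines tackle analyses too involved for a single pass.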
Hollerith's technique was successful and the 1890 census was completed in only 3 years at a savings of 5 million dollars. Interesting aside: the reason that a person who removes inappropriate content from a book or movie is called a censor, as is a person who conducts a census, is that in Roman society the public official called the "censor" had both of these jobs.
Hollerith built a company, the Tabulating Machine Company, which, after a few buyouts, eventually became International Business Machines, known today as IBM. IBM grew rapidly, and punched cards became ubiquitous. Your gas bill would arrive each month with a punch card you had to return with your payment; this card recorded the particulars of your account: your name, address, gas usage, etc. (I imagine there were some "hackers" in those days who would alter the punch cards to change their bill.) As another example, when you entered a tollway (a highway that collects a fee from each driver) you were given a punch card that recorded where you started, and when you exited your fee was computed based upon the miles you drove. When you voted in an election, the ballot you were handed was a punch card. The little pieces of paper punched out of the card are called "chad" and were thrown as confetti at weddings. Until recently, all Social Security and other checks issued by the Federal government were actually punch cards, and the check-out slip inside a library book was a punch card.

Written on all these cards was a phrase as common as "close cover before striking": "do not fold, spindle, or mutilate". A spindle was an upright spike on the desk of an accounting clerk. As he completed processing each receipt he would impale it on this spike, and when the spindle was full he'd run a piece of string through the holes, tie up the bundle, and ship it off to the archives. You occasionally still see spindles at restaurant cash registers.
operating system
An operating system is the most important piece of software running on a computer. Many people have used a computer, and most have used an application program such as Microsoft Word or Excel. Application programs interact with the operating system, which is the program that runs whenever the computer is switched on. Without an operating system installed, a computer is very inhospitable to use: digital computers represent everything as numbers, and it is the operating system that lets us do productive work such as writing letters or surfing the web. The operating system also provides techniques to overlap input and output with processing: while one program is receiving input from the keyboard, another can be writing to a file while yet another processes. This improves the efficiency of the computer.

A typical 3.5 inch floppy disk holds 1.44 MB of data. Before data can be stored on a disk it must be formatted, which creates a magnetic map of the disk surface so that data can be read from or written to the disk quickly. Because of this fast access to stored data, floppy disks were for many years among the most common ways of storing data.

Software is the name for all of the programs that are needed to run the hardware; without software, the hardware would be useless. A software package is a program, or set of programs, together with a full set of documentation. Software programs are usually one of two types: systems programs or applications. A systems program is one that controls the computer system or provides facilities; computer manufacturers often supply systems software when you purchase your computer. Utility programs and the operating system are both examples of systems programs. A computer is a complex assembly of different components.
These components include processing and memory chips, input and output devices, and backing stores. All these components have different functions and run at different speeds. The operating system is responsible for the co-ordination of all the components so that the computer as a whole works efficiently. It also provides the communication between the computer and the user, enabling the computer to be used in the most efficient way possible, and it allocates memory space to programs and shares processor time among them.

Applications packages - An applications package is a complete program, or set of programs, to carry out a task such as stock control or wage calculations. Applications packages can be bespoke (made to order) or off the shelf (ready made).

Bespoke - Bespoke application packages are custom made for an individual or company, such as an information system for a large insurance company. Bespoke packages are widely used in the business world because they can be tailored to specific goals and requirements. However, this type of applications package is very expensive, requires professional programmers to create it, and possibly support staff to be on hand in case of any problems.

Off the shelf - Off-the-shelf application packages are ready to use as soon as they have been installed on your computer. They are relatively cheap to purchase compared to bespoke packages and can be customised to meet requirements. There is a wide range of off-the-shelf packages available, and they can be used to carry out a huge number of tasks.

Review of MusicMatch Jukebox - MusicMatch Jukebox is a software package which enables you to listen to and record MP3s. The basic package can be downloaded from the Internet for free, and the plus package can be purchased for around £15. MusicMatch frequently updates its software, and updates can be downloaded free of charge.
When you first install MusicMatch it gives you the option of searching through your hard drive and putting all of your MP3s into the music library. I found this a useful feature that other MP3 packages do not usually have. You can also classify all of your music, as each MP3 can be given a tag which includes the artist's name, song title, album, and genre; MusicMatch will even connect to the Internet to look up the tags for you. This facility enables you to choose a particular genre and let the computer select the music for you. MusicMatch also makes recording MP3s very easy and relatively fast. However, I have discovered that MusicMatch isn't without problems, one of which is the annoying fact that it always tries to connect to the Internet, and it is slightly confusing to use. It has no real music download (from the Internet) facility, so the only time I use the Internet with this package is for listening to the radio. Finally, there is no audio equaliser, which I think would be a nice addition to the package.

Review of Windows Media Player - Windows Media Player can be used for playing, recording, and organising multimedia on your computer. It can be used to listen to and record MP3s, watch films, and listen to the radio on the Internet. It supports many different file formats, including Real Audio, Real Video, MPEG-1, MPEG-2, and MP3. Windows Media Player is extremely easy to use, as it has the same layout as other Microsoft programs such as MS Word, where you go to the File menu and then to Open. Like MusicMatch, it is capable of searching your hard drive for media files and adding them to your library, but unlike MusicMatch it also has a 10-band audio equaliser, which I find gives it much better sound quality. Windows Media Player usually comes installed on your computer if you have a Microsoft operating system; if you do not, or you have an older version, it can be downloaded for free from the Internet.
Windows Media Player also has visual displays that can be shown on your screen whilst you are listening to music. I think the downside of this package is that you cannot easily play particular genres or 'moods' of music; instead you have to compile playlists, which is time consuming. Also, you can only connect to a limited number of radio stations, which I think has something to do with it only supporting Microsoft-format files.

TASK 2 - What is a computer network? Explain the terms LAN & WAN. Computers that are connected over a network make the exchange of information easier and faster. The most basic type of network would be two computers linked together by a cable and communicating with each other. Local Area Network (LAN) - A local area network is a ...
computer hardware
In today’s technology, information systems are a vital ingredient in meeting the needs of any organization. The objective of computer hardware is to support the organization's need to process data in a timely manner and as accurately as possible. The accuracy of data is very important, and computer hardware components include devices that perform the following functions: data input, data output, data storage, and data processing. These devices accommodate the organization's need to enter data into the information system.
The accuracy of data input is very important, and the process must be efficient and timely to meet the organization's needs. “Organizations use a variety of measures to gauge processing speed.” (Stair & Reynolds, 2006, p. 50, chap. 2). The method of data input depends on the situation. For example, optical data readers are probably best for printed questionnaires, because most questionnaires are completed by filling in bubbles with a pencil; using optical data readers gets the questionnaire information into the system quickly, giving managers immediate access to it. Voice recognition may be more appropriate for a telephone survey, because such a device recognizes human speech, saving time for both the organization and the customer taking the survey. Magnetic ink character recognition is best for bank checks, helping to handle the workload and process the checks quickly and efficiently. The point-of-sale device is most efficient for retail transactions, computing total charges, including taxes, and making the process faster and more efficient.
The convenience and quality of the output are also important. The method of output used, like that of input, also depends on the situation. Organic light-emitting diodes are probably the best devices to use for color photographs because they can provide sharper and brighter...
computer software
"From sixth graders to first-year medical students, we get consistently good results," said Thomas K. Landauer, a CU-Boulder psychology professor who has worked on the technology behind the program for 10 years. "It's ready."
The computer software, called Intelligent Essay Assessor, uses mathematical analysis to measure the quality of knowledge expressed in essays. It is the only automatic method for scoring the knowledge content of essays that has been extensively tested and published in peer-reviewed journals.
The system was developed by Landauer, Darrell Laham, a CU-Boulder doctoral student and Peter W. Foltz, an assistant professor of psychology at NMSU. They will discuss the system Thursday, April 16, during the annual meeting of the American Educational Research Association in San Diego.
"We are continually surprised at how well it works," said Landauer, who started on the project as director of cognitive science research at Bellcore.
The grading system has important implications for assessing student writing and helping students improve their writing, Foltz said. In one of his undergraduate psychology classes at NMSU last fall, Foltz tested a version of the program.
"Students submitted essays to a web page and received immediate feedback about the estimated grade for their essays, and suggestions about what was missing," Foltz said. "Students could revise their essays and resubmit them as many times as they wanted. The students' essays all improved with each revision."
Foltz also gave students the choice of having their essays graded by a human or by the computer. "They all chose to have the computer do the grading," he said.
Educators laud essay exams because they provide a better assessment of students' knowledge than other types of tests. A huge drawback is that the tests are time-consuming and difficult to grade fairly and accurately, particularly for large classes or nationally administered exams.
But computer-based evaluations of student writing are becoming increasingly feasible because of the growing numbers of students who write using computers. The researchers have applied for a patent on their software.
The new system requires a computer with about 20 times the memory of an ordinary PC to do the statistical analysis that it needs to "understand" essays. It uses Latent Semantic Analysis, a new type of artificial intelligence that is much like a neural network. "In a sense, it tries to mimic the function of the human brain," Laham said.
First the software program is "fed" information about a topic in the form of 50,000 to 10 million words from on-line textbooks or other sources. It learns from the text and then assigns a mathematical degree of similarity or "distance" between the meaning of each word and any other word. This allows students to use different words that mean the same thing and receive the same score. For example, they could use "physician" instead of "doctor."
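The idea behind this training step can be illustrated with a toy version of Latent Semantic Analysis. The sketch below is purely illustrative (the five mini-documents are invented, and the real system trains on 50,000 to 10 million words): it builds a term-document count matrix, keeps the strongest latent dimensions via a truncated SVD, and then measures word similarity as the cosine between the reduced word vectors. Words that appear in similar documents, like "physician" and "doctor," end up close even though the strings differ.

```python
import numpy as np

# Toy Latent Semantic Analysis sketch (illustrative corpus, not from
# the article). Synonyms used in similar contexts get similar vectors.
docs = [
    "doctor treated patient",
    "physician treated patient",
    "doctor saw patient today",
    "students wrote essays history",
    "students revised essays class",
]

vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}

# Term-document count matrix: rows are words, columns are documents.
counts = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        counts[index[w], j] += 1

# Truncated SVD keeps only the k strongest latent dimensions.
U, s, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2
word_vecs = U[:, :k] * s[:k]  # each row is one word's latent vector

def similarity(a, b):
    """Cosine similarity between two words' latent vectors."""
    va, vb = word_vecs[index[a]], word_vecs[index[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

print(similarity("doctor", "physician"))  # high: used in the same contexts
print(similarity("doctor", "essays"))     # near zero: unrelated topics
```

This is why a student who writes "physician" where the textbook says "doctor" receives the same score: the two words occupy nearly the same position in the latent space.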
The program then evaluates essays in two primary ways. The first is for a teacher or professor to grade enough essays to provide a good statistical sample and then use the software to grade the remainder.
"It takes the combination of words in the student essay and computes its similarity to the combination of words in the comparison essays," Laham said. The student then receives the same grade as the human-graded essays to which it is most closely matched.
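This first grading mode can be sketched as a nearest-neighbor comparison. The example below is a simplification (a plain bag-of-words vector stands in for the LSA vectors the real system computes, and the essays and grades are invented): the new essay is vectorized, compared by cosine similarity to each human-graded essay, and assigned the grade of the closest match.

```python
from collections import Counter
import math

# Simplified sketch of similarity-based grading: bag-of-words vectors
# stand in for the LSA vectors described in the article.

def vectorize(text):
    """Word-count vector for a piece of text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def grade(essay, graded_essays):
    """Assign the grade of the most similar human-graded essay.

    graded_essays: list of (text, grade) pairs scored by a human.
    """
    vec = vectorize(essay)
    return max(graded_essays,
               key=lambda pair: cosine(vec, vectorize(pair[0])))[1]

# Invented example: two essays a human has already graded.
graded = [
    ("memory stores information for later recall", "A"),
    ("people remember things sometimes", "C"),
]
print(grade("the brain stores information for recall", graded))  # A
```

The real system makes the comparison in the latent semantic space rather than on raw word counts, which is what lets it match essays that express the same knowledge in different words.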
"The program has perfect consistency in grading -- an attribute that human graders almost never have," Laham said. "The system does not get bored, rushed, sleepy, impatient or forgetful." In one test, both the Intelligent Essay Assessor and faculty members graded essays from 500 psychology students at CU-Boulder. "The correlation between the two scores was very high -- it was the same correlation as if two humans were reading them," Landauer said.
The software only evaluates knowledge content and is not designed to grade stylistic considerations like grammar and spelling, researchers said. Existing programs already can do those functions.
The beginning of the 1990s marked the era of computers. Everywhere we look, we see computers. They have become an essential part of our everyday life. If the world's computer systems were turned off even for a short time, unimaginable disasters would occur. We can safely say that today's world is heading into the future under the tremendous influence of computers. These machines are key players in the game, but so is proper software (computer programs). It is the software that enables computers to perform certain tasks. Educational systems in developed countries realize the importance of computers in the future world and therefore emphasize their use in schools and second....