Counter-Strike 1.6 – EVH
In the sixth generation of video game consoles, Sega exited the hardware market, Nintendo fell behind, Sony solidified its lead in the industry, and Microsoft developed its first gaming console.
The generation opened with the launch of the Sega Dreamcast in 1998. As the first console with a built-in modem for Internet support and online play, it was initially successful, but sales and popularity soon began to fall. This decline has been attributed to Sega’s reputation, damaged by the relative failures of the 32X and Saturn, to widespread copyright infringement, and to the huge anticipation for the upcoming PlayStation 2. The Dreamcast’s library contains many titles considered creative and innovative, including the Shenmue series, which is regarded as a major step forward for 3D open-world gameplay and introduced the quick time event mechanic in its modern form. Production of the console was discontinued in most markets by 2002, and it was Sega’s final console before the company reorganized as a third-party game provider only, partnering primarily with its old rival Nintendo.
The second release of the generation was Sony’s PlayStation 2 (PS2). It featured DVD-based game discs with 4.7 GB capacity, improved processing and graphics capabilities over its predecessor (including progressive-scan component video output), built-in multiplayer support, and an available Ethernet adapter, which became built-in with the winter 2004 release of the “slimline” PS2 chassis. It could also play DVD movies and audio CDs, eliminating the need for a separate DVD player and making the PS2 a complete home entertainment console. The console was highly successful throughout the generation.
Nintendo followed a year later with the GameCube (code-named “Dolphin” during development), the company’s first optical disc-based console. While it matched the component-video capability of its contemporaries, the GameCube suffered in several ways compared to Sony’s PS2. First, the PS2’s high anticipation and one-year head start gained it player and developer attention before the GameCube’s release. As a result, the GameCube had less third-party backing and very few third-party exclusives, mostly from Nintendo-faithful studios such as Rare Ltd. and Midway Games. Cross-platform giants like Capcom, Electronic Arts, and Activision released most of their GameCube titles on other consoles as well, while Square Enix released high-demand PS2 exclusives. The GameCube’s game disc capacity was a third that of the PS2’s full-size DVDs, forcing a few games to be released on multiple discs and many titles to compromise on texture quality and other features, when other platforms had no such limitations on their versions. It had no backward compatibility with the now-obsolete cartridges of the N64. It was a dedicated game console, with an optical drive too small to hold a full-size CD or DVD. Lastly, the GameCube was hindered by a not-undeserved reputation as a “kid’s console”, owing to its initial launch color scheme and a lack of the mature-content games the market increasingly appeared to want. Though T- and M-rated titles did exist on the GameCube, almost all of its games were E-rated and mostly cartoon-styled in their art design.
Before the end of 2001, Microsoft Corporation, best known for its Windows operating system and its professional productivity software, entered the console market with the Xbox. Based on Intel’s Pentium III CPU, the console used a great deal of PC technology to leverage internal development, making PC games easily portable to the Xbox. To gain market share and maintain its toehold in the market, Microsoft reportedly sold the Xbox at a significant loss and concentrated on drawing profit from game development and publishing. Shortly after its release in November 2001, Bungie’s Halo: Combat Evolved became the driving force behind the Xbox’s success, and the Halo series would go on to become one of the most successful console shooter franchises of all time. By the end of the generation, the Xbox had drawn even with the GameCube in sales globally, but because nearly all of its sales were in North America, it pushed Nintendo into third place in the American market.
In 2001, Grand Theft Auto III was released, popularizing open world games through its non-linear style of gameplay. It was very successful both critically and commercially and is considered a major milestone in gaming. It also became yet another flash point in the debate over video game violence and adult content, with advocacy groups decrying the series’ glorification of prostitution, the mafia, and violence, including violence against first responders such as police and EMS.
Nintendo still dominated the handheld gaming market during this period. The Game Boy Advance, released in 2001, maintained Nintendo’s market position with a high-resolution, full-color LCD screen and 16-bit processor allowing ports of SNES games and simpler companions to N64 and GameCube games. Finnish cellphone maker Nokia entered the handheld scene with the N-Gage, but it failed to win a significant following.
In January 2013, Sony announced that the PlayStation 2 had been discontinued worldwide, ending the sixth generation.
Return of alternative controllers
One significant feature of this generation was various manufacturers’ renewed fondness for add-on peripheral controllers. While alternative controllers were not new (Nintendo supported several for the NES, and PC games have long supported driving wheels and flight joysticks), console games built around them became some of the biggest hits of the decade. Konami sold a soft-plastic mat version of its foot controls for its Dance Dance Revolution franchise in 1998. Sega came out with Samba de Amigo’s maraca controllers. Nintendo’s bongo controller worked with a few games in its Donkey Kong franchise. Publisher RedOctane introduced Guitar Hero and its distinctive guitar-shaped controllers for the PlayStation 2. Meanwhile, Sony developed the EyeToy peripheral for the PlayStation 2, a camera that could detect player movement. This concept would later be developed into whole-body tracking technologies such as Sony’s PlayStation Move and Microsoft’s Kinect.
As affordable broadband Internet connectivity spread, many publishers turned to online gaming as a way of innovating. Massively multiplayer online role-playing games (MMORPGs) featured significant titles for the PC market such as RuneScape, World of Warcraft, EverQuest, and Ultima Online. Historically, console-based MMORPGs have been few in number due to the lack of bundled Internet connectivity options for the platforms, which made it hard to establish a subscription community large enough to justify the development costs. The first significant console MMORPGs were Phantasy Star Online on the Sega Dreamcast (which had a built-in modem and an aftermarket Ethernet adapter), followed by Final Fantasy XI for the Sony PlayStation 2 (an aftermarket Ethernet adapter was shipped to support this game). Every major platform released since the Dreamcast has either been bundled with the ability to support an Internet connection or has had the option available as an aftermarket add-on. Microsoft’s Xbox also had its own online gaming service, Xbox Live. Xbox Live was a huge success and proved to be a driving force for the Xbox, with highly popular games such as Halo 2.
In the early 2000s, mobile games gained mainstream popularity in Japanese mobile phone culture, years before the United States or Europe. By 2003, a wide variety of mobile games were available on Japanese phones, ranging from puzzle games and virtual pet titles that used camera phone and fingerprint scanner technologies to 3D games with PlayStation-quality graphics. Older arcade-style games became particularly popular on mobile phones, which were an ideal platform for games designed for shorter play sessions. Namco began making attempts to introduce mobile gaming culture to Europe in 2003.
Mobile gaming interest rose when Nokia launched its N-Gage phone and handheld gaming platform in 2003. While about two million handsets were sold, the product line was not seen as a success and was withdrawn from Nokia’s lineup. Meanwhile, many game developers had noticed that more advanced phones had color screens and enough memory and processing power for reasonable gaming. Mobile phone gaming revenues passed $1 billion in 2003 and $5 billion in 2007, accounting for a quarter of all video game software revenues. More advanced phones came to market, such as Nokia’s N-Series smartphones in 2005 and Apple’s iPhone in 2007, which strongly added to the appeal of mobile phone gaming. In 2008, Nokia revived the N-Gage brand as a software platform, publishing a library of games for its top-end phones. At Apple’s App Store in 2008, more than half of all applications sold were iPhone games.
Due to the debut of app stores created by Apple and Google, plus the low retail price of downloadable phone apps, games available on smartphones increasingly rival the video game console market. Among the most successful mobile games of this period is Angry Birds, which, after its 2009 release, reached 2 million downloads within one year. Nintendo announced its intention to develop more games and content for mobile devices in the early 2010s, while Sega is also dedicating development resources toward creating more mobile games. Independent small developers are entering the market en masse, creating mobile games in the hope that they will gain popularity with smartphone gaming enthusiasts.
Since 2007, the fast-growing mobile market in African countries such as Nigeria and Kenya has also resulted in a growth in mobile game development. Local developers have taken advantage of the recent increase in mobile Internet connectivity in countries where broadband is rarely available and console games are costly, though locally developed applications have difficulty competing against the millions of Western applications available on the Google Play Store.
H. floresiensis, which lived from approximately 100,000 to 12,000 years before present, has been nicknamed the “hobbit” for its small size, possibly a result of insular dwarfism. H. floresiensis is intriguing both for its size and its age, being an example of a recent species of the genus Homo that exhibits derived traits not shared with modern humans. In other words, H. floresiensis shares a common ancestor with modern humans but split from the modern human lineage and followed a distinct evolutionary path. The main find was a skeleton believed to be a woman of about 30 years of age. Found in 2003, it has been dated to approximately 18,000 years old. The living woman was estimated to be one meter in height, with a brain volume of just 380 cm3 (considered small even for a chimpanzee and less than a third of the H. sapiens average of 1,400 cm3).
However, there is an ongoing debate over whether H. floresiensis is indeed a separate species. Some scientists hold that H. floresiensis was a modern H. sapiens with pathological dwarfism. This hypothesis is supported in part because some modern humans who live on Flores, the Indonesian island where the skeleton was found, are pygmies. This, coupled with pathological dwarfism, could indeed have resulted in a significantly diminutive human. The other major challenge to H. floresiensis as a separate species is that it was found with tools associated only with H. sapiens.
The hypothesis of pathological dwarfism, however, fails to explain additional anatomical features that are unlike those of modern humans (diseased or not) but much like those of ancient members of our genus. Aside from cranial features, these features include the form of bones in the wrist, forearm, shoulder, knees, and feet. Additionally, this hypothesis fails to explain the find of multiple examples of individuals with these same characteristics, indicating they were common to a large population, and not limited to one individual.
Main article: Archaic humans
H. sapiens (the adjective sapiens is Latin for “wise” or “intelligent”) have lived from about 250,000 years ago to the present. Between 400,000 years ago and the second interglacial period in the Middle Pleistocene, around 250,000 years ago, the trend in intra-cranial volume expansion and the elaboration of stone tool technologies developed, providing evidence for a transition from H. erectus to H. sapiens. The direct evidence suggests there was a migration of H. erectus out of Africa, then a further speciation of H. sapiens from H. erectus in Africa. A subsequent migration (both within and out of Africa) eventually replaced the earlier dispersed H. erectus. This migration and origin theory is usually referred to as the “recent single-origin hypothesis” or “out of Africa” theory. Current evidence does not preclude some multiregional evolution or some admixture of the migrant H. sapiens with existing Homo populations. This is a hotly debated area of paleoanthropology.
Current research has established that humans are genetically highly homogeneous; that is, the DNA of individuals is more alike than is usual for most species, which may have resulted from their relatively recent evolution or from a population bottleneck caused by cataclysmic natural events such as the Toba catastrophe. Distinctive genetic characteristics have arisen, however, primarily as the result of small groups of people moving into new environmental circumstances. These adapted traits are a very small component of the Homo sapiens genome, but include various characteristics such as skin color and nose form, in addition to internal characteristics such as the ability to breathe more efficiently at high altitudes.
H. sapiens idaltu, from Ethiopia, is an extinct sub-species from about 160,000 years ago who is argued to be the direct ancestor of all modern humans.
The use of tools has been interpreted as a sign of intelligence, and it has been theorized that tool use may have stimulated certain aspects of human evolution, especially the continued expansion of the human brain. Paleontology has yet to explain the expansion of this organ over millions of years despite its extreme demands in terms of energy consumption. The brain of a modern human consumes about 13 watts (260 kilocalories per day), a fifth of the body’s resting power consumption. Increased tool use would allow hunting for energy-rich meat products and would enable the processing of more energy-rich plant products. Researchers have suggested that early hominins were thus under evolutionary pressure to increase their capacity to create and use tools.
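As a quick sanity check on the power figure quoted above, the watt-to-kilocalorie-per-day conversion can be sketched as follows (a minimal illustration; the constants are standard unit conversions, not values from this article):

```python
# Convert a continuous power draw in watts to kilocalories per day,
# to check the quoted ~13 W / ~260 kcal per day figure for the brain.
SECONDS_PER_DAY = 86_400       # 24 h * 60 min * 60 s
JOULES_PER_KCAL = 4_184.0      # 1 kilocalorie = 4184 joules

def watts_to_kcal_per_day(watts: float) -> float:
    """Energy used per day, in kilocalories, at a constant power draw."""
    return watts * SECONDS_PER_DAY / JOULES_PER_KCAL

brain = watts_to_kcal_per_day(13.0)   # ~268 kcal/day, close to the quoted 260
body_resting = brain * 5              # the brain is about a fifth of resting use
print(round(brain), round(body_resting))
```

The small gap between the computed ~268 kcal/day and the quoted 260 reflects rounding in the source’s “about 13 watts” figure.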
Precisely when early humans started to use tools is difficult to determine, because the more primitive these tools are (for example, sharp-edged stones) the more difficult it is to decide whether they are natural objects or human artifacts. There is some evidence that the australopithecines (4 Ma) may have used broken bones as tools, but this is debated.
Many species make and use tools, but it is the human genus that dominates the making and use of more complex tools. The oldest known tools are the Oldowan stone tools from Ethiopia, 2.5–2.6 million years old. A Homo fossil found near some Oldowan tools was dated to 2.3 million years old, suggesting that Homo species may indeed have created and used these tools; this is a possibility, but it does not yet represent solid evidence. The third metacarpal styloid process enables the hand bone to lock into the wrist bones, allowing greater amounts of pressure to be applied to the wrist and hand from a grasping thumb and fingers. It gives humans the dexterity and strength to make and use complex tools. This unique anatomical feature separates humans from apes and other nonhuman primates, and it is not seen in human fossils older than 1.8 million years.
Bernard Wood noted that Paranthropus co-existed with the early Homo species in the area of the “Oldowan Industrial Complex” over roughly the same span of time. Although there is no direct evidence identifying Paranthropus as the tool makers, their anatomy lends indirect evidence of their capabilities in this area. Most paleoanthropologists agree that the early Homo species were indeed responsible for most of the Oldowan tools found, arguing that wherever Oldowan tools were found in association with hominin fossils, Homo was always present, but Paranthropus was not.
In 1994, Randall Susman used the anatomy of opposable thumbs as the basis for his argument that both the Homo and Paranthropus species were toolmakers. He compared the bones and muscles of human and chimpanzee thumbs, finding that humans have three muscles that chimpanzees lack. Humans also have thicker metacarpals with broader heads, allowing more precise grasping than the chimpanzee hand can perform. Susman posited that the modern anatomy of the human opposable thumb is an evolutionary response to the requirements associated with making and handling tools, and that both species were indeed toolmakers.
Stone tools are first attested around 2.6 million years ago, when H. habilis in Eastern Africa used so-called pebble tools: choppers made from round pebbles that had been split by simple strikes. This marks the beginning of the Paleolithic, or Old Stone Age; its end is taken to be the end of the last Ice Age, around 10,000 years ago. The Paleolithic is subdivided into the Lower Paleolithic (Early Stone Age), ending around 350,000–300,000 years ago; the Middle Paleolithic (Middle Stone Age), until 50,000–30,000 years ago; and the Upper Paleolithic (Late Stone Age), 50,000–10,000 years ago.
Archaeologists working in the Great Rift Valley in Kenya claim to have discovered the oldest known stone tools in the world. Dated to around 3.3 million years ago, the implements are some 700,000 years older than stone tools from Ethiopia that previously held this distinction.
The period from 700,000–300,000 years ago is also known as the Acheulean, when H. ergaster (or erectus) made large stone hand axes out of flint and quartzite, at first quite rough (Early Acheulean), later “retouched” by additional, more subtle strikes on the sides of the flakes. After 350,000 BP the more refined so-called Levallois technique was developed: a series of consecutive strikes by which scrapers, slicers (“racloirs”), needles, and flattened needles were made. Finally, after about 50,000 BP, ever more refined and specialized flint tools (knives, blades, skimmers) were made by the Neanderthals and the immigrant Cro-Magnons. In this period they also began to make tools out of bone.