Archive for March 13th, 2012

March 13, 2012

Reversible Computing

Reversible computing is a model of computation in which the computational process is, to some extent, reversible, i.e., time-invertible. Two major, closely related types of reversibility are of particular interest for this purpose: physical reversibility and logical reversibility. A process is said to be physically reversible if it results in no increase in physical entropy; it is isentropic.

Circuits built on this principle are also referred to as charge-recovery logic or adiabatic computing. Although in practice no nonstationary physical process can be exactly physically reversible or isentropic, there is no known limit to how closely we can approach perfect reversibility. The motivation for studying technologies that actually implement reversible computing is that they offer what is predicted to be the only potential way to improve the energy efficiency of computers beyond the fundamental von Neumann-Landauer limit.

March 13, 2012

Multi-core Processor

A multi-core processor is a single computing component with two or more independent processing units (called 'cores'), which are the units that read and execute program instructions. Multiple cores can execute multiple instructions at the same time, increasing overall speed for programs amenable to parallel computing.

Processors were originally developed with only one core. Beyond a certain point, multi-processor techniques become inefficient, largely because of congestion in supplying instructions and data to the many cores. That threshold lies roughly in the range of several tens of cores; above it, network-on-chip technology becomes advantageous. Tilera processors place a switch in each core to route data through an on-chip mesh network, reducing data congestion and enabling their core count to scale up to 100 cores.

March 13, 2012

Embarrassingly Parallel

In parallel computing, an embarrassingly parallel workload is one for which little or no effort is required to separate the problem into a number of parallel tasks. This is often the case when no dependency (or communication) exists between the parallel tasks.

Embarrassingly parallel workloads are easy to run on server farms that lack the specialized infrastructure of a true supercomputer cluster, which makes them well suited to large, internet-based distributed platforms such as BOINC. A common example arises in graphics processing units (GPUs), where tasks such as 3D projection allow each pixel on the screen to be rendered independently.
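The per-pixel independence can be sketched in a few lines. The toy shading function and image size below are invented for illustration; the point is that each task depends only on its own input, so the work can be split across workers with no coordination and the result is identical to the serial version:

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 8, 4  # toy image size, illustrative only

def shade(pixel):
    """Compute one pixel's value; depends only on that pixel's coordinates."""
    x, y = pixel
    return (x * 31 + y * 17) % 256

pixels = [(x, y) for y in range(HEIGHT) for x in range(WIDTH)]

# Serial version.
serial = list(map(shade, pixels))

# Embarrassingly parallel version: no dependency between tasks, so they can
# be mapped across workers in any order, with no inter-task communication.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(shade, pixels))

assert serial == parallel  # same result regardless of scheduling
```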

March 13, 2012

Amdahl’s Law

Amdahl’s law, named after computer architect Gene Amdahl, gives the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup from using multiple processors: the speedup of a program running on multiple processors is limited by the time needed for its sequential fraction. For example, if a program needs 20 hours on a single processor core, and a particular 1-hour portion cannot be parallelized while the remaining 19 hours (95%) can be, then no matter how many processors are devoted to the parallelized execution, the minimum execution time cannot fall below that critical 1 hour. Hence the speedup is limited to at most 20×.
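The arithmetic above generalizes to the standard formula S(N) = 1 / ((1 − p) + p/N), where p is the parallelizable fraction and N the number of processors. A small sketch of the 20-hour example:

```python
def amdahl_speedup(p, n):
    """Theoretical speedup under Amdahl's law with parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# The 20-hour example: 95% parallelizable, 1 hour strictly sequential.
p = 0.95
# As n grows, the speedup approaches, but never exceeds, 1 / (1 - p) = 20x.
for n in (1, 2, 10, 100, 10000):
    print(n, round(amdahl_speedup(p, n), 2))
```

Printing the table makes the diminishing returns visible: each doubling of the processor count buys a smaller fraction of the remaining distance to the 20× ceiling.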

Amdahl’s law is often conflated with the law of diminishing returns (the tendency for continued application of effort toward a goal to decline in effectiveness after a certain level of result has been achieved). Amdahl’s law does represent the law of diminishing returns if you consider the return from adding more processors to a machine running a fixed-size computation that uses all available processors to capacity: each new processor adds less usable power than the previous one, and each doubling of the processor count yields a smaller gain in speedup as total throughput approaches the limit. This analysis neglects other potential bottlenecks, such as memory bandwidth and I/O bandwidth, that may not scale with the number of processors; taking such bottlenecks into account would only further demonstrate the diminishing returns of adding processors alone.

March 13, 2012

CPU Power Dissipation

CPU power dissipation is the process by which central processing units (CPUs) consume electrical energy and dissipate it as heat, both through the action of the switching devices in the CPU (such as transistors or vacuum tubes) and through resistive losses in the electronic circuits. Designing CPUs that perform their tasks efficiently without overheating is a major consideration for nearly all CPU manufacturers to date.

Some CPU implementations use very little power: the CPUs in mobile phones often use just a few hundred milliwatts, and some microcontrollers used in embedded systems may use only a few milliwatts. In comparison, CPUs in general-purpose personal computers, such as desktops and laptops, dissipate significantly more power because of their higher complexity and speed, consuming on the order of a few watts to hundreds of watts. Historically, early CPUs implemented with vacuum tubes consumed power on the order of many kilowatts.
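Most of the power in modern CMOS CPUs is dynamic switching power, commonly approximated as P ≈ α·C·V²·f (activity factor, switched capacitance, supply voltage, clock frequency). The formula is standard, but the numeric values below are illustrative assumptions, not measurements of any real chip:

```python
def dynamic_power(activity, capacitance_farads, voltage_volts, frequency_hz):
    """Approximate CMOS dynamic switching power: P = a * C * V^2 * f (watts)."""
    return activity * capacitance_farads * voltage_volts ** 2 * frequency_hz

# Illustrative values only (not taken from any processor's datasheet):
p = dynamic_power(activity=0.2, capacitance_farads=1e-9,
                  voltage_volts=1.2, frequency_hz=3e9)

# The quadratic dependence on voltage is why lowering the supply voltage is
# such an effective power-saving lever: dropping V from 1.2 V to 1.0 V at the
# same frequency cuts dynamic power by roughly 30%.
p_low_v = dynamic_power(0.2, 1e-9, 1.0, 3e9)
```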

March 13, 2012

Hello World

A 'Hello, world' program is a computer program that outputs 'Hello, world' on a display device. Because it is typically one of the simplest programs possible in most programming languages, it is by tradition often used to illustrate to beginners the most basic syntax of a programming language, or to verify that a language or system is operating correctly (a sanity test). On a device that does not display text, a simple program that produces a signal, such as turning on an LED, is often substituted for 'Hello world' as the introductory program. It is also used by computer hackers as a proof of concept that arbitrary code can be executed through an exploit where the system designers did not intend code to be executed, for example on Sony's PlayStation Portable. This is the first step in running homemade content ('homebrew') on such a device.

While small test programs have existed since the development of programmable computers, the tradition of using the phrase 'Hello, world!' as a test message was influenced by an example program in the seminal book 'The C Programming Language.' The example program in that book prints 'hello, world' (without capital letters or exclamation mark), and was inherited from Brian Kernighan's 1974 Bell Laboratories internal memorandum, 'Programming in C: A Tutorial.'
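For reference, the equivalent sanity test is a one-liner in most modern languages; here it is in Python (the K&R original was, of course, in C):

```python
def hello_message():
    """Return the classic greeting; printing it is the whole program."""
    return 'Hello, world!'

# If this line prints, the interpreter and toolchain are working.
print(hello_message())
```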

March 13, 2012

Interpersonal Perception

Interpersonal perception is an area of research in social psychology which examines the beliefs that interacting people have about each other. The area differs from social cognition and person perception by being interpersonal rather than intrapersonal, and thus requires the interaction of at least two actual people. People perceive extraversion and conscientiousness in strangers more accurately than the other personality domains; on these domains, a 5-second interaction reveals as much as a 15-minute one, and video reveals more than audio alone.

Viewing people’s personal websites or online profiles (as on Facebook or a dating website) can make strangers as knowledgeable about a person’s conscientiousness and open-mindedness as long-term friends are. The question of whether social-networking sites lead to accurate first impressions inspired Sam Gosling of the University of Texas at Austin and David Evans, formerly of Classmates.com, to launch an ambitious project to measure the accuracy of first impressions worldwide (YouJustGetMe.com).

March 13, 2012

Thin-slicing

Thin-slicing is a term used in psychology and philosophy for the ability to find patterns in events based only on ‘thin slices,’ or narrow windows, of experience. The term seems to have been coined in 1992 by Nalini Ambady and Robert Rosenthal in a paper in the ‘Psychological Bulletin.’ Many studies have indicated that brief observations can be used to assess outcomes at levels better than chance. Comparing observations of less than five minutes with observations of more than five minutes showed no significant difference in accuracy, implying that judgments formed within the first few minutes change little with further observation.

One of the first series of studies, conducted by James Bugental and his colleagues, showed that parents’ expectancies, identified from brief clips of their tone of voice, are related to their children’s behavior problems: the tone of a mother of a typical child differed significantly from that of a mother of a child with behavior problems. These findings support the idea that people really can make meaningful judgments from brief observations. Research in classrooms has shown that judges can distinguish biased teachers from unbiased teachers, along with differential teacher expectancies, simply from brief clips of teachers’ behavior. Likewise, courtroom research has shown that from brief excerpts of judges’ instructions to jurors, raters could predict the judges’ expectations for the trial.

March 13, 2012

Wired Glove

A wired glove (sometimes called a dataglove or cyberglove) is an input device for human–computer interaction worn like a glove. Various sensor technologies are used to capture physical data such as bending of fingers. Often a motion tracker, such as a magnetic tracking device or inertial tracking device, is attached to capture the global position/rotation data of the glove. These movements are then interpreted by the software that accompanies the glove, so any one movement can mean any number of things.

Gestures can then be categorized into useful information, such as recognizing sign language or other symbolic input. Expensive high-end wired gloves can also provide haptic feedback, a simulation of the sense of touch, which allows a wired glove to be used as an output device as well. Traditionally, wired gloves have been available only at great cost, with the finger-bend sensors and the tracking device having to be bought separately. Wired gloves are often used in virtual reality environments.
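As a rough sketch of how glove software might categorize sensor readings into gestures: assume five normalized flex readings (0 = finger straight, 1 = fully bent), one per finger from thumb to pinky. The 0.5 threshold, the pose names, and the sensor model are all invented for illustration, not taken from any real glove's SDK:

```python
def classify_pose(flex):
    """Map five normalized finger-bend readings (thumb..pinky) to a named pose.

    The bend threshold and pose vocabulary are illustrative assumptions.
    """
    bent = [reading > 0.5 for reading in flex]
    if all(bent):
        return 'fist'
    if not any(bent):
        return 'open hand'
    if bent == [True, False, False, True, True]:
        return 'peace sign'  # index and middle extended, others curled
    return 'unknown'

pose = classify_pose([0.9, 0.2, 0.1, 0.8, 0.9])  # 'peace sign'
```

A real driver would also smooth the readings over time and fold in the tracker's position data before classification.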

March 13, 2012

Power Glove

The Power Glove is a controller accessory for the Nintendo Entertainment System, and the first peripheral interface controller to recreate human hand movements on a television or computer screen in real time. The Power Glove was not popular and was criticized for its imprecise and difficult-to-use controls. The Power Glove was originally released in 1989. Though it was an officially licensed product, Nintendo was not involved in the design or release of this accessory. Rather, it was designed by Grant Goddard and Samuel Cooper Davis for Abrams Gentile Entertainment (AGE), made by Mattel in the United States and PAX in Japan.

Additional development was accomplished through the efforts of Thomas G. Zimmerman and Jaron Lanier, a virtual reality pioneer responsible for co-developing and commercializing the DataGlove who had made a failed attempt at a similar design for Nintendo earlier. The Power Glove and DataGlove were based on Zimmerman’s instrumented glove. Zimmerman built the first prototype that demonstrated finger flex measurement and hand position tracking using a pair of ultrasonic transmitters. His original prototype used optical flex sensors to measure finger bending which were replaced with less expensive carbon-based flex sensors by the AGE team.

March 13, 2012

Virtual Boy

The Virtual Boy was a video game console developed and manufactured by Nintendo. It was marketed as the first video game console capable of displaying ‘true 3D graphics’ out of the box. Whereas most video games use monocular cues to achieve the illusion of three dimensions on a two-dimensional screen, the Virtual Boy created an illusion of depth through the effect known as parallax.

In a manner similar to using a head-mounted display, the user looks into an eyepiece made of neoprene on the front of the machine, and then an eyeglass-style projector allows viewing of the monochromatic (in this case, red) image. It was released in 1995 in Japan and North America at a price of around US$180. It met with a lukewarm reception that was unaffected by continued price drops. Nintendo discontinued it the following year.
